Jan 19 13:03:24.566790 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 19 09:38:41 -00 2026 Jan 19 13:03:24.566846 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=b524184fc941b6143829d4e80d1854878d9df1f2d76dbdcda2c58f1abfc5daa1 Jan 19 13:03:24.566861 kernel: BIOS-provided physical RAM map: Jan 19 13:03:24.566872 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 19 13:03:24.566896 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 19 13:03:24.566907 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 19 13:03:24.566919 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jan 19 13:03:24.566952 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jan 19 13:03:24.566966 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 19 13:03:24.566976 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 19 13:03:24.566987 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 19 13:03:24.566998 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 19 13:03:24.567009 kernel: NX (Execute Disable) protection: active Jan 19 13:03:24.567034 kernel: APIC: Static calls initialized Jan 19 13:03:24.567057 kernel: SMBIOS 2.8 present. Jan 19 13:03:24.567071 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.16.0-3.module_el8.7.0+3346+68867adb 04/01/2014 Jan 19 13:03:24.567112 kernel: DMI: Memory slots populated: 1/1 Jan 19 13:03:24.567140 kernel: Hypervisor detected: KVM Jan 19 13:03:24.567153 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 19 13:03:24.567164 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 19 13:03:24.567176 kernel: kvm-clock: using sched offset of 5533367247 cycles Jan 19 13:03:24.567189 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 19 13:03:24.567201 kernel: tsc: Detected 2799.998 MHz processor Jan 19 13:03:24.567214 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 19 13:03:24.567226 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 19 13:03:24.567252 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 19 13:03:24.567265 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 19 13:03:24.567277 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 19 13:03:24.567289 kernel: Using GB pages for direct mapping Jan 19 13:03:24.567301 kernel: ACPI: Early table checksum verification disabled Jan 19 13:03:24.567313 kernel: ACPI: RSDP 0x00000000000F59E0 000014 (v00 BOCHS ) Jan 19 13:03:24.567325 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 13:03:24.567337 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 13:03:24.567363 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 13:03:24.567375 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jan 19 13:03:24.567388 kernel: ACPI: APIC 
0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 13:03:24.567400 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 13:03:24.567412 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 13:03:24.567424 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 19 13:03:24.567437 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jan 19 13:03:24.567471 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jan 19 13:03:24.567484 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jan 19 13:03:24.567497 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jan 19 13:03:24.567509 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jan 19 13:03:24.567534 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jan 19 13:03:24.567547 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jan 19 13:03:24.567559 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 19 13:03:24.567572 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 19 13:03:24.567585 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jan 19 13:03:24.567597 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Jan 19 13:03:24.567610 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Jan 19 13:03:24.567635 kernel: Zone ranges: Jan 19 13:03:24.567648 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 19 13:03:24.567660 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jan 19 13:03:24.567688 kernel: Normal empty Jan 19 13:03:24.567701 kernel: Device empty Jan 19 13:03:24.567713 kernel: Movable zone start for each node Jan 19 13:03:24.567725 kernel: Early memory node ranges Jan 19 13:03:24.567738 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 19 13:03:24.567765 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jan 19 13:03:24.567830 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jan 19 13:03:24.567847 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 19 13:03:24.567877 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 19 13:03:24.567891 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jan 19 13:03:24.567904 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 19 13:03:24.567932 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 19 13:03:24.567960 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 19 13:03:24.567974 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 19 13:03:24.567986 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 19 13:03:24.567999 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 19 13:03:24.568011 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 19 13:03:24.568024 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 19 13:03:24.568036 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 19 13:03:24.568073 kernel: TSC deadline timer available Jan 19 13:03:24.568088 kernel: CPU topo: Max. logical packages: 16 Jan 19 13:03:24.568100 kernel: CPU topo: Max. logical dies: 16 Jan 19 13:03:24.568113 kernel: CPU topo: Max. dies per package: 1 Jan 19 13:03:24.568125 kernel: CPU topo: Max. 
threads per core: 1 Jan 19 13:03:24.568137 kernel: CPU topo: Num. cores per package: 1 Jan 19 13:03:24.568149 kernel: CPU topo: Num. threads per package: 1 Jan 19 13:03:24.568161 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Jan 19 13:03:24.568188 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 19 13:03:24.568201 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 19 13:03:24.568214 kernel: Booting paravirtualized kernel on KVM Jan 19 13:03:24.568226 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 19 13:03:24.568239 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 19 13:03:24.568251 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Jan 19 13:03:24.568264 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Jan 19 13:03:24.568289 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 19 13:03:24.568302 kernel: kvm-guest: PV spinlocks enabled Jan 19 13:03:24.568315 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 19 13:03:24.568328 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=b524184fc941b6143829d4e80d1854878d9df1f2d76dbdcda2c58f1abfc5daa1 Jan 19 13:03:24.568348 kernel: random: crng init done Jan 19 13:03:24.568360 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 19 13:03:24.568382 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 19 13:03:24.568425 kernel: Fallback order for Node 0: 0 Jan 19 13:03:24.568448 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Jan 19 13:03:24.568468 kernel: Policy zone: DMA32 Jan 19 13:03:24.568490 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 19 13:03:24.568512 kernel: software IO TLB: area num 16. Jan 19 13:03:24.568532 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 19 13:03:24.568559 kernel: Kernel/User page tables isolation: enabled Jan 19 13:03:24.568598 kernel: ftrace: allocating 40128 entries in 157 pages Jan 19 13:03:24.568620 kernel: ftrace: allocated 157 pages with 5 groups Jan 19 13:03:24.568640 kernel: Dynamic Preempt: voluntary Jan 19 13:03:24.568662 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 19 13:03:24.574154 kernel: rcu: RCU event tracing is enabled. Jan 19 13:03:24.574170 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 19 13:03:24.574184 kernel: Trampoline variant of Tasks RCU enabled. Jan 19 13:03:24.574241 kernel: Rude variant of Tasks RCU enabled. Jan 19 13:03:24.574257 kernel: Tracing variant of Tasks RCU enabled. Jan 19 13:03:24.574270 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 19 13:03:24.574284 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 19 13:03:24.574297 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 19 13:03:24.574309 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Jan 19 13:03:24.574322 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 19 13:03:24.574351 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jan 19 13:03:24.574366 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 19 13:03:24.574407 kernel: Console: colour VGA+ 80x25 Jan 19 13:03:24.574433 kernel: printk: legacy console [tty0] enabled Jan 19 13:03:24.574447 kernel: printk: legacy console [ttyS0] enabled Jan 19 13:03:24.574477 kernel: ACPI: Core revision 20240827 Jan 19 13:03:24.574492 kernel: APIC: Switch to symmetric I/O mode setup Jan 19 13:03:24.574505 kernel: x2apic enabled Jan 19 13:03:24.574519 kernel: APIC: Switched APIC routing to: physical x2apic Jan 19 13:03:24.574532 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 19 13:03:24.574560 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Jan 19 13:03:24.574573 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 19 13:03:24.574587 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 19 13:03:24.574612 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 19 13:03:24.574625 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 19 13:03:24.574638 kernel: Spectre V2 : Mitigation: Retpolines Jan 19 13:03:24.574651 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 19 13:03:24.574687 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 19 13:03:24.574703 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 19 13:03:24.574716 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 19 13:03:24.574729 kernel: MDS: Mitigation: Clear CPU buffers Jan 19 13:03:24.574741 kernel: MMIO Stale Data: Unknown: No mitigations Jan 19 13:03:24.574754 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 19 13:03:24.574767 kernel: active return thunk: its_return_thunk Jan 19 13:03:24.574797 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 19 13:03:24.574812 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 19 13:03:24.574824 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 19 13:03:24.574837 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 19 13:03:24.574850 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 19 13:03:24.574863 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 19 13:03:24.574875 kernel: Freeing SMP alternatives memory: 32K Jan 19 13:03:24.574888 kernel: pid_max: default: 32768 minimum: 301 Jan 19 13:03:24.574900 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 19 13:03:24.574913 kernel: landlock: Up and running. Jan 19 13:03:24.574940 kernel: SELinux: Initializing. Jan 19 13:03:24.574953 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 19 13:03:24.574966 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 19 13:03:24.574979 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jan 19 13:03:24.574992 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. 
Jan 19 13:03:24.575006 kernel: signal: max sigframe size: 1776 Jan 19 13:03:24.575038 kernel: rcu: Hierarchical SRCU implementation. Jan 19 13:03:24.575064 kernel: rcu: Max phase no-delay instances is 400. Jan 19 13:03:24.575078 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Jan 19 13:03:24.575107 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 19 13:03:24.575120 kernel: smp: Bringing up secondary CPUs ... Jan 19 13:03:24.575134 kernel: smpboot: x86: Booting SMP configuration: Jan 19 13:03:24.575147 kernel: .... node #0, CPUs: #1 Jan 19 13:03:24.575160 kernel: smp: Brought up 1 node, 2 CPUs Jan 19 13:03:24.575173 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Jan 19 13:03:24.575197 kernel: Memory: 1912056K/2096616K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15540K init, 2496K bss, 178544K reserved, 0K cma-reserved) Jan 19 13:03:24.575224 kernel: devtmpfs: initialized Jan 19 13:03:24.575238 kernel: x86/mm: Memory block size: 128MB Jan 19 13:03:24.575251 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 19 13:03:24.575265 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 19 13:03:24.575278 kernel: pinctrl core: initialized pinctrl subsystem Jan 19 13:03:24.575291 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 19 13:03:24.575304 kernel: audit: initializing netlink subsys (disabled) Jan 19 13:03:24.575331 kernel: audit: type=2000 audit(1768827800.454:1): state=initialized audit_enabled=0 res=1 Jan 19 13:03:24.575345 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 19 13:03:24.575358 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 19 13:03:24.575371 kernel: cpuidle: using governor menu Jan 19 13:03:24.575384 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 19 13:03:24.575397 kernel: dca service started, version 1.12.1 Jan 19 13:03:24.575429 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 19 13:03:24.575457 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 19 13:03:24.575471 kernel: PCI: Using configuration type 1 for base access Jan 19 13:03:24.575485 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 19 13:03:24.575498 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 19 13:03:24.575511 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 19 13:03:24.575525 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 19 13:03:24.575538 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 19 13:03:24.575564 kernel: ACPI: Added _OSI(Module Device) Jan 19 13:03:24.575578 kernel: ACPI: Added _OSI(Processor Device) Jan 19 13:03:24.575591 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 19 13:03:24.575604 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 19 13:03:24.575617 kernel: ACPI: Interpreter enabled Jan 19 13:03:24.575631 kernel: ACPI: PM: (supports S0 S5) Jan 19 13:03:24.575644 kernel: ACPI: Using IOAPIC for interrupt routing Jan 19 13:03:24.575684 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 19 13:03:24.575698 kernel: PCI: Using E820 reservations for host bridge windows Jan 19 13:03:24.575712 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 19 13:03:24.575725 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 19 13:03:24.576125 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 19 13:03:24.576367 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 19 13:03:24.576629 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 19 13:03:24.576649 kernel: PCI host bridge to bus 0000:00 Jan 19 13:03:24.576917 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 19 13:03:24.577143 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 19 13:03:24.577359 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 19 13:03:24.577571 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 19 13:03:24.578924 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 19 13:03:24.579153 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jan 19 13:03:24.579364 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 19 13:03:24.579679 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 19 13:03:24.579925 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Jan 19 13:03:24.580188 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Jan 19 13:03:24.580519 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Jan 19 13:03:24.580912 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Jan 19 13:03:24.581153 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 19 13:03:24.581410 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 19 13:03:24.581639 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Jan 19 13:03:24.581914 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 19 13:03:24.582168 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] Jan 19 13:03:24.582394 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 19 13:03:24.582616 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 19 13:03:24.582998 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 19 13:03:24.583285 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] 
Jan 19 13:03:24.583570 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 19 13:03:24.583820 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 19 13:03:24.584057 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 19 13:03:24.584321 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 19 13:03:24.584592 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Jan 19 13:03:24.584878 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 19 13:03:24.585150 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 19 13:03:24.585404 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 19 13:03:24.585651 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 19 13:03:24.585901 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Jan 19 13:03:24.586140 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 19 13:03:24.586388 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 19 13:03:24.586614 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 19 13:03:24.586923 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 19 13:03:24.587164 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Jan 19 13:03:24.587389 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 19 13:03:24.587614 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 19 13:03:24.587880 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 19 13:03:24.588141 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 19 13:03:24.588367 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Jan 19 13:03:24.588591 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 19 13:03:24.588849 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 19 13:03:24.589132 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 19 13:03:24.589422 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 19 13:03:24.589649 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Jan 19 13:03:24.589895 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 19 13:03:24.590181 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 19 13:03:24.590408 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 19 13:03:24.590712 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 19 13:03:24.590948 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Jan 19 13:03:24.591187 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 19 13:03:24.591436 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 19 13:03:24.591659 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 19 13:03:24.591945 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 19 13:03:24.592212 kernel: pci 0000:00:03.0: BAR 0 [io 0xd0c0-0xd0df] Jan 19 13:03:24.592438 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Jan 19 13:03:24.592693 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Jan 19 13:03:24.592948 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Jan 19 13:03:24.593227 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 19 13:03:24.593472 kernel: pci 0000:00:04.0: 
BAR 0 [io 0xd000-0xd07f] Jan 19 13:03:24.593737 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff] Jan 19 13:03:24.593965 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Jan 19 13:03:24.594221 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 19 13:03:24.594492 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 19 13:03:24.594801 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 19 13:03:24.595099 kernel: pci 0000:00:1f.2: BAR 4 [io 0xd0e0-0xd0ff] Jan 19 13:03:24.595326 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Jan 19 13:03:24.595606 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 19 13:03:24.595877 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 19 13:03:24.596175 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jan 19 13:03:24.596409 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Jan 19 13:03:24.596661 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 19 13:03:24.596945 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff] Jan 19 13:03:24.597189 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 19 13:03:24.597419 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 19 13:03:24.597646 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 19 13:03:24.597905 kernel: pci_bus 0000:02: extended config space not accessible Jan 19 13:03:24.598190 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Jan 19 13:03:24.598440 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Jan 19 13:03:24.598719 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 19 13:03:24.599013 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Jan 19 13:03:24.599256 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Jan 19 13:03:24.599504 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 19 13:03:24.599776 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 19 13:03:24.600011 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Jan 19 13:03:24.600249 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 19 13:03:24.600497 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 19 13:03:24.600751 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 19 13:03:24.601014 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 19 13:03:24.601264 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 19 13:03:24.601489 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 19 13:03:24.601510 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 19 13:03:24.601562 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 19 13:03:24.601578 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 19 13:03:24.601608 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 19 13:03:24.601623 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 19 13:03:24.601636 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 19 13:03:24.601649 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 19 13:03:24.601677 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 19 13:03:24.601692 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 19 13:03:24.601706 kernel: ACPI: PCI: Interrupt link GSIB configured for 
IRQ 17 Jan 19 13:03:24.601735 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 19 13:03:24.601749 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 19 13:03:24.601763 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 19 13:03:24.601776 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 19 13:03:24.601789 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 19 13:03:24.601803 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 19 13:03:24.601816 kernel: iommu: Default domain type: Translated Jan 19 13:03:24.601845 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 19 13:03:24.601859 kernel: PCI: Using ACPI for IRQ routing Jan 19 13:03:24.601873 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 19 13:03:24.601886 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 19 13:03:24.601899 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jan 19 13:03:24.602143 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 19 13:03:24.602368 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 19 13:03:24.602611 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 19 13:03:24.602631 kernel: vgaarb: loaded Jan 19 13:03:24.602645 kernel: clocksource: Switched to clocksource kvm-clock Jan 19 13:03:24.602658 kernel: VFS: Disk quotas dquot_6.6.0 Jan 19 13:03:24.602687 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 19 13:03:24.602701 kernel: pnp: PnP ACPI init Jan 19 13:03:24.602965 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 19 13:03:24.603003 kernel: pnp: PnP ACPI: found 5 devices Jan 19 13:03:24.603017 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 19 13:03:24.603031 kernel: NET: Registered PF_INET protocol family Jan 19 13:03:24.603044 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 19 13:03:24.603069 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 19 13:03:24.603083 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 19 13:03:24.603096 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 19 13:03:24.603126 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 19 13:03:24.603140 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 19 13:03:24.603153 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 19 13:03:24.603167 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 19 13:03:24.603181 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 19 13:03:24.603194 kernel: NET: Registered PF_XDP protocol family Jan 19 13:03:24.603419 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 19 13:03:24.603687 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 19 13:03:24.603919 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 19 13:03:24.604164 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 19 13:03:24.604420 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 19 13:03:24.604679 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 19 13:03:24.604910 kernel: 
pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 19 13:03:24.605170 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff]: assigned Jan 19 13:03:24.605395 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff]: assigned Jan 19 13:03:24.605618 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff]: assigned Jan 19 13:03:24.605858 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff]: assigned Jan 19 13:03:24.606095 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff]: assigned Jan 19 13:03:24.606320 kernel: pci 0000:00:02.6: bridge window [io 0x6000-0x6fff]: assigned Jan 19 13:03:24.606576 kernel: pci 0000:00:02.7: bridge window [io 0x7000-0x7fff]: assigned Jan 19 13:03:24.606913 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 19 13:03:24.607228 kernel: pci 0000:01:00.0: bridge window [io 0xc000-0xcfff] Jan 19 13:03:24.607485 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 19 13:03:24.607797 kernel: pci 0000:01:00.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 19 13:03:24.608028 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 19 13:03:24.608274 kernel: pci 0000:00:02.0: bridge window [io 0xc000-0xcfff] Jan 19 13:03:24.608500 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 19 13:03:24.608780 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 19 13:03:24.609008 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 19 13:03:24.609246 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x1fff] Jan 19 13:03:24.609471 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 19 13:03:24.609714 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 19 13:03:24.609963 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 19 13:03:24.610203 kernel: pci 0000:00:02.2: bridge window [io 0x2000-0x2fff] Jan 19 13:03:24.610427 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 19 13:03:24.610650 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 19 13:03:24.610904 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 19 13:03:24.611143 kernel: pci 0000:00:02.3: bridge window [io 0x3000-0x3fff] Jan 19 13:03:24.611390 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 19 13:03:24.611618 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 19 13:03:24.611863 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 19 13:03:24.612106 kernel: pci 0000:00:02.4: bridge window [io 0x4000-0x4fff] Jan 19 13:03:24.612331 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 19 13:03:24.612607 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 19 13:03:24.612903 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 19 13:03:24.613144 kernel: pci 0000:00:02.5: bridge window [io 0x5000-0x5fff] Jan 19 13:03:24.613370 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 19 13:03:24.613592 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 19 13:03:24.613834 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 19 13:03:24.614071 kernel: pci 0000:00:02.6: bridge window [io 0x6000-0x6fff] Jan 19 13:03:24.614317 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 19 13:03:24.614547 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 19 13:03:24.614801 kernel: pci 0000:00:02.7: PCI 
bridge to [bus 09] Jan 19 13:03:24.615026 kernel: pci 0000:00:02.7: bridge window [io 0x7000-0x7fff] Jan 19 13:03:24.615262 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 19 13:03:24.615485 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 19 13:03:24.615771 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 19 13:03:24.616012 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 19 13:03:24.616238 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 19 13:03:24.616445 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 19 13:03:24.616653 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 19 13:03:24.616913 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jan 19 13:03:24.617173 kernel: pci_bus 0000:01: resource 0 [io 0xc000-0xcfff] Jan 19 13:03:24.617408 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jan 19 13:03:24.617622 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 19 13:03:24.617866 kernel: pci_bus 0000:02: resource 0 [io 0xc000-0xcfff] Jan 19 13:03:24.618101 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 19 13:03:24.618323 kernel: pci_bus 0000:02: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 19 13:03:24.618549 kernel: pci_bus 0000:03: resource 0 [io 0x1000-0x1fff] Jan 19 13:03:24.618804 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 19 13:03:24.619020 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 19 13:03:24.619270 kernel: pci_bus 0000:04: resource 0 [io 0x2000-0x2fff] Jan 19 13:03:24.619523 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 19 13:03:24.619779 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 19 13:03:24.620026 kernel: pci_bus 0000:05: resource 0 [io 0x3000-0x3fff] Jan 19 13:03:24.620274 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 19 13:03:24.620487 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 19 13:03:24.620741 kernel: pci_bus 0000:06: resource 0 [io 0x4000-0x4fff] Jan 19 13:03:24.620992 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 19 13:03:24.621221 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 19 13:03:24.621472 kernel: pci_bus 0000:07: resource 0 [io 0x5000-0x5fff] Jan 19 13:03:24.621747 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 19 13:03:24.621973 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 19 13:03:24.622239 kernel: pci_bus 0000:08: resource 0 [io 0x6000-0x6fff] Jan 19 13:03:24.622473 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 19 13:03:24.622695 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 19 13:03:24.622976 kernel: pci_bus 0000:09: resource 0 [io 0x7000-0x7fff] Jan 19 13:03:24.623224 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 19 13:03:24.623447 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 19 13:03:24.623467 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 19 13:03:24.623481 kernel: PCI: CLS 0 bytes, default 64 Jan 19 13:03:24.623512 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 19 13:03:24.623527 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 19 
13:03:24.623541 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 19 13:03:24.623566 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 19 13:03:24.623579 kernel: Initialise system trusted keyrings Jan 19 13:03:24.623592 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 19 13:03:24.623605 kernel: Key type asymmetric registered Jan 19 13:03:24.623652 kernel: Asymmetric key parser 'x509' registered Jan 19 13:03:24.623666 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 19 13:03:24.623704 kernel: io scheduler mq-deadline registered Jan 19 13:03:24.623719 kernel: io scheduler kyber registered Jan 19 13:03:24.623732 kernel: io scheduler bfq registered Jan 19 13:03:24.623996 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 19 13:03:24.624236 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 19 13:03:24.624504 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 19 13:03:24.624766 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 19 13:03:24.625033 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 19 13:03:24.625271 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 19 13:03:24.625516 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 19 13:03:24.625860 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 19 13:03:24.626103 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 19 13:03:24.626341 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 19 13:03:24.626595 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 19 13:03:24.626839 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 19 13:03:24.627099 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 19 13:03:24.627343 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 19 13:03:24.627604 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 19 13:03:24.627865 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 19 13:03:24.628104 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 19 13:03:24.628355 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 19 13:03:24.628579 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 19 13:03:24.628884 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 19 13:03:24.629125 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 19 13:03:24.629351 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 19 13:03:24.629621 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 19 13:03:24.629867 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 19 13:03:24.629888 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 19 13:03:24.629903 
kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 19 13:03:24.629917 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 19 13:03:24.629931 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 19 13:03:24.629963 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 19 13:03:24.629979 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 19 13:03:24.629993 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 19 13:03:24.630007 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 19 13:03:24.630021 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 19 13:03:24.630306 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 19 13:03:24.630527 kernel: rtc_cmos 00:03: registered as rtc0 Jan 19 13:03:24.630821 kernel: rtc_cmos 00:03: setting system clock to 2026-01-19T13:03:22 UTC (1768827802) Jan 19 13:03:24.631041 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 19 13:03:24.631072 kernel: intel_pstate: CPU model not supported Jan 19 13:03:24.631087 kernel: NET: Registered PF_INET6 protocol family Jan 19 13:03:24.631101 kernel: Segment Routing with IPv6 Jan 19 13:03:24.631114 kernel: In-situ OAM (IOAM) with IPv6 Jan 19 13:03:24.631128 kernel: NET: Registered PF_PACKET protocol family Jan 19 13:03:24.631161 kernel: Key type dns_resolver registered Jan 19 13:03:24.631175 kernel: IPI shorthand broadcast: enabled Jan 19 13:03:24.631189 kernel: sched_clock: Marking stable (2467004165, 221152198)->(2829662534, -141506171) Jan 19 13:03:24.631203 kernel: registered taskstats version 1 Jan 19 13:03:24.631217 kernel: Loading compiled-in X.509 certificates Jan 19 13:03:24.631231 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ba909111c102256a4abe14f4fc03cb5c21d9fa72' Jan 19 13:03:24.631245 kernel: Demotion targets for Node 0: null Jan 19 13:03:24.631273 kernel: Key type .fscrypt registered Jan 19 13:03:24.631287 kernel: Key type fscrypt-provisioning registered Jan 19 13:03:24.631301 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 19 13:03:24.631315 kernel: ima: Allocated hash algorithm: sha1 Jan 19 13:03:24.631329 kernel: ima: No architecture policies found Jan 19 13:03:24.631343 kernel: clk: Disabling unused clocks Jan 19 13:03:24.631372 kernel: Freeing unused kernel image (initmem) memory: 15540K Jan 19 13:03:24.631400 kernel: Write protecting the kernel read-only data: 47104k Jan 19 13:03:24.631415 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 19 13:03:24.631429 kernel: Run /init as init process Jan 19 13:03:24.631443 kernel: with arguments: Jan 19 13:03:24.631457 kernel: /init Jan 19 13:03:24.631470 kernel: with environment: Jan 19 13:03:24.631483 kernel: HOME=/ Jan 19 13:03:24.631496 kernel: TERM=linux Jan 19 13:03:24.631524 kernel: ACPI: bus type USB registered Jan 19 13:03:24.631539 kernel: usbcore: registered new interface driver usbfs Jan 19 13:03:24.631553 kernel: usbcore: registered new interface driver hub Jan 19 13:03:24.631567 kernel: usbcore: registered new device driver usb Jan 19 13:03:24.631824 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 19 13:03:24.632107 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 19 13:03:24.632341 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 19 13:03:24.632595 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 19 13:03:24.632858 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 19 13:03:24.633106 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 19 13:03:24.633443 kernel: hub 1-0:1.0: USB hub found Jan 19 13:03:24.633794 kernel: hub 1-0:1.0: 4 ports detected Jan 19 13:03:24.634093 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 19 13:03:24.634394 kernel: hub 2-0:1.0: USB hub found Jan 19 13:03:24.634641 kernel: hub 2-0:1.0: 4 ports detected Jan 19 13:03:24.634661 kernel: SCSI subsystem initialized Jan 19 13:03:24.634709 kernel: libata version 3.00 loaded. 
Jan 19 13:03:24.634938 kernel: ahci 0000:00:1f.2: version 3.0 Jan 19 13:03:24.634977 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 19 13:03:24.635245 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 19 13:03:24.635473 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 19 13:03:24.635740 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 19 13:03:24.636044 kernel: scsi host0: ahci Jan 19 13:03:24.636332 kernel: scsi host1: ahci Jan 19 13:03:24.636588 kernel: scsi host2: ahci Jan 19 13:03:24.636889 kernel: scsi host3: ahci Jan 19 13:03:24.637143 kernel: scsi host4: ahci Jan 19 13:03:24.637429 kernel: scsi host5: ahci Jan 19 13:03:24.637452 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 lpm-pol 1 Jan 19 13:03:24.637486 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 lpm-pol 1 Jan 19 13:03:24.637501 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 lpm-pol 1 Jan 19 13:03:24.637515 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 lpm-pol 1 Jan 19 13:03:24.637529 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 lpm-pol 1 Jan 19 13:03:24.637543 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 lpm-pol 1 Jan 19 13:03:24.637847 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 19 13:03:24.637895 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 19 13:03:24.637910 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 19 13:03:24.637924 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 19 13:03:24.637938 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 19 13:03:24.637952 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 19 13:03:24.637965 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 19 13:03:24.637979 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 19 13:03:24.638256 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 19 13:03:24.638278 kernel: usbcore: registered new interface driver usbhid Jan 19 13:03:24.638292 kernel: usbhid: USB HID core driver Jan 19 13:03:24.638519 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 19 13:03:24.638539 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 19 13:03:24.638566 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 19 13:03:24.638946 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 19 13:03:24.638974 kernel: GPT:25804799 != 125829119 Jan 19 13:03:24.638988 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 19 13:03:24.639002 kernel: GPT:25804799 != 125829119 Jan 19 13:03:24.639033 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 19 13:03:24.639056 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 19 13:03:24.639099 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 19 13:03:24.639114 kernel: device-mapper: uevent: version 1.0.3 Jan 19 13:03:24.639129 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 19 13:03:24.639143 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 19 13:03:24.639157 kernel: raid6: sse2x4 gen() 14178 MB/s Jan 19 13:03:24.639172 kernel: raid6: sse2x2 gen() 9801 MB/s Jan 19 13:03:24.639186 kernel: raid6: sse2x1 gen() 9918 MB/s Jan 19 13:03:24.639213 kernel: raid6: using algorithm sse2x4 gen() 14178 MB/s Jan 19 13:03:24.639229 kernel: raid6: .... xor() 8177 MB/s, rmw enabled Jan 19 13:03:24.639243 kernel: raid6: using ssse3x2 recovery algorithm Jan 19 13:03:24.639257 kernel: xor: automatically using best checksumming function avx Jan 19 13:03:24.639271 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 19 13:03:24.639285 kernel: BTRFS: device fsid 163044fe-e6e3-4007-9021-e65918f0e7ac devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (195) Jan 19 13:03:24.639299 kernel: BTRFS info (device dm-0): first mount of filesystem 163044fe-e6e3-4007-9021-e65918f0e7ac Jan 19 13:03:24.639327 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 19 13:03:24.639342 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 19 13:03:24.639356 kernel: BTRFS info (device dm-0): enabling free space tree Jan 19 13:03:24.639370 kernel: loop: module loaded Jan 19 13:03:24.639383 kernel: loop0: detected capacity change from 0 to 100552 Jan 19 13:03:24.639397 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 19 13:03:24.639413 systemd[1]: Successfully made /usr/ read-only. Jan 19 13:03:24.641213 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 19 13:03:24.641232 systemd[1]: Detected virtualization kvm. Jan 19 13:03:24.641246 systemd[1]: Detected architecture x86-64. Jan 19 13:03:24.641261 systemd[1]: Running in initrd. Jan 19 13:03:24.641275 systemd[1]: No hostname configured, using default hostname. Jan 19 13:03:24.641291 systemd[1]: Hostname set to . Jan 19 13:03:24.641327 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 19 13:03:24.641343 systemd[1]: Queued start job for default target initrd.target. Jan 19 13:03:24.641357 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 19 13:03:24.641373 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 19 13:03:24.641388 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 19 13:03:24.641404 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 19 13:03:24.641434 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 19 13:03:24.641457 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 19 13:03:24.641473 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 19 13:03:24.641488 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 19 13:03:24.641503 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 19 13:03:24.641517 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 19 13:03:24.641555 systemd[1]: Reached target paths.target - Path Units. Jan 19 13:03:24.641570 systemd[1]: Reached target slices.target - Slice Units. Jan 19 13:03:24.641585 systemd[1]: Reached target swap.target - Swaps. Jan 19 13:03:24.641600 systemd[1]: Reached target timers.target - Timer Units. Jan 19 13:03:24.641615 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 19 13:03:24.641630 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 19 13:03:24.641646 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 19 13:03:24.641701 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 19 13:03:24.641716 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 19 13:03:24.641743 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 19 13:03:24.641756 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 19 13:03:24.641770 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 19 13:03:24.641784 systemd[1]: Reached target sockets.target - Socket Units. Jan 19 13:03:24.641809 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 19 13:03:24.641836 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 19 13:03:24.641850 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 19 13:03:24.641877 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 19 13:03:24.641894 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 19 13:03:24.641908 systemd[1]: Starting systemd-fsck-usr.service... Jan 19 13:03:24.641935 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 19 13:03:24.641994 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 19 13:03:24.642014 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 19 13:03:24.642029 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 19 13:03:24.642054 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 19 13:03:24.642088 systemd[1]: Finished systemd-fsck-usr.service. Jan 19 13:03:24.642105 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 19 13:03:24.642228 systemd-journald[333]: Collecting audit messages is enabled. Jan 19 13:03:24.642280 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 19 13:03:24.642296 kernel: Bridge firewalling registered Jan 19 13:03:24.642311 systemd-journald[333]: Journal started Jan 19 13:03:24.642337 systemd-journald[333]: Runtime Journal (/run/log/journal/91b79a2e7ca748feb3dccf563fd63470) is 4.7M, max 37.7M, 33M free. Jan 19 13:03:24.573568 systemd-modules-load[334]: Inserted module 'br_netfilter' Jan 19 13:03:24.655823 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 19 13:03:24.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:24.657418 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 19 13:03:24.667610 kernel: audit: type=1130 audit(1768827804.655:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:24.667642 kernel: audit: type=1130 audit(1768827804.661:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:24.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:24.662610 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 19 13:03:24.673722 kernel: audit: type=1130 audit(1768827804.667:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:24.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:24.670142 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 19 13:03:24.681071 kernel: audit: type=1130 audit(1768827804.673:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:24.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:24.679861 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 19 13:03:24.684094 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 19 13:03:24.691445 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 19 13:03:24.701831 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 19 13:03:24.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:24.714980 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 19 13:03:24.722979 kernel: audit: type=1130 audit(1768827804.715:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:24.723560 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 19 13:03:24.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 19 13:03:24.730696 kernel: audit: type=1130 audit(1768827804.724:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:24.731872 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 19 13:03:24.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:24.738696 kernel: audit: type=1130 audit(1768827804.732:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:24.739890 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 19 13:03:24.741000 audit: BPF prog-id=6 op=LOAD Jan 19 13:03:24.745700 kernel: audit: type=1334 audit(1768827804.741:9): prog-id=6 op=LOAD Jan 19 13:03:24.746349 systemd-tmpfiles[351]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 19 13:03:24.747973 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 19 13:03:24.762424 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 19 13:03:24.778003 kernel: audit: type=1130 audit(1768827804.763:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:24.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:24.788898 dracut-cmdline[369]: dracut-109 Jan 19 13:03:24.794854 dracut-cmdline[369]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=b524184fc941b6143829d4e80d1854878d9df1f2d76dbdcda2c58f1abfc5daa1 Jan 19 13:03:24.831713 systemd-resolved[370]: Positive Trust Anchors: Jan 19 13:03:24.831731 systemd-resolved[370]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 19 13:03:24.831738 systemd-resolved[370]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 19 13:03:24.831779 systemd-resolved[370]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 19 13:03:24.879929 systemd-resolved[370]: Defaulting to hostname 'linux'. Jan 19 13:03:24.883058 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 19 13:03:24.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:24.884270 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 19 13:03:24.932717 kernel: Loading iSCSI transport class v2.0-870. Jan 19 13:03:24.950717 kernel: iscsi: registered transport (tcp) Jan 19 13:03:24.979007 kernel: iscsi: registered transport (qla4xxx) Jan 19 13:03:24.979100 kernel: QLogic iSCSI HBA Driver Jan 19 13:03:25.014469 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 19 13:03:25.048092 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 19 13:03:25.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:25.051907 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 19 13:03:25.133732 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 19 13:03:25.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:25.137025 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 19 13:03:25.138831 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 19 13:03:25.178982 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 19 13:03:25.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:25.180000 audit: BPF prog-id=7 op=LOAD Jan 19 13:03:25.180000 audit: BPF prog-id=8 op=LOAD Jan 19 13:03:25.182875 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 19 13:03:25.220400 systemd-udevd[598]: Using default interface naming scheme 'v257'. Jan 19 13:03:25.237148 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 19 13:03:25.246658 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 19 13:03:25.247269 kernel: audit: type=1130 audit(1768827805.238:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:25.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:25.248097 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 19 13:03:25.287461 dracut-pre-trigger[673]: rd.md=0: removing MD RAID activation Jan 19 13:03:25.292795 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 19 13:03:25.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:03:25.299709 kernel: audit: type=1130 audit(1768827805.292:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:25.296251 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 19 13:03:25.307256 kernel: audit: type=1334 audit(1768827805.294:19): prog-id=9 op=LOAD Jan 19 13:03:25.294000 audit: BPF prog-id=9 op=LOAD Jan 19 13:03:25.338092 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 19 13:03:25.345221 kernel: audit: type=1130 audit(1768827805.338:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:25.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:25.342878 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 19 13:03:25.370046 systemd-networkd[716]: lo: Link UP Jan 19 13:03:25.371100 systemd-networkd[716]: lo: Gained carrier Jan 19 13:03:25.372693 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 19 13:03:25.378778 kernel: audit: type=1130 audit(1768827805.372:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:25.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:25.373547 systemd[1]: Reached target network.target - Network. Jan 19 13:03:25.507586 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 19 13:03:25.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:25.517175 kernel: audit: type=1130 audit(1768827805.510:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:25.519958 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 19 13:03:25.628412 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 19 13:03:25.695987 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 19 13:03:25.709244 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 19 13:03:25.726109 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 19 13:03:25.729893 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 19 13:03:25.760711 disk-uuid[771]: Primary Header is updated. Jan 19 13:03:25.760711 disk-uuid[771]: Secondary Entries is updated. Jan 19 13:03:25.760711 disk-uuid[771]: Secondary Header is updated. 
Jan 19 13:03:25.874749 kernel: cryptd: max_cpu_qlen set to 1000 Jan 19 13:03:25.937425 systemd-networkd[716]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 19 13:03:25.940468 systemd-networkd[716]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 19 13:03:25.945906 kernel: AES CTR mode by8 optimization enabled Jan 19 13:03:25.943755 systemd-networkd[716]: eth0: Link UP Jan 19 13:03:25.944227 systemd-networkd[716]: eth0: Gained carrier Jan 19 13:03:25.944245 systemd-networkd[716]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 19 13:03:25.951115 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 19 13:03:25.978968 systemd-networkd[716]: eth0: DHCPv4 address 10.243.74.46/30, gateway 10.243.74.45 acquired from 10.243.74.45 Jan 19 13:03:26.046540 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 19 13:03:26.047756 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 19 13:03:26.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:26.051415 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 19 13:03:26.057379 kernel: audit: type=1131 audit(1768827806.050:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:26.059506 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 19 13:03:26.060767 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 19 13:03:26.071307 kernel: audit: type=1130 audit(1768827806.061:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:26.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:26.063636 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 19 13:03:26.070812 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 19 13:03:26.073658 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 19 13:03:26.078854 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 19 13:03:26.156571 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 19 13:03:26.163537 kernel: audit: type=1130 audit(1768827806.157:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:26.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:26.179577 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 19 13:03:26.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:26.191693 kernel: audit: type=1130 audit(1768827806.185:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:26.925072 disk-uuid[772]: Warning: The kernel is still using the old partition table. Jan 19 13:03:26.925072 disk-uuid[772]: The new table will be used at the next reboot or after you Jan 19 13:03:26.925072 disk-uuid[772]: run partprobe(8) or kpartx(8) Jan 19 13:03:26.925072 disk-uuid[772]: The operation has completed successfully. Jan 19 13:03:26.932293 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 19 13:03:26.932492 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 19 13:03:26.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:26.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:26.935413 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 19 13:03:26.975694 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (855) Jan 19 13:03:26.979212 kernel: BTRFS info (device vda6): first mount of filesystem b6ab243a-3f7a-4aec-9347-0a1cbc843af6 Jan 19 13:03:26.979256 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 19 13:03:26.986533 kernel: BTRFS info (device vda6): turning on async discard Jan 19 13:03:26.986581 kernel: BTRFS info (device vda6): enabling free space tree Jan 19 13:03:26.995709 kernel: BTRFS info (device vda6): last unmount of filesystem b6ab243a-3f7a-4aec-9347-0a1cbc843af6 Jan 19 13:03:26.996639 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 19 13:03:26.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:26.999711 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 19 13:03:27.364474 ignition[874]: Ignition 2.24.0 Jan 19 13:03:27.364502 ignition[874]: Stage: fetch-offline Jan 19 13:03:27.364714 ignition[874]: no configs at "/usr/lib/ignition/base.d" Jan 19 13:03:27.364771 ignition[874]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 19 13:03:27.370258 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 19 13:03:27.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:03:27.367214 ignition[874]: parsed url from cmdline: "" Jan 19 13:03:27.367223 ignition[874]: no config URL provided Jan 19 13:03:27.367236 ignition[874]: reading system config file "/usr/lib/ignition/user.ign" Jan 19 13:03:27.367262 ignition[874]: no config at "/usr/lib/ignition/user.ign" Jan 19 13:03:27.367272 ignition[874]: failed to fetch config: resource requires networking Jan 19 13:03:27.374883 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 19 13:03:27.367688 ignition[874]: Ignition finished successfully Jan 19 13:03:27.429493 ignition[882]: Ignition 2.24.0 Jan 19 13:03:27.429525 ignition[882]: Stage: fetch Jan 19 13:03:27.429791 ignition[882]: no configs at "/usr/lib/ignition/base.d" Jan 19 13:03:27.429809 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 19 13:03:27.430163 ignition[882]: parsed url from cmdline: "" Jan 19 13:03:27.430170 ignition[882]: no config URL provided Jan 19 13:03:27.430180 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Jan 19 13:03:27.430193 ignition[882]: no config at "/usr/lib/ignition/user.ign" Jan 19 13:03:27.431006 ignition[882]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 19 13:03:27.431096 ignition[882]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 19 13:03:27.435078 ignition[882]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 19 13:03:27.457031 ignition[882]: GET result: OK Jan 19 13:03:27.457429 ignition[882]: parsing config with SHA512: f20bbc8f098f7edbbbc836693aa9aed0a164190f0c2fa64b8c0fa4609afb527e37f92e3843413203b6f53d76ae6777013e84feb521da2f4fcb85e90588a1a321 Jan 19 13:03:27.465068 unknown[882]: fetched base config from "system" Jan 19 13:03:27.465085 unknown[882]: fetched base config from "system" Jan 19 13:03:27.465452 ignition[882]: fetch: fetch complete Jan 19 13:03:27.465095 unknown[882]: fetched user config from "openstack" Jan 19 13:03:27.465460 ignition[882]: fetch: fetch passed Jan 19 13:03:27.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:27.468171 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 19 13:03:27.465532 ignition[882]: Ignition finished successfully Jan 19 13:03:27.471881 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 19 13:03:27.502499 ignition[888]: Ignition 2.24.0 Jan 19 13:03:27.502543 ignition[888]: Stage: kargs Jan 19 13:03:27.502776 ignition[888]: no configs at "/usr/lib/ignition/base.d" Jan 19 13:03:27.505421 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 19 13:03:27.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:27.502794 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 19 13:03:27.503724 ignition[888]: kargs: kargs passed Jan 19 13:03:27.503796 ignition[888]: Ignition finished successfully Jan 19 13:03:27.509660 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 19 13:03:27.515807 systemd-networkd[716]: eth0: Gained IPv6LL Jan 19 13:03:27.532595 ignition[894]: Ignition 2.24.0 Jan 19 13:03:27.532620 ignition[894]: Stage: disks Jan 19 13:03:27.532832 ignition[894]: no configs at "/usr/lib/ignition/base.d" Jan 19 13:03:27.532850 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 19 13:03:27.533748 ignition[894]: disks: disks passed Jan 19 13:03:27.537564 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 19 13:03:27.533831 ignition[894]: Ignition finished successfully Jan 19 13:03:27.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:27.540482 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 19 13:03:27.541625 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 19 13:03:27.543260 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 19 13:03:27.544715 systemd[1]: Reached target sysinit.target - System Initialization. Jan 19 13:03:27.546022 systemd[1]: Reached target basic.target - Basic System. Jan 19 13:03:27.549024 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 19 13:03:27.608016 systemd-fsck[902]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 19 13:03:27.612047 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 19 13:03:27.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:27.616249 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 19 13:03:27.749693 kernel: EXT4-fs (vda9): mounted filesystem 94229029-29b7-42b8-a135-4530ccb5ed34 r/w with ordered data mode. Quota mode: none. Jan 19 13:03:27.750521 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 19 13:03:27.751989 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 19 13:03:27.755327 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 19 13:03:27.757328 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 19 13:03:27.759945 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 19 13:03:27.762860 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 19 13:03:27.766586 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 19 13:03:27.766638 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 19 13:03:27.774575 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 19 13:03:27.777866 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 19 13:03:27.794770 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (910) Jan 19 13:03:27.801650 kernel: BTRFS info (device vda6): first mount of filesystem b6ab243a-3f7a-4aec-9347-0a1cbc843af6 Jan 19 13:03:27.801753 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 19 13:03:27.832481 kernel: BTRFS info (device vda6): turning on async discard Jan 19 13:03:27.832594 kernel: BTRFS info (device vda6): enabling free space tree Jan 19 13:03:27.835267 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 19 13:03:27.876687 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 19 13:03:28.030748 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 19 13:03:28.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:28.035101 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 19 13:03:28.037913 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 19 13:03:28.071936 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 19 13:03:28.073405 kernel: BTRFS info (device vda6): last unmount of filesystem b6ab243a-3f7a-4aec-9347-0a1cbc843af6 Jan 19 13:03:28.096554 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 19 13:03:28.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:28.142836 ignition[1012]: INFO : Ignition 2.24.0 Jan 19 13:03:28.144496 ignition[1012]: INFO : Stage: mount Jan 19 13:03:28.145738 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 19 13:03:28.145738 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 19 13:03:28.154463 ignition[1012]: INFO : mount: mount passed Jan 19 13:03:28.155575 ignition[1012]: INFO : Ignition finished successfully Jan 19 13:03:28.158782 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 19 13:03:28.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:28.914696 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 19 13:03:29.006141 systemd-networkd[716]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d28b:24:19ff:fef3:4a2e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d28b:24:19ff:fef3:4a2e/64 assigned by NDisc. Jan 19 13:03:29.007882 systemd-networkd[716]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Jan 19 13:03:30.936710 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 19 13:03:34.944721 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 19 13:03:34.955523 coreos-metadata[912]: Jan 19 13:03:34.955 WARN failed to locate config-drive, using the metadata service API instead Jan 19 13:03:34.984613 coreos-metadata[912]: Jan 19 13:03:34.984 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 19 13:03:35.003453 coreos-metadata[912]: Jan 19 13:03:35.003 INFO Fetch successful Jan 19 13:03:35.004898 coreos-metadata[912]: Jan 19 13:03:35.004 INFO wrote hostname srv-hsmf0.gb1.brightbox.com to /sysroot/etc/hostname Jan 19 13:03:35.007377 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 19 13:03:35.007641 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 19 13:03:35.022657 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 19 13:03:35.022713 kernel: audit: type=1130 audit(1768827815.010:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:35.022755 kernel: audit: type=1131 audit(1768827815.010:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:35.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:35.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:35.013860 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 19 13:03:35.045475 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 19 13:03:35.072717 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1029) Jan 19 13:03:35.076703 kernel: BTRFS info (device vda6): first mount of filesystem b6ab243a-3f7a-4aec-9347-0a1cbc843af6 Jan 19 13:03:35.080692 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 19 13:03:35.086173 kernel: BTRFS info (device vda6): turning on async discard Jan 19 13:03:35.086245 kernel: BTRFS info (device vda6): enabling free space tree Jan 19 13:03:35.089854 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 19 13:03:35.131087 ignition[1047]: INFO : Ignition 2.24.0 Jan 19 13:03:35.131087 ignition[1047]: INFO : Stage: files Jan 19 13:03:35.132973 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 19 13:03:35.132973 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 19 13:03:35.134641 ignition[1047]: DEBUG : files: compiled without relabeling support, skipping Jan 19 13:03:35.136969 ignition[1047]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 19 13:03:35.136969 ignition[1047]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 19 13:03:35.142739 ignition[1047]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 19 13:03:35.144030 ignition[1047]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 19 13:03:35.145518 unknown[1047]: wrote ssh authorized keys file for user: core Jan 19 13:03:35.146555 ignition[1047]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 19 13:03:35.148660 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 19 13:03:35.150168 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 19 13:03:35.344696 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 19 13:03:35.623552 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 19 13:03:35.625130 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 19 13:03:35.625130 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 19 13:03:35.625130 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 19 13:03:35.625130 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 19 13:03:35.625130 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 19 13:03:35.625130 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 19 13:03:35.625130 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 19 13:03:35.625130 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 19 13:03:35.636935 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 19 13:03:35.636935 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 19 13:03:35.636935 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 19 13:03:35.636935 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 19 13:03:35.636935 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 19 13:03:35.636935 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 19 13:03:35.954970 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 19 13:03:37.102511 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 19 13:03:37.102511 ignition[1047]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 19 13:03:37.107655 ignition[1047]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 19 13:03:37.110423 ignition[1047]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 19 13:03:37.110423 ignition[1047]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 19 13:03:37.112788 ignition[1047]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 19 13:03:37.112788 ignition[1047]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 19 13:03:37.112788 ignition[1047]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 19 13:03:37.112788 ignition[1047]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 19 13:03:37.112788 ignition[1047]: INFO : files: files passed Jan 19 13:03:37.112788 ignition[1047]: INFO : Ignition finished successfully Jan 19 13:03:37.126099 kernel: audit: type=1130 audit(1768827817.116:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.115522 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 19 13:03:37.120348 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 19 13:03:37.130296 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 19 13:03:37.141174 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 19 13:03:37.148001 kernel: audit: type=1130 audit(1768827817.141:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.141334 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 19 13:03:37.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.154690 kernel: audit: type=1131 audit(1768827817.141:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.164870 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 19 13:03:37.166464 initrd-setup-root-after-ignition[1078]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 19 13:03:37.167944 initrd-setup-root-after-ignition[1082]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 19 13:03:37.169950 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 19 13:03:37.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.171852 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 19 13:03:37.178421 kernel: audit: type=1130 audit(1768827817.170:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.180036 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 19 13:03:37.238874 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 19 13:03:37.239072 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 19 13:03:37.251013 kernel: audit: type=1130 audit(1768827817.239:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.251048 kernel: audit: type=1131 audit(1768827817.239:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.240867 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 19 13:03:37.251814 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 19 13:03:37.253901 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 19 13:03:37.256894 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 19 13:03:37.306400 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 19 13:03:37.312817 kernel: audit: type=1130 audit(1768827817.306:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 19 13:03:37.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.310863 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 19 13:03:37.345365 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 19 13:03:37.345820 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 19 13:03:37.347590 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 19 13:03:37.349182 systemd[1]: Stopped target timers.target - Timer Units. Jan 19 13:03:37.350676 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 19 13:03:37.357258 kernel: audit: type=1131 audit(1768827817.351:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.351000 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 19 13:03:37.357092 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 19 13:03:37.358028 systemd[1]: Stopped target basic.target - Basic System. Jan 19 13:03:37.359524 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 19 13:03:37.361865 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 19 13:03:37.363266 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 19 13:03:37.364837 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 19 13:03:37.366439 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 19 13:03:37.367838 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 19 13:03:37.369523 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 19 13:03:37.370920 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 19 13:03:37.372459 systemd[1]: Stopped target swap.target - Swaps. Jan 19 13:03:37.373719 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 19 13:03:37.374046 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 19 13:03:37.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.377362 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 19 13:03:37.378231 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 19 13:03:37.380910 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 19 13:03:37.381128 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 19 13:03:37.382734 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 19 13:03:37.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:03:37.382985 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 19 13:03:37.384903 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 19 13:03:37.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.385202 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 19 13:03:37.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.386935 systemd[1]: ignition-files.service: Deactivated successfully. Jan 19 13:03:37.387096 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 19 13:03:37.390970 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 19 13:03:37.395060 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 19 13:03:37.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.395763 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 19 13:03:37.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.396059 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 19 13:03:37.398269 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 19 13:03:37.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.398557 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 19 13:03:37.401250 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 19 13:03:37.401512 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 19 13:03:37.415540 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 19 13:03:37.416768 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 19 13:03:37.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.431165 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 19 13:03:37.442125 ignition[1102]: INFO : Ignition 2.24.0 Jan 19 13:03:37.442125 ignition[1102]: INFO : Stage: umount Jan 19 13:03:37.443995 ignition[1102]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 19 13:03:37.443995 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 19 13:03:37.443995 ignition[1102]: INFO : umount: umount passed Jan 19 13:03:37.443995 ignition[1102]: INFO : Ignition finished successfully Jan 19 13:03:37.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.445026 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 19 13:03:37.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.445222 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 19 13:03:37.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.447091 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 19 13:03:37.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.447277 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 19 13:03:37.448548 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 19 13:03:37.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.448621 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 19 13:03:37.449872 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 19 13:03:37.449946 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 19 13:03:37.451189 systemd[1]: Stopped target network.target - Network. Jan 19 13:03:37.452371 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 19 13:03:37.452449 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 19 13:03:37.453819 systemd[1]: Stopped target paths.target - Path Units. Jan 19 13:03:37.454933 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 19 13:03:37.458795 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 19 13:03:37.460239 systemd[1]: Stopped target slices.target - Slice Units. Jan 19 13:03:37.461556 systemd[1]: Stopped target sockets.target - Socket Units. Jan 19 13:03:37.463056 systemd[1]: iscsid.socket: Deactivated successfully. Jan 19 13:03:37.463138 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 19 13:03:37.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.464734 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jan 19 13:03:37.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.464815 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 19 13:03:37.466056 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 19 13:03:37.466108 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 19 13:03:37.467360 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 19 13:03:37.467442 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 19 13:03:37.468658 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 19 13:03:37.468804 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 19 13:03:37.470222 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 19 13:03:37.472071 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 19 13:03:37.483183 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 19 13:03:37.483490 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 19 13:03:37.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.486000 audit: BPF prog-id=6 op=UNLOAD Jan 19 13:03:37.487127 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 19 13:03:37.487348 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 19 13:03:37.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.494000 audit: BPF prog-id=9 op=UNLOAD Jan 19 13:03:37.495408 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 19 13:03:37.496251 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 19 13:03:37.496339 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 19 13:03:37.499179 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 19 13:03:37.501092 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 19 13:03:37.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.501189 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 19 13:03:37.503168 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 19 13:03:37.503244 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 19 13:03:37.503957 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 19 13:03:37.504026 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 19 13:03:37.508100 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 19 13:03:37.517463 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 19 13:03:37.517723 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 19 13:03:37.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.521821 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 19 13:03:37.522791 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 19 13:03:37.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.523758 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 19 13:03:37.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.523820 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 19 13:03:37.525139 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 19 13:03:37.525275 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 19 13:03:37.526209 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 19 13:03:37.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.526285 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 19 13:03:37.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.527627 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 19 13:03:37.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.527963 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 19 13:03:37.531115 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 19 13:03:37.532317 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 19 13:03:37.532403 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 19 13:03:37.536882 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 19 13:03:37.537013 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 19 13:03:37.538354 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 19 13:03:37.538426 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 19 13:03:37.540160 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 19 13:03:37.540249 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 19 13:03:37.540999 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 19 13:03:37.541068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 19 13:03:37.559830 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 19 13:03:37.560106 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 19 13:03:37.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.570239 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 19 13:03:37.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.570412 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 19 13:03:37.575492 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 19 13:03:37.575711 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 19 13:03:37.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.577536 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 19 13:03:37.578529 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 19 13:03:37.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:37.578626 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 19 13:03:37.581415 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 19 13:03:37.609249 systemd[1]: Switching root. Jan 19 13:03:37.658817 systemd-journald[333]: Received SIGTERM from PID 1 (systemd). 
Jan 19 13:03:37.658916 systemd-journald[333]: Journal stopped Jan 19 13:03:39.502914 kernel: SELinux: policy capability network_peer_controls=1 Jan 19 13:03:39.503130 kernel: SELinux: policy capability open_perms=1 Jan 19 13:03:39.503193 kernel: SELinux: policy capability extended_socket_class=1 Jan 19 13:03:39.503228 kernel: SELinux: policy capability always_check_network=0 Jan 19 13:03:39.503281 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 19 13:03:39.503330 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 19 13:03:39.503371 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 19 13:03:39.503402 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 19 13:03:39.503434 kernel: SELinux: policy capability userspace_initial_context=0 Jan 19 13:03:39.503457 systemd[1]: Successfully loaded SELinux policy in 81.791ms. Jan 19 13:03:39.503532 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.872ms. Jan 19 13:03:39.503576 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 19 13:03:39.503612 systemd[1]: Detected virtualization kvm. Jan 19 13:03:39.503648 systemd[1]: Detected architecture x86-64. Jan 19 13:03:39.504441 systemd[1]: Detected first boot. Jan 19 13:03:39.504495 systemd[1]: Hostname set to . Jan 19 13:03:39.504519 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 19 13:03:39.504567 zram_generator::config[1146]: No configuration found. Jan 19 13:03:39.504624 kernel: Guest personality initialized and is inactive Jan 19 13:03:39.504660 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 19 13:03:39.504740 kernel: Initialized host personality Jan 19 13:03:39.504770 kernel: NET: Registered PF_VSOCK protocol family Jan 19 13:03:39.504799 systemd[1]: Populated /etc with preset unit settings. Jan 19 13:03:39.504829 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 19 13:03:39.504888 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 19 13:03:39.504913 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 19 13:03:39.504962 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 19 13:03:39.504996 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 19 13:03:39.505033 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 19 13:03:39.505065 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 19 13:03:39.505098 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 19 13:03:39.505150 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 19 13:03:39.505183 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 19 13:03:39.505206 systemd[1]: Created slice user.slice - User and Session Slice. Jan 19 13:03:39.505235 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 19 13:03:39.505274 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 19 13:03:39.505297 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 19 13:03:39.505330 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 19 13:03:39.505381 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 19 13:03:39.505411 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 19 13:03:39.505441 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 19 13:03:39.505463 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 19 13:03:39.505493 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 19 13:03:39.505535 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 19 13:03:39.505567 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 19 13:03:39.505589 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 19 13:03:39.505618 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 19 13:03:39.505640 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 19 13:03:39.506087 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 19 13:03:39.506131 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 19 13:03:39.506183 systemd[1]: Reached target slices.target - Slice Units. Jan 19 13:03:39.506224 systemd[1]: Reached target swap.target - Swaps. Jan 19 13:03:39.506254 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 19 13:03:39.506287 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 19 13:03:39.506311 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 19 13:03:39.506332 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 19 13:03:39.506353 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 19 13:03:39.506405 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 19 13:03:39.506443 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 19 13:03:39.506474 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 19 13:03:39.506507 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 19 13:03:39.506537 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 19 13:03:39.506559 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 19 13:03:39.506597 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 19 13:03:39.506638 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 19 13:03:39.506717 systemd[1]: Mounting media.mount - External Media Directory... Jan 19 13:03:39.506754 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 19 13:03:39.506779 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 19 13:03:39.506809 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 19 13:03:39.506840 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 19 13:03:39.506873 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 19 13:03:39.506919 systemd[1]: Reached target machines.target - Containers. Jan 19 13:03:39.506950 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 19 13:03:39.506974 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 19 13:03:39.507005 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 19 13:03:39.507029 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 19 13:03:39.507059 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 19 13:03:39.507081 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 19 13:03:39.507118 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 19 13:03:39.507155 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 19 13:03:39.507186 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 19 13:03:39.507209 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 19 13:03:39.507239 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 19 13:03:39.507269 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 19 13:03:39.507307 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 19 13:03:39.507331 systemd[1]: Stopped systemd-fsck-usr.service. Jan 19 13:03:39.507353 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 19 13:03:39.507381 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 19 13:03:39.507420 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 19 13:03:39.507444 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 19 13:03:39.507475 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 19 13:03:39.507518 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 19 13:03:39.507542 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 19 13:03:39.507575 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 19 13:03:39.507598 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 19 13:03:39.507635 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 19 13:03:39.507658 systemd[1]: Mounted media.mount - External Media Directory. Jan 19 13:03:39.508570 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 19 13:03:39.508600 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 19 13:03:39.508639 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 19 13:03:39.508732 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 19 13:03:39.508769 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 19 13:03:39.508803 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 19 13:03:39.508834 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 19 13:03:39.508864 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 19 13:03:39.508887 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 19 13:03:39.508925 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 19 13:03:39.508956 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 19 13:03:39.508979 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 19 13:03:39.509008 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 19 13:03:39.509038 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 19 13:03:39.509061 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 19 13:03:39.509082 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 19 13:03:39.509127 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 19 13:03:39.509158 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 19 13:03:39.509188 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 19 13:03:39.509222 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 19 13:03:39.509244 kernel: ACPI: bus type drm_connector registered Jan 19 13:03:39.509291 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 19 13:03:39.509324 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 19 13:03:39.509362 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 19 13:03:39.509393 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 19 13:03:39.509424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 19 13:03:39.509447 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 19 13:03:39.509469 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 19 13:03:39.509490 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 19 13:03:39.509512 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 19 13:03:39.509610 systemd-journald[1232]: Collecting audit messages is enabled. Jan 19 13:03:39.509710 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 19 13:03:39.509738 kernel: fuse: init (API version 7.41) Jan 19 13:03:39.509759 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 19 13:03:39.509782 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 19 13:03:39.509819 systemd-journald[1232]: Journal started Jan 19 13:03:39.509852 systemd-journald[1232]: Runtime Journal (/run/log/journal/91b79a2e7ca748feb3dccf563fd63470) is 4.7M, max 37.7M, 33M free. Jan 19 13:03:39.066000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 19 13:03:39.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.250000 audit: BPF prog-id=14 op=UNLOAD Jan 19 13:03:39.250000 audit: BPF prog-id=13 op=UNLOAD Jan 19 13:03:39.251000 audit: BPF prog-id=15 op=LOAD Jan 19 13:03:39.255000 audit: BPF prog-id=16 op=LOAD Jan 19 13:03:39.256000 audit: BPF prog-id=17 op=LOAD Jan 19 13:03:39.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.515565 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 19 13:03:39.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:03:39.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.496000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 19 13:03:39.496000 audit[1232]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffcfc4152a0 a2=4000 a3=0 items=0 ppid=1 pid=1232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:03:39.496000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 19 13:03:39.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:38.953111 systemd[1]: Queued start job for default target multi-user.target. Jan 19 13:03:38.970553 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 19 13:03:39.518981 systemd[1]: Started systemd-journald.service - Journal Service. Jan 19 13:03:39.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:38.971617 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 19 13:03:39.520136 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 19 13:03:39.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.520431 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 19 13:03:39.523026 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 19 13:03:39.552566 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Jan 19 13:03:39.552593 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Jan 19 13:03:39.623146 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 19 13:03:39.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.626812 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 19 13:03:39.637734 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 19 13:03:39.664722 kernel: loop1: detected capacity change from 0 to 8 Jan 19 13:03:39.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.653172 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 19 13:03:39.654299 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 19 13:03:39.667305 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 19 13:03:39.668890 systemd-journald[1232]: Time spent on flushing to /var/log/journal/91b79a2e7ca748feb3dccf563fd63470 is 102.107ms for 1308 entries. Jan 19 13:03:39.668890 systemd-journald[1232]: System Journal (/var/log/journal/91b79a2e7ca748feb3dccf563fd63470) is 8M, max 588.1M, 580.1M free. Jan 19 13:03:39.792014 systemd-journald[1232]: Received client request to flush runtime journal. Jan 19 13:03:39.792096 kernel: loop2: detected capacity change from 0 to 111560 Jan 19 13:03:39.792127 kernel: loop3: detected capacity change from 0 to 224512 Jan 19 13:03:39.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.767788 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 19 13:03:39.790900 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 19 13:03:39.796000 audit: BPF prog-id=18 op=LOAD Jan 19 13:03:39.796000 audit: BPF prog-id=19 op=LOAD Jan 19 13:03:39.796000 audit: BPF prog-id=20 op=LOAD Jan 19 13:03:39.798998 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... 
Jan 19 13:03:39.799735 kernel: loop4: detected capacity change from 0 to 50784 Jan 19 13:03:39.803000 audit: BPF prog-id=21 op=LOAD Jan 19 13:03:39.805979 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 19 13:03:39.809952 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 19 13:03:39.812606 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 19 13:03:39.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.814431 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 19 13:03:39.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.834000 audit: BPF prog-id=22 op=LOAD Jan 19 13:03:39.834000 audit: BPF prog-id=23 op=LOAD Jan 19 13:03:39.834000 audit: BPF prog-id=24 op=LOAD Jan 19 13:03:39.837995 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 19 13:03:39.839000 audit: BPF prog-id=25 op=LOAD Jan 19 13:03:39.839000 audit: BPF prog-id=26 op=LOAD Jan 19 13:03:39.839000 audit: BPF prog-id=27 op=LOAD Jan 19 13:03:39.842161 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 19 13:03:39.847725 kernel: loop5: detected capacity change from 0 to 8 Jan 19 13:03:39.861718 kernel: loop6: detected capacity change from 0 to 111560 Jan 19 13:03:39.900715 kernel: loop7: detected capacity change from 0 to 224512 Jan 19 13:03:39.901266 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. Jan 19 13:03:39.901298 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. Jan 19 13:03:39.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.924971 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 19 13:03:39.928947 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 19 13:03:39.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:39.943787 kernel: loop1: detected capacity change from 0 to 50784 Jan 19 13:03:39.967353 (sd-merge)[1310]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-openstack.raw'. Jan 19 13:03:39.974043 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 19 13:03:39.977491 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 19 13:03:39.982794 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 19 13:03:39.986530 systemd-nsresourced[1309]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 19 13:03:39.988744 (sd-merge)[1310]: Merged extensions into '/usr'. 
Jan 19 13:03:40.029464 kernel: kauditd_printk_skb: 102 callbacks suppressed Jan 19 13:03:40.029589 kernel: audit: type=1130 audit(1768827820.027:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:40.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:40.027553 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 19 13:03:40.028698 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 19 13:03:40.034354 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 19 13:03:40.048964 systemd[1]: Reload requested from client PID 1267 ('systemd-sysext') (unit systemd-sysext.service)... Jan 19 13:03:40.049054 systemd[1]: Reloading... Jan 19 13:03:40.210016 systemd-resolved[1303]: Positive Trust Anchors: Jan 19 13:03:40.210037 systemd-resolved[1303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 19 13:03:40.210045 systemd-resolved[1303]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 19 13:03:40.210113 systemd-resolved[1303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 19 13:03:40.220022 systemd-oomd[1302]: No swap; memory pressure usage will be degraded Jan 19 13:03:40.240583 systemd-resolved[1303]: Using system hostname 'srv-hsmf0.gb1.brightbox.com'. Jan 19 13:03:40.251747 zram_generator::config[1361]: No configuration found. Jan 19 13:03:40.566281 systemd[1]: Reloading finished in 516 ms. Jan 19 13:03:40.588449 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 19 13:03:40.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:40.589982 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 19 13:03:40.594341 kernel: audit: type=1130 audit(1768827820.588:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:40.595075 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 19 13:03:40.595703 kernel: audit: type=1130 audit(1768827820.593:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:03:40.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:40.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:40.605475 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 19 13:03:40.606002 kernel: audit: type=1130 audit(1768827820.599:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:40.613913 systemd[1]: Starting ensure-sysext.service... Jan 19 13:03:40.617010 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 19 13:03:40.628770 kernel: audit: type=1334 audit(1768827820.624:152): prog-id=28 op=LOAD Jan 19 13:03:40.628845 kernel: audit: type=1334 audit(1768827820.624:153): prog-id=21 op=UNLOAD Jan 19 13:03:40.624000 audit: BPF prog-id=28 op=LOAD Jan 19 13:03:40.624000 audit: BPF prog-id=21 op=UNLOAD Jan 19 13:03:40.625000 audit: BPF prog-id=29 op=LOAD Jan 19 13:03:40.632697 kernel: audit: type=1334 audit(1768827820.625:154): prog-id=29 op=LOAD Jan 19 13:03:40.632754 kernel: audit: type=1334 audit(1768827820.625:155): prog-id=18 op=UNLOAD Jan 19 13:03:40.625000 audit: BPF prog-id=18 op=UNLOAD Jan 19 13:03:40.636369 kernel: audit: type=1334 audit(1768827820.625:156): prog-id=30 op=LOAD Jan 19 13:03:40.636424 kernel: audit: type=1334 audit(1768827820.625:157): prog-id=31 op=LOAD Jan 19 13:03:40.625000 audit: BPF prog-id=30 op=LOAD Jan 19 13:03:40.625000 audit: BPF prog-id=31 op=LOAD Jan 19 13:03:40.625000 audit: BPF prog-id=19 op=UNLOAD Jan 19 13:03:40.625000 audit: BPF prog-id=20 op=UNLOAD Jan 19 13:03:40.626000 audit: BPF prog-id=32 op=LOAD Jan 19 13:03:40.626000 audit: BPF prog-id=25 op=UNLOAD Jan 19 13:03:40.626000 audit: BPF prog-id=33 op=LOAD Jan 19 13:03:40.626000 audit: BPF prog-id=34 op=LOAD Jan 19 13:03:40.626000 audit: BPF prog-id=26 op=UNLOAD Jan 19 13:03:40.626000 audit: BPF prog-id=27 op=UNLOAD Jan 19 13:03:40.627000 audit: BPF prog-id=35 op=LOAD Jan 19 13:03:40.627000 audit: BPF prog-id=22 op=UNLOAD Jan 19 13:03:40.627000 audit: BPF prog-id=36 op=LOAD Jan 19 13:03:40.627000 audit: BPF prog-id=37 op=LOAD Jan 19 13:03:40.627000 audit: BPF prog-id=23 op=UNLOAD Jan 19 13:03:40.627000 audit: BPF prog-id=24 op=UNLOAD Jan 19 13:03:40.630000 audit: BPF prog-id=38 op=LOAD Jan 19 13:03:40.630000 audit: BPF prog-id=15 op=UNLOAD Jan 19 13:03:40.630000 audit: BPF prog-id=39 op=LOAD Jan 19 13:03:40.630000 audit: BPF prog-id=40 op=LOAD Jan 19 13:03:40.630000 audit: BPF prog-id=16 op=UNLOAD Jan 19 13:03:40.630000 audit: BPF prog-id=17 op=UNLOAD Jan 19 13:03:40.661256 systemd[1]: Reload requested from client PID 1416 ('systemctl') (unit ensure-sysext.service)... Jan 19 13:03:40.661304 systemd[1]: Reloading... Jan 19 13:03:40.694515 systemd-tmpfiles[1417]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 19 13:03:40.696545 systemd-tmpfiles[1417]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Jan 19 13:03:40.697596 systemd-tmpfiles[1417]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 19 13:03:40.704517 systemd-tmpfiles[1417]: ACLs are not supported, ignoring. Jan 19 13:03:40.704802 systemd-tmpfiles[1417]: ACLs are not supported, ignoring. Jan 19 13:03:40.723298 systemd-tmpfiles[1417]: Detected autofs mount point /boot during canonicalization of boot. Jan 19 13:03:40.723318 systemd-tmpfiles[1417]: Skipping /boot Jan 19 13:03:40.780601 systemd-tmpfiles[1417]: Detected autofs mount point /boot during canonicalization of boot. Jan 19 13:03:40.780624 systemd-tmpfiles[1417]: Skipping /boot Jan 19 13:03:40.845711 zram_generator::config[1449]: No configuration found. Jan 19 13:03:41.137354 systemd[1]: Reloading finished in 475 ms. Jan 19 13:03:41.170860 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 19 13:03:41.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:41.173000 audit: BPF prog-id=41 op=LOAD Jan 19 13:03:41.173000 audit: BPF prog-id=32 op=UNLOAD Jan 19 13:03:41.174000 audit: BPF prog-id=42 op=LOAD Jan 19 13:03:41.174000 audit: BPF prog-id=43 op=LOAD Jan 19 13:03:41.174000 audit: BPF prog-id=33 op=UNLOAD Jan 19 13:03:41.174000 audit: BPF prog-id=34 op=UNLOAD Jan 19 13:03:41.175000 audit: BPF prog-id=44 op=LOAD Jan 19 13:03:41.175000 audit: BPF prog-id=35 op=UNLOAD Jan 19 13:03:41.175000 audit: BPF prog-id=45 op=LOAD Jan 19 13:03:41.175000 audit: BPF prog-id=46 op=LOAD Jan 19 13:03:41.175000 audit: BPF prog-id=36 op=UNLOAD Jan 19 13:03:41.175000 audit: BPF prog-id=37 op=UNLOAD Jan 19 13:03:41.178000 audit: BPF prog-id=47 op=LOAD Jan 19 13:03:41.178000 audit: BPF prog-id=28 op=UNLOAD Jan 19 13:03:41.180000 audit: BPF prog-id=48 op=LOAD Jan 19 13:03:41.184000 audit: BPF prog-id=29 op=UNLOAD Jan 19 13:03:41.184000 audit: BPF prog-id=49 op=LOAD Jan 19 13:03:41.184000 audit: BPF prog-id=50 op=LOAD Jan 19 13:03:41.184000 audit: BPF prog-id=30 op=UNLOAD Jan 19 13:03:41.184000 audit: BPF prog-id=31 op=UNLOAD Jan 19 13:03:41.185000 audit: BPF prog-id=51 op=LOAD Jan 19 13:03:41.185000 audit: BPF prog-id=38 op=UNLOAD Jan 19 13:03:41.185000 audit: BPF prog-id=52 op=LOAD Jan 19 13:03:41.185000 audit: BPF prog-id=53 op=LOAD Jan 19 13:03:41.185000 audit: BPF prog-id=39 op=UNLOAD Jan 19 13:03:41.185000 audit: BPF prog-id=40 op=UNLOAD Jan 19 13:03:41.190921 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 19 13:03:41.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:41.204102 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 19 13:03:41.207036 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 19 13:03:41.214186 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 19 13:03:41.219112 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Jan 19 13:03:41.220000 audit: BPF prog-id=8 op=UNLOAD Jan 19 13:03:41.220000 audit: BPF prog-id=7 op=UNLOAD Jan 19 13:03:41.221000 audit: BPF prog-id=54 op=LOAD Jan 19 13:03:41.221000 audit: BPF prog-id=55 op=LOAD Jan 19 13:03:41.228946 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 19 13:03:41.236606 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 19 13:03:41.244457 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 19 13:03:41.245816 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 19 13:03:41.248849 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 19 13:03:41.259153 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 19 13:03:41.269261 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 19 13:03:41.270189 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 19 13:03:41.270453 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 19 13:03:41.270591 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 19 13:03:41.270840 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 19 13:03:41.282547 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 19 13:03:41.282877 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 19 13:03:41.283136 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 19 13:03:41.283380 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 19 13:03:41.283515 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 19 13:03:41.283644 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 19 13:03:41.313214 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 19 13:03:41.313538 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 19 13:03:41.318733 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 19 13:03:41.320334 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 19 13:03:41.320635 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 19 13:03:41.327898 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 19 13:03:41.328090 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 19 13:03:41.329584 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 19 13:03:41.336590 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 19 13:03:41.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:41.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:41.340263 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 19 13:03:41.340604 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 19 13:03:41.340000 audit[1515]: SYSTEM_BOOT pid=1515 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 19 13:03:41.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:41.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:41.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:41.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:41.342579 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 19 13:03:41.343275 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 19 13:03:41.345143 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 19 13:03:41.345590 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 19 13:03:41.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:41.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:41.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:03:41.361236 systemd[1]: Finished ensure-sysext.service. Jan 19 13:03:41.368411 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 19 13:03:41.368632 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 19 13:03:41.371000 audit: BPF prog-id=56 op=LOAD Jan 19 13:03:41.373932 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 19 13:03:41.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:41.380765 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 19 13:03:41.401097 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 19 13:03:41.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:41.424234 systemd-udevd[1511]: Using default interface naming scheme 'v257'. Jan 19 13:03:41.429000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 19 13:03:41.429000 audit[1544]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe4f0cc770 a2=420 a3=0 items=0 ppid=1507 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:03:41.429000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 19 13:03:41.432208 augenrules[1544]: No rules Jan 19 13:03:41.434091 systemd[1]: audit-rules.service: Deactivated successfully. Jan 19 13:03:41.434529 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 19 13:03:41.494459 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 19 13:03:41.496190 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 19 13:03:41.503565 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 19 13:03:41.511904 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 19 13:03:41.539936 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 19 13:03:41.541373 systemd[1]: Reached target time-set.target - System Time Set. Jan 19 13:03:41.694107 systemd-networkd[1557]: lo: Link UP Jan 19 13:03:41.694623 systemd-networkd[1557]: lo: Gained carrier Jan 19 13:03:41.700569 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 19 13:03:41.701494 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 19 13:03:41.701575 systemd[1]: Reached target network.target - Network. Jan 19 13:03:41.708334 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Jan 19 13:03:41.713781 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 19 13:03:41.801901 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 19 13:03:41.916522 systemd-networkd[1557]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 19 13:03:41.917098 systemd-networkd[1557]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 19 13:03:41.922216 systemd-networkd[1557]: eth0: Link UP Jan 19 13:03:41.922520 systemd-networkd[1557]: eth0: Gained carrier Jan 19 13:03:41.922543 systemd-networkd[1557]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 19 13:03:41.970756 systemd-networkd[1557]: eth0: DHCPv4 address 10.243.74.46/30, gateway 10.243.74.45 acquired from 10.243.74.45 Jan 19 13:03:41.974029 systemd-timesyncd[1535]: Network configuration changed, trying to establish connection. Jan 19 13:03:42.117488 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 19 13:03:42.122982 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 19 13:03:42.176113 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 19 13:03:42.252707 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 19 13:03:42.259696 kernel: ACPI: button: Power Button [PWRF] Jan 19 13:03:42.267694 kernel: mousedev: PS/2 mouse device common for all mice Jan 19 13:03:42.312309 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 19 13:03:42.324438 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 19 13:03:42.430782 ldconfig[1509]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 19 13:03:42.437433 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 19 13:03:42.445953 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 19 13:03:42.476776 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 19 13:03:42.479797 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 19 13:03:42.693904 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 19 13:03:42.700646 systemd[1]: Reached target sysinit.target - System Initialization. Jan 19 13:03:42.702506 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 19 13:03:42.703960 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 19 13:03:42.705136 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 19 13:03:42.706890 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 19 13:03:42.713887 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 19 13:03:42.715970 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 19 13:03:42.717135 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 19 13:03:42.718038 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Jan 19 13:03:42.718907 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 19 13:03:42.719079 systemd[1]: Reached target paths.target - Path Units. Jan 19 13:03:42.719813 systemd[1]: Reached target timers.target - Timer Units. Jan 19 13:03:42.722227 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 19 13:03:42.729035 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 19 13:03:42.734517 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 19 13:03:42.735716 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 19 13:03:42.736454 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 19 13:03:42.745544 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 19 13:03:42.746994 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 19 13:03:42.748715 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 19 13:03:42.750455 systemd[1]: Reached target sockets.target - Socket Units. Jan 19 13:03:42.751129 systemd[1]: Reached target basic.target - Basic System. Jan 19 13:03:42.751820 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 19 13:03:42.751979 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 19 13:03:42.753725 systemd[1]: Starting containerd.service - containerd container runtime... Jan 19 13:03:42.758004 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 19 13:03:42.760982 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 19 13:03:42.764654 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 19 13:03:42.768382 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 19 13:03:42.775006 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 19 13:03:42.776831 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 19 13:03:42.781033 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 19 13:03:42.784974 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 19 13:03:42.790965 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 19 13:03:42.792688 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 19 13:03:42.798655 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 19 13:03:42.811361 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 19 13:03:42.817285 oslogin_cache_refresh[1614]: Refreshing passwd entry cache Jan 19 13:03:42.818158 google_oslogin_nss_cache[1614]: oslogin_cache_refresh[1614]: Refreshing passwd entry cache Jan 19 13:03:42.823270 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 19 13:03:42.825796 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 19 13:03:42.833114 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Jan 19 13:03:42.834794 systemd[1]: Starting update-engine.service - Update Engine... Jan 19 13:03:42.841196 jq[1612]: false Jan 19 13:03:42.845450 google_oslogin_nss_cache[1614]: oslogin_cache_refresh[1614]: Failure getting users, quitting Jan 19 13:03:42.845450 google_oslogin_nss_cache[1614]: oslogin_cache_refresh[1614]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 19 13:03:42.845450 google_oslogin_nss_cache[1614]: oslogin_cache_refresh[1614]: Refreshing group entry cache Jan 19 13:03:42.844596 oslogin_cache_refresh[1614]: Failure getting users, quitting Jan 19 13:03:42.844680 oslogin_cache_refresh[1614]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 19 13:03:42.844779 oslogin_cache_refresh[1614]: Refreshing group entry cache Jan 19 13:03:42.846107 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 19 13:03:42.849097 google_oslogin_nss_cache[1614]: oslogin_cache_refresh[1614]: Failure getting groups, quitting Jan 19 13:03:42.849097 google_oslogin_nss_cache[1614]: oslogin_cache_refresh[1614]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 19 13:03:42.848011 oslogin_cache_refresh[1614]: Failure getting groups, quitting Jan 19 13:03:42.848057 oslogin_cache_refresh[1614]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 19 13:03:42.861091 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 19 13:03:42.863775 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 19 13:03:42.864176 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 19 13:03:42.864609 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 19 13:03:42.866074 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 19 13:03:42.874342 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 19 13:03:42.875762 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 19 13:03:42.882452 extend-filesystems[1613]: Found /dev/vda6 Jan 19 13:03:42.888814 extend-filesystems[1613]: Found /dev/vda9 Jan 19 13:03:42.923002 extend-filesystems[1613]: Checking size of /dev/vda9 Jan 19 13:03:42.926487 jq[1623]: true Jan 19 13:03:42.933415 tar[1629]: linux-amd64/LICENSE Jan 19 13:03:42.936042 tar[1629]: linux-amd64/helm Jan 19 13:03:42.939514 systemd[1]: motdgen.service: Deactivated successfully. Jan 19 13:03:42.940223 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 19 13:03:42.941927 update_engine[1622]: I20260119 13:03:42.941834 1622 main.cc:92] Flatcar Update Engine starting Jan 19 13:03:43.027700 jq[1656]: true Jan 19 13:03:43.041948 extend-filesystems[1613]: Resized partition /dev/vda9 Jan 19 13:03:43.046278 extend-filesystems[1664]: resize2fs 1.47.3 (8-Jul-2025) Jan 19 13:03:43.064697 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 14138363 blocks Jan 19 13:03:43.109709 dbus-daemon[1610]: [system] SELinux support is enabled Jan 19 13:03:43.115724 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 19 13:03:43.119869 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 19 13:03:43.119919 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 19 13:03:43.120773 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 19 13:03:43.120799 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 19 13:03:43.145916 dbus-daemon[1610]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1557 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 19 13:03:43.151109 update_engine[1622]: I20260119 13:03:43.151039 1622 update_check_scheduler.cc:74] Next update check in 10m12s Jan 19 13:03:43.177655 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 19 13:03:43.178716 systemd[1]: Started update-engine.service - Update Engine. Jan 19 13:03:43.181461 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 19 13:03:43.265411 systemd-logind[1621]: Watching system buttons on /dev/input/event3 (Power Button) Jan 19 13:03:43.283357 systemd-logind[1621]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 19 13:03:43.284304 systemd-logind[1621]: New seat seat0. Jan 19 13:03:43.289006 systemd[1]: Started systemd-logind.service - User Login Management. Jan 19 13:03:43.323869 systemd-networkd[1557]: eth0: Gained IPv6LL Jan 19 13:03:43.327868 systemd-timesyncd[1535]: Network configuration changed, trying to establish connection. Jan 19 13:03:43.334276 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 19 13:03:43.355129 systemd[1]: Reached target network-online.target - Network is Online. Jan 19 13:03:43.361758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 13:03:43.367430 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 19 13:03:43.368031 bash[1682]: Updated "/home/core/.ssh/authorized_keys" Jan 19 13:03:43.373774 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 19 13:03:43.380099 systemd[1]: Starting sshkeys.service... Jan 19 13:03:43.494405 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 19 13:03:43.502085 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 19 13:03:43.506767 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 19 13:03:43.532700 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 19 13:03:43.545693 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Jan 19 13:03:43.619697 extend-filesystems[1664]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 19 13:03:43.619697 extend-filesystems[1664]: old_desc_blocks = 1, new_desc_blocks = 7 Jan 19 13:03:43.619697 extend-filesystems[1664]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Jan 19 13:03:43.639406 extend-filesystems[1613]: Resized filesystem in /dev/vda9 Jan 19 13:03:43.621951 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 19 13:03:43.622387 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 19 13:03:43.672351 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Jan 19 13:03:43.694099 dbus-daemon[1610]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 19 13:03:43.797934 dbus-daemon[1610]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1668 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 19 13:03:43.810128 systemd[1]: Starting polkit.service - Authorization Manager... Jan 19 13:03:43.843691 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 19 13:03:44.083060 systemd-timesyncd[1535]: Network configuration changed, trying to establish connection. Jan 19 13:03:44.085515 systemd-networkd[1557]: eth0: Ignoring DHCPv6 address 2a02:1348:17c:d28b:24:19ff:fef3:4a2e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17c:d28b:24:19ff:fef3:4a2e/64 assigned by NDisc. Jan 19 13:03:44.085527 systemd-networkd[1557]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 19 13:03:44.091384 locksmithd[1673]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 19 13:03:44.201479 polkitd[1706]: Started polkitd version 126 Jan 19 13:03:44.209472 containerd[1646]: time="2026-01-19T13:03:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 19 13:03:44.214751 containerd[1646]: time="2026-01-19T13:03:44.213528257Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 19 13:03:44.221103 polkitd[1706]: Loading rules from directory /etc/polkit-1/rules.d Jan 19 13:03:44.221557 polkitd[1706]: Loading rules from directory /run/polkit-1/rules.d Jan 19 13:03:44.221649 polkitd[1706]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 19 13:03:44.223845 polkitd[1706]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 19 13:03:44.223897 polkitd[1706]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 19 13:03:44.223968 polkitd[1706]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 19 13:03:44.228967 polkitd[1706]: Finished loading, compiling and executing 2 rules Jan 19 13:03:44.229627 systemd[1]: Started polkit.service - Authorization Manager. 
Jan 19 13:03:44.242839 dbus-daemon[1610]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 19 13:03:44.250847 polkitd[1706]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 19 13:03:44.322164 containerd[1646]: time="2026-01-19T13:03:44.314968769Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="82.794µs" Jan 19 13:03:44.322164 containerd[1646]: time="2026-01-19T13:03:44.315129102Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 19 13:03:44.322164 containerd[1646]: time="2026-01-19T13:03:44.315371494Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 19 13:03:44.322164 containerd[1646]: time="2026-01-19T13:03:44.315428544Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 19 13:03:44.322164 containerd[1646]: time="2026-01-19T13:03:44.317902956Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 19 13:03:44.322164 containerd[1646]: time="2026-01-19T13:03:44.317959160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 19 13:03:44.322164 containerd[1646]: time="2026-01-19T13:03:44.318118849Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 19 13:03:44.322164 containerd[1646]: time="2026-01-19T13:03:44.318145009Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 19 13:03:44.322164 containerd[1646]: time="2026-01-19T13:03:44.318607445Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 19 13:03:44.322164 containerd[1646]: time="2026-01-19T13:03:44.318644602Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 19 13:03:44.322164 containerd[1646]: time="2026-01-19T13:03:44.319027567Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 19 13:03:44.322164 containerd[1646]: time="2026-01-19T13:03:44.319051304Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 19 13:03:44.322815 containerd[1646]: time="2026-01-19T13:03:44.320805654Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 19 13:03:44.322815 containerd[1646]: time="2026-01-19T13:03:44.320829736Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 19 13:03:44.322815 containerd[1646]: time="2026-01-19T13:03:44.321037054Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 19 13:03:44.322815 containerd[1646]: time="2026-01-19T13:03:44.321540762Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 19 13:03:44.322815 containerd[1646]: 
time="2026-01-19T13:03:44.321638787Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 19 13:03:44.322815 containerd[1646]: time="2026-01-19T13:03:44.321660785Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 19 13:03:44.322815 containerd[1646]: time="2026-01-19T13:03:44.321826721Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 19 13:03:44.326316 containerd[1646]: time="2026-01-19T13:03:44.325482562Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 19 13:03:44.326316 containerd[1646]: time="2026-01-19T13:03:44.325661085Z" level=info msg="metadata content store policy set" policy=shared Jan 19 13:03:44.334813 containerd[1646]: time="2026-01-19T13:03:44.334721838Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 19 13:03:44.334931 containerd[1646]: time="2026-01-19T13:03:44.334849376Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 19 13:03:44.335099 containerd[1646]: time="2026-01-19T13:03:44.335025417Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 19 13:03:44.335157 containerd[1646]: time="2026-01-19T13:03:44.335131551Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 19 13:03:44.335193 containerd[1646]: time="2026-01-19T13:03:44.335167768Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 19 13:03:44.335257 containerd[1646]: time="2026-01-19T13:03:44.335195006Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 19 13:03:44.335257 containerd[1646]: time="2026-01-19T13:03:44.335223457Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 19 13:03:44.335257 containerd[1646]: time="2026-01-19T13:03:44.335242619Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 19 13:03:44.335368 containerd[1646]: time="2026-01-19T13:03:44.335281122Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 19 13:03:44.335368 containerd[1646]: time="2026-01-19T13:03:44.335314484Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 19 13:03:44.335445 containerd[1646]: time="2026-01-19T13:03:44.335378756Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 19 13:03:44.335445 containerd[1646]: time="2026-01-19T13:03:44.335401455Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 19 13:03:44.335546 containerd[1646]: time="2026-01-19T13:03:44.335453631Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 19 13:03:44.335546 containerd[1646]: time="2026-01-19T13:03:44.335481802Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Jan 19 13:03:44.338688 containerd[1646]: time="2026-01-19T13:03:44.336849690Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 19 13:03:44.338688 containerd[1646]: time="2026-01-19T13:03:44.336955349Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 19 13:03:44.338688 containerd[1646]: time="2026-01-19T13:03:44.336986574Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 19 13:03:44.338688 containerd[1646]: time="2026-01-19T13:03:44.337024865Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 19 13:03:44.338688 containerd[1646]: time="2026-01-19T13:03:44.337046728Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 19 13:03:44.338688 containerd[1646]: time="2026-01-19T13:03:44.337078480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 19 13:03:44.338688 containerd[1646]: time="2026-01-19T13:03:44.337111227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 19 13:03:44.338688 containerd[1646]: time="2026-01-19T13:03:44.337148560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 19 13:03:44.338688 containerd[1646]: time="2026-01-19T13:03:44.337175602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 19 13:03:44.338688 containerd[1646]: time="2026-01-19T13:03:44.337213000Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 19 13:03:44.338688 containerd[1646]: time="2026-01-19T13:03:44.337239992Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 19 13:03:44.338688 containerd[1646]: time="2026-01-19T13:03:44.337305461Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 19 13:03:44.338688 containerd[1646]: time="2026-01-19T13:03:44.337502571Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 19 13:03:44.338688 containerd[1646]: time="2026-01-19T13:03:44.337543297Z" level=info msg="Start snapshots syncer" Jan 19 13:03:44.338688 containerd[1646]: time="2026-01-19T13:03:44.337619565Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 19 13:03:44.340379 containerd[1646]: time="2026-01-19T13:03:44.339593733Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 19 13:03:44.340379 containerd[1646]: time="2026-01-19T13:03:44.339817657Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 19 13:03:44.340821 containerd[1646]: time="2026-01-19T13:03:44.340198821Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 19 13:03:44.340821 containerd[1646]: time="2026-01-19T13:03:44.340806351Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 19 13:03:44.340937 containerd[1646]: time="2026-01-19T13:03:44.340870493Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 19 13:03:44.340937 containerd[1646]: time="2026-01-19T13:03:44.340916821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 19 13:03:44.341005 containerd[1646]: time="2026-01-19T13:03:44.340939835Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 19 13:03:44.341005 containerd[1646]: time="2026-01-19T13:03:44.340975282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 19 13:03:44.341082 containerd[1646]: time="2026-01-19T13:03:44.341039758Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 19 13:03:44.341143 containerd[1646]: time="2026-01-19T13:03:44.341116752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 19 13:03:44.341186 containerd[1646]: time="2026-01-19T13:03:44.341162451Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 19 
13:03:44.341273 containerd[1646]: time="2026-01-19T13:03:44.341219484Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 19 13:03:44.341566 containerd[1646]: time="2026-01-19T13:03:44.341328373Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 19 13:03:44.341566 containerd[1646]: time="2026-01-19T13:03:44.341370899Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 19 13:03:44.341566 containerd[1646]: time="2026-01-19T13:03:44.341389317Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 19 13:03:44.341566 containerd[1646]: time="2026-01-19T13:03:44.341418515Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 19 13:03:44.341566 containerd[1646]: time="2026-01-19T13:03:44.341437377Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 19 13:03:44.341566 containerd[1646]: time="2026-01-19T13:03:44.341469277Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 19 13:03:44.341566 containerd[1646]: time="2026-01-19T13:03:44.341539798Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 19 13:03:44.341877 containerd[1646]: time="2026-01-19T13:03:44.341631929Z" level=info msg="runtime interface created" Jan 19 13:03:44.341877 containerd[1646]: time="2026-01-19T13:03:44.341654754Z" level=info msg="created NRI interface" Jan 19 13:03:44.341877 containerd[1646]: time="2026-01-19T13:03:44.341702140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 19 13:03:44.341877 containerd[1646]: time="2026-01-19T13:03:44.341734528Z" level=info msg="Connect containerd service" Jan 19 13:03:44.341877 containerd[1646]: time="2026-01-19T13:03:44.341790689Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 19 13:03:44.345537 containerd[1646]: time="2026-01-19T13:03:44.345047563Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 19 13:03:44.435502 sshd_keygen[1657]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 19 13:03:44.435406 systemd-hostnamed[1668]: Hostname set to (static) Jan 19 13:03:44.509683 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 19 13:03:44.517057 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 19 13:03:44.574493 systemd[1]: issuegen.service: Deactivated successfully. Jan 19 13:03:44.574973 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 19 13:03:44.586182 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 19 13:03:44.714913 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 19 13:03:44.728310 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 19 13:03:44.736253 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 19 13:03:44.738078 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 19 13:03:44.769331 containerd[1646]: time="2026-01-19T13:03:44.769245159Z" level=info msg="Start subscribing containerd event" Jan 19 13:03:44.771439 containerd[1646]: time="2026-01-19T13:03:44.769395082Z" level=info msg="Start recovering state" Jan 19 13:03:44.771439 containerd[1646]: time="2026-01-19T13:03:44.769840531Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 19 13:03:44.771439 containerd[1646]: time="2026-01-19T13:03:44.769946489Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 19 13:03:44.771439 containerd[1646]: time="2026-01-19T13:03:44.769982732Z" level=info msg="Start event monitor" Jan 19 13:03:44.771439 containerd[1646]: time="2026-01-19T13:03:44.770016708Z" level=info msg="Start cni network conf syncer for default" Jan 19 13:03:44.771439 containerd[1646]: time="2026-01-19T13:03:44.770038353Z" level=info msg="Start streaming server" Jan 19 13:03:44.771439 containerd[1646]: time="2026-01-19T13:03:44.770069388Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 19 13:03:44.771439 containerd[1646]: time="2026-01-19T13:03:44.770098307Z" level=info msg="runtime interface starting up..." Jan 19 13:03:44.771439 containerd[1646]: time="2026-01-19T13:03:44.770116960Z" level=info msg="starting plugins..." Jan 19 13:03:44.771439 containerd[1646]: time="2026-01-19T13:03:44.770154096Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 19 13:03:44.771439 containerd[1646]: time="2026-01-19T13:03:44.770403723Z" level=info msg="containerd successfully booted in 0.565584s" Jan 19 13:03:44.770644 systemd[1]: Started containerd.service - containerd container runtime. Jan 19 13:03:44.939023 tar[1629]: linux-amd64/README.md Jan 19 13:03:44.959099 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 19 13:03:45.088748 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 19 13:03:45.179799 systemd-timesyncd[1535]: Network configuration changed, trying to establish connection. Jan 19 13:03:45.908098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 13:03:45.924223 (kubelet)[1761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 19 13:03:46.072702 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 19 13:03:46.760710 kubelet[1761]: E0119 13:03:46.760624 1761 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 19 13:03:46.764253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 19 13:03:46.764564 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 19 13:03:46.765765 systemd[1]: kubelet.service: Consumed 1.706s CPU time, 261.3M memory peak. Jan 19 13:03:47.112767 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 19 13:03:49.708537 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 19 13:03:49.712294 systemd[1]: Started sshd@0-10.243.74.46:22-188.166.92.220:56338.service - OpenSSH per-connection server daemon (188.166.92.220:56338). 
Jan 19 13:03:49.864710 sshd[1771]: Connection closed by authenticating user root 188.166.92.220 port 56338 [preauth] Jan 19 13:03:49.866576 login[1750]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:03:49.867617 systemd[1]: sshd@0-10.243.74.46:22-188.166.92.220:56338.service: Deactivated successfully. Jan 19 13:03:49.890030 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 19 13:03:49.892268 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 19 13:03:49.897787 systemd-logind[1621]: New session 1 of user core. Jan 19 13:03:49.931133 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 19 13:03:49.935109 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 19 13:03:49.957534 (systemd)[1781]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:03:49.962852 systemd-logind[1621]: New session 2 of user core. Jan 19 13:03:50.089720 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 19 13:03:50.099730 coreos-metadata[1609]: Jan 19 13:03:50.099 WARN failed to locate config-drive, using the metadata service API instead Jan 19 13:03:50.130490 coreos-metadata[1609]: Jan 19 13:03:50.130 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 19 13:03:50.137595 coreos-metadata[1609]: Jan 19 13:03:50.137 INFO Fetch failed with 404: resource not found Jan 19 13:03:50.137595 coreos-metadata[1609]: Jan 19 13:03:50.137 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 19 13:03:50.138538 coreos-metadata[1609]: Jan 19 13:03:50.138 INFO Fetch successful Jan 19 13:03:50.138790 coreos-metadata[1609]: Jan 19 13:03:50.138 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 19 13:03:50.150929 systemd[1781]: Queued start job for default target default.target. Jan 19 13:03:50.155994 coreos-metadata[1609]: Jan 19 13:03:50.155 INFO Fetch successful Jan 19 13:03:50.155994 coreos-metadata[1609]: Jan 19 13:03:50.155 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 19 13:03:50.160181 systemd[1781]: Created slice app.slice - User Application Slice. Jan 19 13:03:50.160237 systemd[1781]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 19 13:03:50.160262 systemd[1781]: Reached target paths.target - Paths. Jan 19 13:03:50.160389 systemd[1781]: Reached target timers.target - Timers. Jan 19 13:03:50.164801 systemd[1781]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 19 13:03:50.171382 systemd[1781]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 19 13:03:50.177762 coreos-metadata[1609]: Jan 19 13:03:50.177 INFO Fetch successful Jan 19 13:03:50.177975 coreos-metadata[1609]: Jan 19 13:03:50.177 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 19 13:03:50.191855 login[1751]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:03:50.193963 coreos-metadata[1609]: Jan 19 13:03:50.193 INFO Fetch successful Jan 19 13:03:50.194080 coreos-metadata[1609]: Jan 19 13:03:50.194 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 19 13:03:50.198209 systemd[1781]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 19 13:03:50.198751 systemd[1781]: Reached target sockets.target - Sockets. 
Jan 19 13:03:50.205028 systemd-logind[1621]: New session 3 of user core. Jan 19 13:03:50.207587 systemd[1781]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 19 13:03:50.207789 systemd[1781]: Reached target basic.target - Basic System. Jan 19 13:03:50.207893 systemd[1781]: Reached target default.target - Main User Target. Jan 19 13:03:50.207962 systemd[1781]: Startup finished in 236ms. Jan 19 13:03:50.208518 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 19 13:03:50.213807 coreos-metadata[1609]: Jan 19 13:03:50.212 INFO Fetch successful Jan 19 13:03:50.220202 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 19 13:03:50.222360 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 19 13:03:50.265682 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 19 13:03:50.267714 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 19 13:03:51.132702 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 19 13:03:51.143694 coreos-metadata[1700]: Jan 19 13:03:51.142 WARN failed to locate config-drive, using the metadata service API instead Jan 19 13:03:51.168114 coreos-metadata[1700]: Jan 19 13:03:51.168 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 19 13:03:51.197311 coreos-metadata[1700]: Jan 19 13:03:51.197 INFO Fetch successful Jan 19 13:03:51.197757 coreos-metadata[1700]: Jan 19 13:03:51.197 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 19 13:03:51.227205 coreos-metadata[1700]: Jan 19 13:03:51.226 INFO Fetch successful Jan 19 13:03:51.229317 unknown[1700]: wrote ssh authorized keys file for user: core Jan 19 13:03:51.257305 update-ssh-keys[1825]: Updated "/home/core/.ssh/authorized_keys" Jan 19 13:03:51.259390 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 19 13:03:51.262303 systemd[1]: Finished sshkeys.service. Jan 19 13:03:51.263536 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 19 13:03:51.265798 systemd[1]: Startup finished in 3.891s (kernel) + 13.927s (initrd) + 13.397s (userspace) = 31.217s. Jan 19 13:03:52.824238 systemd[1]: Started sshd@1-10.243.74.46:22-68.220.241.50:46298.service - OpenSSH per-connection server daemon (68.220.241.50:46298). Jan 19 13:03:53.328703 sshd[1830]: Accepted publickey for core from 68.220.241.50 port 46298 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:03:53.331274 sshd-session[1830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:03:53.338291 systemd-logind[1621]: New session 4 of user core. Jan 19 13:03:53.345940 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 19 13:03:53.694399 systemd[1]: Started sshd@2-10.243.74.46:22-68.220.241.50:46312.service - OpenSSH per-connection server daemon (68.220.241.50:46312). Jan 19 13:03:54.202005 sshd[1837]: Accepted publickey for core from 68.220.241.50 port 46312 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:03:54.203971 sshd-session[1837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:03:54.211557 systemd-logind[1621]: New session 5 of user core. Jan 19 13:03:54.235077 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 19 13:03:54.476976 sshd[1841]: Connection closed by 68.220.241.50 port 46312 Jan 19 13:03:54.478024 sshd-session[1837]: pam_unix(sshd:session): session closed for user core Jan 19 13:03:54.484868 systemd[1]: sshd@2-10.243.74.46:22-68.220.241.50:46312.service: Deactivated successfully. Jan 19 13:03:54.487301 systemd[1]: session-5.scope: Deactivated successfully. Jan 19 13:03:54.489379 systemd-logind[1621]: Session 5 logged out. Waiting for processes to exit. Jan 19 13:03:54.491081 systemd-logind[1621]: Removed session 5. Jan 19 13:03:54.590026 systemd[1]: Started sshd@3-10.243.74.46:22-68.220.241.50:46324.service - OpenSSH per-connection server daemon (68.220.241.50:46324). Jan 19 13:03:55.120399 sshd[1847]: Accepted publickey for core from 68.220.241.50 port 46324 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:03:55.122341 sshd-session[1847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:03:55.129608 systemd-logind[1621]: New session 6 of user core. Jan 19 13:03:55.142965 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 19 13:03:55.397094 sshd[1851]: Connection closed by 68.220.241.50 port 46324 Jan 19 13:03:55.398090 sshd-session[1847]: pam_unix(sshd:session): session closed for user core Jan 19 13:03:55.403443 systemd[1]: sshd@3-10.243.74.46:22-68.220.241.50:46324.service: Deactivated successfully. Jan 19 13:03:55.406850 systemd[1]: session-6.scope: Deactivated successfully. Jan 19 13:03:55.409119 systemd-logind[1621]: Session 6 logged out. Waiting for processes to exit. Jan 19 13:03:55.411760 systemd-logind[1621]: Removed session 6. Jan 19 13:03:55.501096 systemd[1]: Started sshd@4-10.243.74.46:22-68.220.241.50:46338.service - OpenSSH per-connection server daemon (68.220.241.50:46338). Jan 19 13:03:56.002558 sshd[1857]: Accepted publickey for core from 68.220.241.50 port 46338 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:03:56.003569 sshd-session[1857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:03:56.010240 systemd-logind[1621]: New session 7 of user core. Jan 19 13:03:56.022987 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 19 13:03:56.278733 sshd[1861]: Connection closed by 68.220.241.50 port 46338 Jan 19 13:03:56.277834 sshd-session[1857]: pam_unix(sshd:session): session closed for user core Jan 19 13:03:56.284394 systemd[1]: sshd@4-10.243.74.46:22-68.220.241.50:46338.service: Deactivated successfully. Jan 19 13:03:56.287613 systemd[1]: session-7.scope: Deactivated successfully. Jan 19 13:03:56.289954 systemd-logind[1621]: Session 7 logged out. Waiting for processes to exit. Jan 19 13:03:56.291586 systemd-logind[1621]: Removed session 7. Jan 19 13:03:56.381723 systemd[1]: Started sshd@5-10.243.74.46:22-68.220.241.50:46354.service - OpenSSH per-connection server daemon (68.220.241.50:46354). Jan 19 13:03:56.782182 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 19 13:03:56.786305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 13:03:56.888696 sshd[1867]: Accepted publickey for core from 68.220.241.50 port 46354 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:03:56.889493 sshd-session[1867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:03:56.898211 systemd-logind[1621]: New session 8 of user core. Jan 19 13:03:56.905900 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 19 13:03:57.100846 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 13:03:57.111398 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 19 13:03:57.137034 sudo[1875]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 19 13:03:57.137560 sudo[1875]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 19 13:03:57.145485 sudo[1875]: pam_unix(sudo:session): session closed for user root Jan 19 13:03:57.195524 kubelet[1880]: E0119 13:03:57.195444 1880 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 19 13:03:57.199461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 19 13:03:57.199745 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 19 13:03:57.200897 systemd[1]: kubelet.service: Consumed 335ms CPU time, 109.4M memory peak. Jan 19 13:03:57.235939 sshd[1874]: Connection closed by 68.220.241.50 port 46354 Jan 19 13:03:57.234927 sshd-session[1867]: pam_unix(sshd:session): session closed for user core Jan 19 13:03:57.241325 systemd[1]: sshd@5-10.243.74.46:22-68.220.241.50:46354.service: Deactivated successfully. Jan 19 13:03:57.244150 systemd[1]: session-8.scope: Deactivated successfully. Jan 19 13:03:57.245888 systemd-logind[1621]: Session 8 logged out. Waiting for processes to exit. Jan 19 13:03:57.248158 systemd-logind[1621]: Removed session 8. Jan 19 13:03:57.337482 systemd[1]: Started sshd@6-10.243.74.46:22-68.220.241.50:46356.service - OpenSSH per-connection server daemon (68.220.241.50:46356). Jan 19 13:03:57.844635 sshd[1894]: Accepted publickey for core from 68.220.241.50 port 46356 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:03:57.845851 sshd-session[1894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:03:57.853927 systemd-logind[1621]: New session 9 of user core. Jan 19 13:03:57.863007 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 19 13:03:58.032346 sudo[1900]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 19 13:03:58.032854 sudo[1900]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 19 13:03:58.045497 sudo[1900]: pam_unix(sudo:session): session closed for user root Jan 19 13:03:58.055468 sudo[1899]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 19 13:03:58.056408 sudo[1899]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 19 13:03:58.069384 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 19 13:03:58.133000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 19 13:03:58.138769 kernel: kauditd_printk_skb: 68 callbacks suppressed Jan 19 13:03:58.138907 kernel: audit: type=1305 audit(1768827838.133:224): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 19 13:03:58.140219 augenrules[1924]: No rules Jan 19 13:03:58.133000 audit[1924]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd5b873220 a2=420 a3=0 items=0 ppid=1905 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:03:58.144290 kernel: audit: type=1300 audit(1768827838.133:224): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd5b873220 a2=420 a3=0 items=0 ppid=1905 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:03:58.144804 systemd[1]: audit-rules.service: Deactivated successfully. Jan 19 13:03:58.145400 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 19 13:03:58.149729 sudo[1899]: pam_unix(sudo:session): session closed for user root Jan 19 13:03:58.133000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 19 13:03:58.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:58.154762 kernel: audit: type=1327 audit(1768827838.133:224): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 19 13:03:58.154849 kernel: audit: type=1130 audit(1768827838.144:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:58.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:58.158441 kernel: audit: type=1131 audit(1768827838.144:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:58.149000 audit[1899]: USER_END pid=1899 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 13:03:58.162223 kernel: audit: type=1106 audit(1768827838.149:227): pid=1899 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 13:03:58.149000 audit[1899]: CRED_DISP pid=1899 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 19 13:03:58.166240 kernel: audit: type=1104 audit(1768827838.149:228): pid=1899 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 13:03:58.241745 sshd[1898]: Connection closed by 68.220.241.50 port 46356 Jan 19 13:03:58.244025 sshd-session[1894]: pam_unix(sshd:session): session closed for user core Jan 19 13:03:58.246000 audit[1894]: USER_END pid=1894 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:03:58.253443 systemd[1]: sshd@6-10.243.74.46:22-68.220.241.50:46356.service: Deactivated successfully. Jan 19 13:03:58.246000 audit[1894]: CRED_DISP pid=1894 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:03:58.257190 kernel: audit: type=1106 audit(1768827838.246:229): pid=1894 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:03:58.257278 kernel: audit: type=1104 audit(1768827838.246:230): pid=1894 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:03:58.257853 systemd[1]: session-9.scope: Deactivated successfully. Jan 19 13:03:58.261151 kernel: audit: type=1131 audit(1768827838.253:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.243.74.46:22-68.220.241.50:46356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:58.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.243.74.46:22-68.220.241.50:46356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:58.259759 systemd-logind[1621]: Session 9 logged out. Waiting for processes to exit. Jan 19 13:03:58.264331 systemd-logind[1621]: Removed session 9. Jan 19 13:03:58.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.243.74.46:22-68.220.241.50:46368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:03:58.347098 systemd[1]: Started sshd@7-10.243.74.46:22-68.220.241.50:46368.service - OpenSSH per-connection server daemon (68.220.241.50:46368). 
Jan 19 13:03:58.847000 audit[1933]: USER_ACCT pid=1933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:03:58.850049 sshd[1933]: Accepted publickey for core from 68.220.241.50 port 46368 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:03:58.850000 audit[1933]: CRED_ACQ pid=1933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:03:58.850000 audit[1933]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe91432270 a2=3 a3=0 items=0 ppid=1 pid=1933 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:03:58.850000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:03:58.851811 sshd-session[1933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:03:58.861515 systemd-logind[1621]: New session 10 of user core. Jan 19 13:03:58.867914 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 19 13:03:58.872000 audit[1933]: USER_START pid=1933 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:03:58.875000 audit[1937]: CRED_ACQ pid=1937 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:03:59.037000 audit[1938]: USER_ACCT pid=1938 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 13:03:59.038366 sudo[1938]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 19 13:03:59.037000 audit[1938]: CRED_REFR pid=1938 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 13:03:59.038953 sudo[1938]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 19 13:03:59.038000 audit[1938]: USER_START pid=1938 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 13:03:59.829948 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 19 13:03:59.852654 (dockerd)[1957]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 19 13:04:00.436082 dockerd[1957]: time="2026-01-19T13:04:00.436009414Z" level=info msg="Starting up" Jan 19 13:04:00.438369 dockerd[1957]: time="2026-01-19T13:04:00.438331685Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 19 13:04:00.462312 dockerd[1957]: time="2026-01-19T13:04:00.462255849Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 19 13:04:00.532223 dockerd[1957]: time="2026-01-19T13:04:00.531928322Z" level=info msg="Loading containers: start." Jan 19 13:04:00.556703 kernel: Initializing XFRM netlink socket Jan 19 13:04:00.717000 audit[2007]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.717000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffca2acdfe0 a2=0 a3=0 items=0 ppid=1957 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.717000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 19 13:04:00.721000 audit[2009]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.721000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe1587b710 a2=0 a3=0 items=0 ppid=1957 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.721000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 19 13:04:00.724000 audit[2011]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.724000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc226b5790 a2=0 a3=0 items=0 ppid=1957 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.724000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 19 13:04:00.728000 audit[2013]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.728000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9beff8c0 a2=0 a3=0 items=0 ppid=1957 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.728000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 19 13:04:00.731000 audit[2015]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.731000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff2710c300 a2=0 a3=0 items=0 ppid=1957 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.731000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 19 13:04:00.734000 audit[2017]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.734000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc054c4160 a2=0 a3=0 items=0 ppid=1957 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.734000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 19 13:04:00.738000 audit[2019]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.738000 audit[2019]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe83d0c8d0 a2=0 a3=0 items=0 ppid=1957 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.738000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 19 13:04:00.741000 audit[2021]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.741000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff79e4d9f0 a2=0 a3=0 items=0 ppid=1957 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.741000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 19 13:04:00.784000 audit[2024]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.784000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff51aaedf0 a2=0 a3=0 items=0 ppid=1957 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.784000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 19 13:04:00.788000 audit[2026]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2026 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.788000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd45e33300 a2=0 a3=0 items=0 ppid=1957 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.788000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 19 13:04:00.791000 audit[2028]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.791000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffeda6b3d30 a2=0 a3=0 items=0 ppid=1957 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.791000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 19 13:04:00.797000 audit[2030]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.797000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff35abcc80 a2=0 a3=0 items=0 ppid=1957 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.797000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 19 13:04:00.801000 audit[2032]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.801000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc131cc7a0 a2=0 a3=0 items=0 ppid=1957 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.801000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 19 13:04:00.856000 audit[2062]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:00.856000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffeb9c8c630 a2=0 a3=0 items=0 ppid=1957 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.856000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 19 13:04:00.859000 audit[2064]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:00.859000 audit[2064]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc782598a0 a2=0 a3=0 items=0 ppid=1957 pid=2064 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.859000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 19 13:04:00.862000 audit[2066]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2066 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:00.862000 audit[2066]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffbd87320 a2=0 a3=0 items=0 ppid=1957 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.862000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 19 13:04:00.866000 audit[2068]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:00.866000 audit[2068]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8e6b2950 a2=0 a3=0 items=0 ppid=1957 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.866000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 19 13:04:00.869000 audit[2070]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:00.869000 audit[2070]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff22caf940 a2=0 a3=0 items=0 ppid=1957 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.869000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 19 13:04:00.872000 audit[2072]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:00.872000 audit[2072]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe61a284c0 a2=0 a3=0 items=0 ppid=1957 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.872000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 19 13:04:00.876000 audit[2074]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:00.876000 audit[2074]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffefd18bf40 a2=0 a3=0 items=0 ppid=1957 pid=2074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.876000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 19 13:04:00.879000 audit[2076]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:00.879000 audit[2076]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc7e4bab50 a2=0 a3=0 items=0 ppid=1957 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.879000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 19 13:04:00.883000 audit[2078]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:00.883000 audit[2078]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffd0a6ff700 a2=0 a3=0 items=0 ppid=1957 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.883000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 19 13:04:00.887000 audit[2080]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:00.887000 audit[2080]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc65f163e0 a2=0 a3=0 items=0 ppid=1957 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.887000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 19 13:04:00.890000 audit[2082]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:00.890000 audit[2082]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff557077e0 a2=0 a3=0 items=0 ppid=1957 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.890000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 19 13:04:00.894000 audit[2084]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:00.894000 audit[2084]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffe56fbf100 a2=0 a3=0 items=0 ppid=1957 pid=2084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.894000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 19 13:04:00.905000 audit[2086]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:00.905000 audit[2086]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7fff4a4594e0 a2=0 a3=0 items=0 ppid=1957 pid=2086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.905000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 19 13:04:00.914000 audit[2091]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.914000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd785c8240 a2=0 a3=0 items=0 ppid=1957 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.914000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 19 13:04:00.918000 audit[2093]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2093 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.918000 audit[2093]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff44ef37c0 a2=0 a3=0 items=0 ppid=1957 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.918000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 19 13:04:00.922000 audit[2095]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.922000 audit[2095]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe2f380d30 a2=0 a3=0 items=0 ppid=1957 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.922000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 19 13:04:00.926000 audit[2097]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:00.926000 audit[2097]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdae34a380 a2=0 a3=0 items=0 ppid=1957 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.926000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 19 13:04:00.930000 audit[2099]: NETFILTER_CFG table=filter:32 family=10 entries=1 
op=nft_register_rule pid=2099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:00.930000 audit[2099]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd99971a40 a2=0 a3=0 items=0 ppid=1957 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.930000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 19 13:04:00.934000 audit[2101]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:00.934000 audit[2101]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd12544aa0 a2=0 a3=0 items=0 ppid=1957 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.934000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 19 13:04:00.945536 systemd-timesyncd[1535]: Network configuration changed, trying to establish connection. Jan 19 13:04:00.959000 audit[2105]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2105 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.959000 audit[2105]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd9b8d1950 a2=0 a3=0 items=0 ppid=1957 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.959000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 19 13:04:00.963000 audit[2107]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.963000 audit[2107]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe697f6710 a2=0 a3=0 items=0 ppid=1957 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.963000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 19 13:04:00.976000 audit[2115]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2115 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.976000 audit[2115]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffc85f1aec0 a2=0 a3=0 items=0 ppid=1957 pid=2115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.976000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 19 13:04:00.992000 audit[2121]: NETFILTER_CFG table=filter:37 family=2 entries=1 
op=nft_register_rule pid=2121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.992000 audit[2121]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd2e5f5180 a2=0 a3=0 items=0 ppid=1957 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.992000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 19 13:04:00.996000 audit[2123]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:00.996000 audit[2123]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffca98b5e40 a2=0 a3=0 items=0 ppid=1957 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:00.996000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 19 13:04:01.000000 audit[2125]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:01.000000 audit[2125]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdbf7823f0 a2=0 a3=0 items=0 ppid=1957 pid=2125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:01.000000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 19 13:04:01.003000 audit[2127]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:01.003000 audit[2127]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd5a419cf0 a2=0 a3=0 items=0 ppid=1957 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:01.003000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 19 13:04:01.007000 audit[2129]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:01.007000 audit[2129]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffde86e0bd0 a2=0 a3=0 items=0 ppid=1957 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:01.007000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 19 13:04:01.008515 systemd-networkd[1557]: docker0: Link UP Jan 19 13:04:01.013970 dockerd[1957]: time="2026-01-19T13:04:01.013829277Z" level=info msg="Loading containers: done." Jan 19 13:04:01.045216 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck374641420-merged.mount: Deactivated successfully. Jan 19 13:04:01.054326 dockerd[1957]: time="2026-01-19T13:04:01.054245813Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 19 13:04:01.054538 dockerd[1957]: time="2026-01-19T13:04:01.054401231Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 19 13:04:01.054599 dockerd[1957]: time="2026-01-19T13:04:01.054581416Z" level=info msg="Initializing buildkit" Jan 19 13:04:01.676272 systemd-timesyncd[1535]: Contacted time server [2a01:7e00::f03c:91ff:fe89:410f]:123 (2.flatcar.pool.ntp.org). Jan 19 13:04:01.676411 systemd-timesyncd[1535]: Initial clock synchronization to Mon 2026-01-19 13:04:01.675947 UTC. Jan 19 13:04:01.677001 systemd-resolved[1303]: Clock change detected. Flushing caches. Jan 19 13:04:01.690912 dockerd[1957]: time="2026-01-19T13:04:01.690844848Z" level=info msg="Completed buildkit initialization" Jan 19 13:04:01.700848 dockerd[1957]: time="2026-01-19T13:04:01.700740535Z" level=info msg="Daemon has completed initialization" Jan 19 13:04:01.701468 dockerd[1957]: time="2026-01-19T13:04:01.701050577Z" level=info msg="API listen on /run/docker.sock" Jan 19 13:04:01.701220 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 19 13:04:01.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:02.930854 containerd[1646]: time="2026-01-19T13:04:02.926985206Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 19 13:04:04.222407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1125724306.mount: Deactivated successfully. 
Jan 19 13:04:06.465617 containerd[1646]: time="2026-01-19T13:04:06.465476462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:06.467382 containerd[1646]: time="2026-01-19T13:04:06.467337661Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 19 13:04:06.472468 containerd[1646]: time="2026-01-19T13:04:06.471806824Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:06.476849 containerd[1646]: time="2026-01-19T13:04:06.476761923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:06.478586 containerd[1646]: time="2026-01-19T13:04:06.478118920Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 3.548504916s" Jan 19 13:04:06.478586 containerd[1646]: time="2026-01-19T13:04:06.478196267Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 19 13:04:06.480664 containerd[1646]: time="2026-01-19T13:04:06.480634038Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 19 13:04:07.848158 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 19 13:04:07.853310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 13:04:08.270128 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 13:04:08.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:08.283018 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 19 13:04:08.283113 kernel: audit: type=1130 audit(1768827848.268:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:08.305380 (kubelet)[2241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 19 13:04:08.414998 kubelet[2241]: E0119 13:04:08.414910 2241 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 19 13:04:08.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 19 13:04:08.421558 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 19 13:04:08.421833 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 19 13:04:08.422539 systemd[1]: kubelet.service: Consumed 442ms CPU time, 110.6M memory peak. Jan 19 13:04:08.426870 kernel: audit: type=1131 audit(1768827848.420:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 19 13:04:09.576110 containerd[1646]: time="2026-01-19T13:04:09.576025866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:09.579164 containerd[1646]: time="2026-01-19T13:04:09.578993911Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 19 13:04:09.580119 containerd[1646]: time="2026-01-19T13:04:09.580063941Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:09.586845 containerd[1646]: time="2026-01-19T13:04:09.585187004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:09.590690 containerd[1646]: time="2026-01-19T13:04:09.590649207Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 3.10987877s" Jan 19 13:04:09.590849 containerd[1646]: time="2026-01-19T13:04:09.590795844Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 19 13:04:09.593505 containerd[1646]: time="2026-01-19T13:04:09.593460097Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 19 13:04:11.922934 containerd[1646]: time="2026-01-19T13:04:11.922855341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:11.924531 containerd[1646]: time="2026-01-19T13:04:11.924247937Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 19 13:04:11.925349 containerd[1646]: time="2026-01-19T13:04:11.925306254Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:11.928985 containerd[1646]: time="2026-01-19T13:04:11.928922226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:11.930556 containerd[1646]: time="2026-01-19T13:04:11.930514635Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id 
\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 2.337009341s" Jan 19 13:04:11.930556 containerd[1646]: time="2026-01-19T13:04:11.930555349Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 19 13:04:11.931955 containerd[1646]: time="2026-01-19T13:04:11.931099594Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 19 13:04:13.871807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1465150961.mount: Deactivated successfully. Jan 19 13:04:15.073117 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 19 13:04:15.084936 kernel: audit: type=1131 audit(1768827855.073:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:15.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:15.097000 audit: BPF prog-id=61 op=UNLOAD Jan 19 13:04:15.100902 kernel: audit: type=1334 audit(1768827855.097:285): prog-id=61 op=UNLOAD Jan 19 13:04:15.109197 containerd[1646]: time="2026-01-19T13:04:15.108134950Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:15.110363 containerd[1646]: time="2026-01-19T13:04:15.110310652Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 19 13:04:15.111298 containerd[1646]: time="2026-01-19T13:04:15.111217899Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:15.114791 containerd[1646]: time="2026-01-19T13:04:15.114737625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:15.116756 containerd[1646]: time="2026-01-19T13:04:15.116669387Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 3.185529558s" Jan 19 13:04:15.116756 containerd[1646]: time="2026-01-19T13:04:15.116722501Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 19 13:04:15.117642 containerd[1646]: time="2026-01-19T13:04:15.117541773Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 19 13:04:16.365677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1570152147.mount: Deactivated successfully. 
Jan 19 13:04:17.794854 containerd[1646]: time="2026-01-19T13:04:17.794081347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:17.796022 containerd[1646]: time="2026-01-19T13:04:17.795983699Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17569900" Jan 19 13:04:17.797250 containerd[1646]: time="2026-01-19T13:04:17.797218100Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:17.801213 containerd[1646]: time="2026-01-19T13:04:17.801180271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:17.803052 containerd[1646]: time="2026-01-19T13:04:17.802994183Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.685386388s" Jan 19 13:04:17.803152 containerd[1646]: time="2026-01-19T13:04:17.803056626Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 19 13:04:17.804833 containerd[1646]: time="2026-01-19T13:04:17.804565059Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 19 13:04:18.597997 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 19 13:04:18.601300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 13:04:18.868693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 13:04:18.873836 kernel: audit: type=1130 audit(1768827858.867:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:18.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:18.895648 (kubelet)[2323]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 19 13:04:18.995714 kubelet[2323]: E0119 13:04:18.995651 2323 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 19 13:04:18.999685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 19 13:04:19.000373 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 19 13:04:19.001086 systemd[1]: kubelet.service: Consumed 265ms CPU time, 111.5M memory peak. 
Jan 19 13:04:18.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 19 13:04:19.009850 kernel: audit: type=1131 audit(1768827858.999:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 19 13:04:19.019111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1646442286.mount: Deactivated successfully. Jan 19 13:04:19.027845 containerd[1646]: time="2026-01-19T13:04:19.026988449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 19 13:04:19.028526 containerd[1646]: time="2026-01-19T13:04:19.028256181Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 19 13:04:19.029762 containerd[1646]: time="2026-01-19T13:04:19.029323328Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 19 13:04:19.031987 containerd[1646]: time="2026-01-19T13:04:19.031924770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 19 13:04:19.033190 containerd[1646]: time="2026-01-19T13:04:19.032975964Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.228360203s" Jan 19 13:04:19.033190 containerd[1646]: time="2026-01-19T13:04:19.033027992Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 19 13:04:19.034840 containerd[1646]: time="2026-01-19T13:04:19.034410390Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 19 13:04:20.124296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4274653129.mount: Deactivated successfully. Jan 19 13:04:21.922439 systemd[1]: Started sshd@8-10.243.74.46:22-188.166.92.220:54384.service - OpenSSH per-connection server daemon (188.166.92.220:54384). Jan 19 13:04:21.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.243.74.46:22-188.166.92.220:54384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:21.934902 kernel: audit: type=1130 audit(1768827861.920:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.243.74.46:22-188.166.92.220:54384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:04:22.145879 sshd[2387]: Connection closed by authenticating user root 188.166.92.220 port 54384 [preauth] Jan 19 13:04:22.145000 audit[2387]: USER_ERR pid=2387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=188.166.92.220 addr=188.166.92.220 terminal=ssh res=failed' Jan 19 13:04:22.158858 kernel: audit: type=1109 audit(1768827862.145:289): pid=2387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=188.166.92.220 addr=188.166.92.220 terminal=ssh res=failed' Jan 19 13:04:22.160137 systemd[1]: sshd@8-10.243.74.46:22-188.166.92.220:54384.service: Deactivated successfully. Jan 19 13:04:22.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.243.74.46:22-188.166.92.220:54384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:22.167127 kernel: audit: type=1131 audit(1768827862.158:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.243.74.46:22-188.166.92.220:54384 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:24.174439 containerd[1646]: time="2026-01-19T13:04:24.174286558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:24.176738 containerd[1646]: time="2026-01-19T13:04:24.176704734Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55728979" Jan 19 13:04:24.177095 containerd[1646]: time="2026-01-19T13:04:24.177033319Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:24.183586 containerd[1646]: time="2026-01-19T13:04:24.183546223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:24.185735 containerd[1646]: time="2026-01-19T13:04:24.185695695Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.151097609s" Jan 19 13:04:24.185872 containerd[1646]: time="2026-01-19T13:04:24.185760897Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 19 13:04:28.211053 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 13:04:28.211335 systemd[1]: kubelet.service: Consumed 265ms CPU time, 111.5M memory peak. Jan 19 13:04:28.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:04:28.224637 kernel: audit: type=1130 audit(1768827868.209:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:28.224766 kernel: audit: type=1131 audit(1768827868.209:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:28.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:28.221066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 13:04:28.269110 systemd[1]: Reload requested from client PID 2421 ('systemctl') (unit session-10.scope)... Jan 19 13:04:28.269178 systemd[1]: Reloading... Jan 19 13:04:28.503854 zram_generator::config[2468]: No configuration found. Jan 19 13:04:28.652483 update_engine[1622]: I20260119 13:04:28.652338 1622 update_attempter.cc:509] Updating boot flags... Jan 19 13:04:28.906582 systemd[1]: Reloading finished in 636 ms. Jan 19 13:04:28.931000 audit: BPF prog-id=65 op=LOAD Jan 19 13:04:28.944957 kernel: audit: type=1334 audit(1768827868.931:293): prog-id=65 op=LOAD Jan 19 13:04:28.931000 audit: BPF prog-id=51 op=UNLOAD Jan 19 13:04:28.931000 audit: BPF prog-id=66 op=LOAD Jan 19 13:04:28.932000 audit: BPF prog-id=67 op=LOAD Jan 19 13:04:28.948000 audit: BPF prog-id=52 op=UNLOAD Jan 19 13:04:28.948000 audit: BPF prog-id=53 op=UNLOAD Jan 19 13:04:28.949000 audit: BPF prog-id=68 op=LOAD Jan 19 13:04:28.953862 kernel: audit: type=1334 audit(1768827868.931:294): prog-id=51 op=UNLOAD Jan 19 13:04:28.953953 kernel: audit: type=1334 audit(1768827868.931:295): prog-id=66 op=LOAD Jan 19 13:04:28.954007 kernel: audit: type=1334 audit(1768827868.932:296): prog-id=67 op=LOAD Jan 19 13:04:28.954120 kernel: audit: type=1334 audit(1768827868.948:297): prog-id=52 op=UNLOAD Jan 19 13:04:28.954171 kernel: audit: type=1334 audit(1768827868.948:298): prog-id=53 op=UNLOAD Jan 19 13:04:28.954230 kernel: audit: type=1334 audit(1768827868.949:299): prog-id=68 op=LOAD Jan 19 13:04:28.954276 kernel: audit: type=1334 audit(1768827868.950:300): prog-id=69 op=LOAD Jan 19 13:04:28.950000 audit: BPF prog-id=69 op=LOAD Jan 19 13:04:28.950000 audit: BPF prog-id=54 op=UNLOAD Jan 19 13:04:28.950000 audit: BPF prog-id=55 op=UNLOAD Jan 19 13:04:28.951000 audit: BPF prog-id=70 op=LOAD Jan 19 13:04:28.951000 audit: BPF prog-id=44 op=UNLOAD Jan 19 13:04:28.951000 audit: BPF prog-id=71 op=LOAD Jan 19 13:04:28.951000 audit: BPF prog-id=72 op=LOAD Jan 19 13:04:28.951000 audit: BPF prog-id=45 op=UNLOAD Jan 19 13:04:28.951000 audit: BPF prog-id=46 op=UNLOAD Jan 19 13:04:28.954000 audit: BPF prog-id=73 op=LOAD Jan 19 13:04:28.954000 audit: BPF prog-id=56 op=UNLOAD Jan 19 13:04:28.957000 audit: BPF prog-id=74 op=LOAD Jan 19 13:04:28.957000 audit: BPF prog-id=64 op=UNLOAD Jan 19 13:04:28.959000 audit: BPF prog-id=75 op=LOAD Jan 19 13:04:28.959000 audit: BPF prog-id=41 op=UNLOAD Jan 19 13:04:28.959000 audit: BPF prog-id=76 op=LOAD Jan 19 13:04:28.959000 audit: BPF prog-id=77 op=LOAD Jan 19 13:04:28.959000 audit: BPF prog-id=42 op=UNLOAD Jan 19 13:04:28.959000 audit: BPF prog-id=43 op=UNLOAD Jan 19 13:04:28.962000 audit: BPF prog-id=78 op=LOAD Jan 19 13:04:28.962000 audit: BPF prog-id=57 
op=UNLOAD Jan 19 13:04:28.964000 audit: BPF prog-id=79 op=LOAD Jan 19 13:04:28.964000 audit: BPF prog-id=47 op=UNLOAD Jan 19 13:04:28.966000 audit: BPF prog-id=80 op=LOAD Jan 19 13:04:28.972000 audit: BPF prog-id=58 op=UNLOAD Jan 19 13:04:28.972000 audit: BPF prog-id=81 op=LOAD Jan 19 13:04:28.972000 audit: BPF prog-id=82 op=LOAD Jan 19 13:04:28.972000 audit: BPF prog-id=59 op=UNLOAD Jan 19 13:04:28.972000 audit: BPF prog-id=60 op=UNLOAD Jan 19 13:04:28.977000 audit: BPF prog-id=83 op=LOAD Jan 19 13:04:28.977000 audit: BPF prog-id=48 op=UNLOAD Jan 19 13:04:28.977000 audit: BPF prog-id=84 op=LOAD Jan 19 13:04:28.977000 audit: BPF prog-id=85 op=LOAD Jan 19 13:04:28.977000 audit: BPF prog-id=49 op=UNLOAD Jan 19 13:04:28.977000 audit: BPF prog-id=50 op=UNLOAD Jan 19 13:04:29.014436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 13:04:29.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:29.032145 (kubelet)[2544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 19 13:04:29.081748 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 13:04:29.093748 systemd[1]: kubelet.service: Deactivated successfully. Jan 19 13:04:29.094204 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 13:04:29.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:29.094295 systemd[1]: kubelet.service: Consumed 351ms CPU time, 104.2M memory peak. Jan 19 13:04:29.098961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 13:04:29.310924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 13:04:29.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:29.323306 (kubelet)[2559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 19 13:04:29.401348 kubelet[2559]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 19 13:04:29.401348 kubelet[2559]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 19 13:04:29.401348 kubelet[2559]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 19 13:04:29.403793 kubelet[2559]: I0119 13:04:29.403695 2559 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 19 13:04:30.040172 kubelet[2559]: I0119 13:04:30.040067 2559 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 19 13:04:30.040172 kubelet[2559]: I0119 13:04:30.040118 2559 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 19 13:04:30.040533 kubelet[2559]: I0119 13:04:30.040490 2559 server.go:954] "Client rotation is on, will bootstrap in background" Jan 19 13:04:30.082260 kubelet[2559]: E0119 13:04:30.082180 2559 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.243.74.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.243.74.46:6443: connect: connection refused" logger="UnhandledError" Jan 19 13:04:30.084304 kubelet[2559]: I0119 13:04:30.084047 2559 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 19 13:04:30.112596 kubelet[2559]: I0119 13:04:30.112544 2559 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 19 13:04:30.123867 kubelet[2559]: I0119 13:04:30.123835 2559 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 19 13:04:30.126404 kubelet[2559]: I0119 13:04:30.125943 2559 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 19 13:04:30.126404 kubelet[2559]: I0119 13:04:30.125999 2559 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-hsmf0.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 19 13:04:30.128522 kubelet[2559]: I0119 13:04:30.128496 2559 topology_manager.go:138] "Creating topology manager with none policy" Jan 
19 13:04:30.128644 kubelet[2559]: I0119 13:04:30.128626 2559 container_manager_linux.go:304] "Creating device plugin manager" Jan 19 13:04:30.131155 kubelet[2559]: I0119 13:04:30.131132 2559 state_mem.go:36] "Initialized new in-memory state store" Jan 19 13:04:30.135221 kubelet[2559]: I0119 13:04:30.135196 2559 kubelet.go:446] "Attempting to sync node with API server" Jan 19 13:04:30.135389 kubelet[2559]: I0119 13:04:30.135355 2559 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 19 13:04:30.137354 kubelet[2559]: I0119 13:04:30.137213 2559 kubelet.go:352] "Adding apiserver pod source" Jan 19 13:04:30.137354 kubelet[2559]: I0119 13:04:30.137267 2559 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 19 13:04:30.147069 kubelet[2559]: W0119 13:04:30.146951 2559 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.74.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hsmf0.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.74.46:6443: connect: connection refused Jan 19 13:04:30.147069 kubelet[2559]: E0119 13:04:30.147063 2559 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.243.74.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hsmf0.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.243.74.46:6443: connect: connection refused" logger="UnhandledError" Jan 19 13:04:30.149511 kubelet[2559]: I0119 13:04:30.148749 2559 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 19 13:04:30.152640 kubelet[2559]: I0119 13:04:30.152611 2559 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 19 13:04:30.159838 kubelet[2559]: W0119 13:04:30.159244 2559 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 19 13:04:30.167353 kubelet[2559]: I0119 13:04:30.167249 2559 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 19 13:04:30.167353 kubelet[2559]: I0119 13:04:30.167312 2559 server.go:1287] "Started kubelet" Jan 19 13:04:30.167918 kubelet[2559]: W0119 13:04:30.167871 2559 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.74.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.243.74.46:6443: connect: connection refused Jan 19 13:04:30.168101 kubelet[2559]: E0119 13:04:30.168067 2559 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.243.74.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.243.74.46:6443: connect: connection refused" logger="UnhandledError" Jan 19 13:04:30.168295 kubelet[2559]: I0119 13:04:30.168257 2559 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 19 13:04:30.171780 kubelet[2559]: I0119 13:04:30.171673 2559 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 19 13:04:30.172916 kubelet[2559]: I0119 13:04:30.172448 2559 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 19 13:04:30.177098 kubelet[2559]: E0119 13:04:30.173698 2559 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.243.74.46:6443/api/v1/namespaces/default/events\": dial tcp 10.243.74.46:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-hsmf0.gb1.brightbox.com.188c23938621a6ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-hsmf0.gb1.brightbox.com,UID:srv-hsmf0.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-hsmf0.gb1.brightbox.com,},FirstTimestamp:2026-01-19 13:04:30.167279342 +0000 UTC m=+0.836653558,LastTimestamp:2026-01-19 13:04:30.167279342 +0000 UTC m=+0.836653558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-hsmf0.gb1.brightbox.com,}" Jan 19 13:04:30.182630 kubelet[2559]: I0119 13:04:30.181114 2559 server.go:479] "Adding debug handlers to kubelet server" Jan 19 13:04:30.182630 kubelet[2559]: I0119 13:04:30.181303 2559 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 19 13:04:30.185175 kubelet[2559]: I0119 13:04:30.184799 2559 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 19 13:04:30.191346 kubelet[2559]: E0119 13:04:30.191131 2559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.74.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hsmf0.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.74.46:6443: connect: connection refused" interval="200ms" Jan 19 13:04:30.191594 kubelet[2559]: E0119 13:04:30.191544 2559 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-hsmf0.gb1.brightbox.com\" not found" Jan 19 13:04:30.191737 kubelet[2559]: I0119 13:04:30.191614 2559 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 19 13:04:30.192843 
kubelet[2559]: I0119 13:04:30.191802 2559 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 19 13:04:30.192843 kubelet[2559]: I0119 13:04:30.191959 2559 reconciler.go:26] "Reconciler: start to sync state" Jan 19 13:04:30.192843 kubelet[2559]: W0119 13:04:30.192423 2559 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.74.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.74.46:6443: connect: connection refused Jan 19 13:04:30.192843 kubelet[2559]: E0119 13:04:30.192475 2559 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.243.74.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.74.46:6443: connect: connection refused" logger="UnhandledError" Jan 19 13:04:30.194521 kubelet[2559]: E0119 13:04:30.194494 2559 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 19 13:04:30.196462 kubelet[2559]: I0119 13:04:30.196433 2559 factory.go:221] Registration of the containerd container factory successfully Jan 19 13:04:30.196462 kubelet[2559]: I0119 13:04:30.196457 2559 factory.go:221] Registration of the systemd container factory successfully Jan 19 13:04:30.196584 kubelet[2559]: I0119 13:04:30.196545 2559 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 19 13:04:30.195000 audit[2571]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:30.195000 audit[2571]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc0f589a80 a2=0 a3=0 items=0 ppid=2559 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:30.195000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 19 13:04:30.208000 audit[2573]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:30.208000 audit[2573]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2d4560d0 a2=0 a3=0 items=0 ppid=2559 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:30.208000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 19 13:04:30.215000 audit[2575]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2575 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:30.215000 audit[2575]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff6fc55090 a2=0 a3=0 items=0 ppid=2559 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:30.215000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 19 13:04:30.219000 audit[2577]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:30.219000 audit[2577]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffeb4b94990 a2=0 a3=0 items=0 ppid=2559 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:30.219000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 19 13:04:30.230580 kubelet[2559]: I0119 13:04:30.230142 2559 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 19 13:04:30.230580 kubelet[2559]: I0119 13:04:30.230182 2559 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 19 13:04:30.230580 kubelet[2559]: I0119 13:04:30.230225 2559 state_mem.go:36] "Initialized new in-memory state store" Jan 19 13:04:30.231000 audit[2582]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:30.231000 audit[2582]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffef1379970 a2=0 a3=0 items=0 ppid=2559 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:30.231000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 19 13:04:30.234124 kubelet[2559]: I0119 13:04:30.234052 2559 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 19 13:04:30.234734 kubelet[2559]: I0119 13:04:30.234394 2559 policy_none.go:49] "None policy: Start" Jan 19 13:04:30.234734 kubelet[2559]: I0119 13:04:30.234436 2559 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 19 13:04:30.234734 kubelet[2559]: I0119 13:04:30.234473 2559 state_mem.go:35] "Initializing new in-memory state store" Jan 19 13:04:30.235000 audit[2583]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2583 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:30.235000 audit[2583]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcd2758ea0 a2=0 a3=0 items=0 ppid=2559 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:30.235000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 19 13:04:30.237000 audit[2584]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:30.237000 audit[2584]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcfe6754c0 a2=0 a3=0 items=0 ppid=2559 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:30.237000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 19 13:04:30.240196 kubelet[2559]: I0119 13:04:30.239896 2559 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 19 13:04:30.240196 kubelet[2559]: I0119 13:04:30.239958 2559 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 19 13:04:30.240196 kubelet[2559]: I0119 13:04:30.240001 2559 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 19 13:04:30.240897 kubelet[2559]: I0119 13:04:30.240877 2559 kubelet.go:2382] "Starting kubelet main sync loop" Jan 19 13:04:30.241082 kubelet[2559]: E0119 13:04:30.241056 2559 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 19 13:04:30.239000 audit[2585]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2585 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:30.239000 audit[2585]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8241ba50 a2=0 a3=0 items=0 ppid=2559 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:30.239000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 19 13:04:30.241000 audit[2586]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2586 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:30.241000 audit[2586]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf4a43d00 a2=0 a3=0 items=0 ppid=2559 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:30.241000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 19 13:04:30.242000 audit[2587]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:30.242000 audit[2587]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf44b3e50 a2=0 a3=0 items=0 ppid=2559 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:30.242000 audit[2588]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2588 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:30.242000 audit[2588]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe74f228a0 a2=0 a3=0 items=0 ppid=2559 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:30.242000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 19 13:04:30.242000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 19 13:04:30.247059 kubelet[2559]: W0119 13:04:30.246477 2559 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.74.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.74.46:6443: connect: connection refused Jan 19 13:04:30.247059 kubelet[2559]: E0119 13:04:30.246526 2559 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.243.74.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.243.74.46:6443: connect: connection refused" logger="UnhandledError" Jan 19 13:04:30.246000 audit[2589]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2589 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:30.246000 audit[2589]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee2847df0 a2=0 a3=0 items=0 ppid=2559 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:30.246000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 19 13:04:30.254509 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 19 13:04:30.270371 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 19 13:04:30.276785 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 19 13:04:30.293873 kubelet[2559]: E0119 13:04:30.292424 2559 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-hsmf0.gb1.brightbox.com\" not found" Jan 19 13:04:30.296326 kubelet[2559]: I0119 13:04:30.296050 2559 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 19 13:04:30.296326 kubelet[2559]: I0119 13:04:30.296322 2559 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 19 13:04:30.296463 kubelet[2559]: I0119 13:04:30.296347 2559 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 19 13:04:30.297314 kubelet[2559]: I0119 13:04:30.297068 2559 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 19 13:04:30.300850 kubelet[2559]: E0119 13:04:30.300139 2559 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 19 13:04:30.300850 kubelet[2559]: E0119 13:04:30.300222 2559 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-hsmf0.gb1.brightbox.com\" not found" Jan 19 13:04:30.360123 systemd[1]: Created slice kubepods-burstable-pod6bd5be42d6aec9c04dbc44183a216064.slice - libcontainer container kubepods-burstable-pod6bd5be42d6aec9c04dbc44183a216064.slice. Jan 19 13:04:30.372213 kubelet[2559]: E0119 13:04:30.372167 2559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hsmf0.gb1.brightbox.com\" not found" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:30.376852 systemd[1]: Created slice kubepods-burstable-pod3f368ea403e060ce797658771165efe9.slice - libcontainer container kubepods-burstable-pod3f368ea403e060ce797658771165efe9.slice. Jan 19 13:04:30.381235 kubelet[2559]: E0119 13:04:30.381195 2559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hsmf0.gb1.brightbox.com\" not found" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:30.385271 systemd[1]: Created slice kubepods-burstable-pod07293d87a954ad5874790c4ea094ae13.slice - libcontainer container kubepods-burstable-pod07293d87a954ad5874790c4ea094ae13.slice. 
Jan 19 13:04:30.388578 kubelet[2559]: E0119 13:04:30.388282 2559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hsmf0.gb1.brightbox.com\" not found" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:30.391726 kubelet[2559]: E0119 13:04:30.391690 2559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.74.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hsmf0.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.74.46:6443: connect: connection refused" interval="400ms" Jan 19 13:04:30.402837 kubelet[2559]: I0119 13:04:30.402531 2559 kubelet_node_status.go:75] "Attempting to register node" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:30.403363 kubelet[2559]: E0119 13:04:30.402988 2559 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.74.46:6443/api/v1/nodes\": dial tcp 10.243.74.46:6443: connect: connection refused" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:30.493688 kubelet[2559]: I0119 13:04:30.493623 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6bd5be42d6aec9c04dbc44183a216064-ca-certs\") pod \"kube-apiserver-srv-hsmf0.gb1.brightbox.com\" (UID: \"6bd5be42d6aec9c04dbc44183a216064\") " pod="kube-system/kube-apiserver-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:30.493688 kubelet[2559]: I0119 13:04:30.493695 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6bd5be42d6aec9c04dbc44183a216064-usr-share-ca-certificates\") pod \"kube-apiserver-srv-hsmf0.gb1.brightbox.com\" (UID: \"6bd5be42d6aec9c04dbc44183a216064\") " pod="kube-system/kube-apiserver-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:30.493973 kubelet[2559]: I0119 13:04:30.493740 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f368ea403e060ce797658771165efe9-flexvolume-dir\") pod \"kube-controller-manager-srv-hsmf0.gb1.brightbox.com\" (UID: \"3f368ea403e060ce797658771165efe9\") " pod="kube-system/kube-controller-manager-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:30.493973 kubelet[2559]: I0119 13:04:30.493767 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f368ea403e060ce797658771165efe9-k8s-certs\") pod \"kube-controller-manager-srv-hsmf0.gb1.brightbox.com\" (UID: \"3f368ea403e060ce797658771165efe9\") " pod="kube-system/kube-controller-manager-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:30.493973 kubelet[2559]: I0119 13:04:30.493794 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f368ea403e060ce797658771165efe9-kubeconfig\") pod \"kube-controller-manager-srv-hsmf0.gb1.brightbox.com\" (UID: \"3f368ea403e060ce797658771165efe9\") " pod="kube-system/kube-controller-manager-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:30.493973 kubelet[2559]: I0119 13:04:30.493847 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f368ea403e060ce797658771165efe9-usr-share-ca-certificates\") pod 
\"kube-controller-manager-srv-hsmf0.gb1.brightbox.com\" (UID: \"3f368ea403e060ce797658771165efe9\") " pod="kube-system/kube-controller-manager-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:30.493973 kubelet[2559]: I0119 13:04:30.493881 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07293d87a954ad5874790c4ea094ae13-kubeconfig\") pod \"kube-scheduler-srv-hsmf0.gb1.brightbox.com\" (UID: \"07293d87a954ad5874790c4ea094ae13\") " pod="kube-system/kube-scheduler-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:30.494280 kubelet[2559]: I0119 13:04:30.493909 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6bd5be42d6aec9c04dbc44183a216064-k8s-certs\") pod \"kube-apiserver-srv-hsmf0.gb1.brightbox.com\" (UID: \"6bd5be42d6aec9c04dbc44183a216064\") " pod="kube-system/kube-apiserver-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:30.494280 kubelet[2559]: I0119 13:04:30.493939 2559 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f368ea403e060ce797658771165efe9-ca-certs\") pod \"kube-controller-manager-srv-hsmf0.gb1.brightbox.com\" (UID: \"3f368ea403e060ce797658771165efe9\") " pod="kube-system/kube-controller-manager-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:30.606322 kubelet[2559]: I0119 13:04:30.605930 2559 kubelet_node_status.go:75] "Attempting to register node" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:30.606633 kubelet[2559]: E0119 13:04:30.606601 2559 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.74.46:6443/api/v1/nodes\": dial tcp 10.243.74.46:6443: connect: connection refused" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:30.675387 containerd[1646]: time="2026-01-19T13:04:30.675329040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-hsmf0.gb1.brightbox.com,Uid:6bd5be42d6aec9c04dbc44183a216064,Namespace:kube-system,Attempt:0,}" Jan 19 13:04:30.683336 containerd[1646]: time="2026-01-19T13:04:30.683279869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-hsmf0.gb1.brightbox.com,Uid:3f368ea403e060ce797658771165efe9,Namespace:kube-system,Attempt:0,}" Jan 19 13:04:30.690673 containerd[1646]: time="2026-01-19T13:04:30.690609101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-hsmf0.gb1.brightbox.com,Uid:07293d87a954ad5874790c4ea094ae13,Namespace:kube-system,Attempt:0,}" Jan 19 13:04:30.792787 kubelet[2559]: E0119 13:04:30.792704 2559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.74.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hsmf0.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.74.46:6443: connect: connection refused" interval="800ms" Jan 19 13:04:30.836217 containerd[1646]: time="2026-01-19T13:04:30.836117225Z" level=info msg="connecting to shim 069b48910058c1e0cf065ab101b9a0eea13687d102b788993dc19a66fb6c21f8" address="unix:///run/containerd/s/f65245cc4e5313868e3cbb0d58e895fb538d88894a5fd2c3fd5c6c46cb1c3b3c" namespace=k8s.io protocol=ttrpc version=3 Jan 19 13:04:30.838998 containerd[1646]: time="2026-01-19T13:04:30.838961975Z" level=info msg="connecting to shim ecd05d0cd6a8e8ffc33c74604b9d0f4505ce1e0c14ad13ace686227b88ab1747" 
address="unix:///run/containerd/s/b8fb61be6ca9f4864fc29919ee8215cadeca294c089ee2de465e06e76b45a36b" namespace=k8s.io protocol=ttrpc version=3 Jan 19 13:04:30.843760 containerd[1646]: time="2026-01-19T13:04:30.843695162Z" level=info msg="connecting to shim 9aa18f9c2ee1d8ebc0f7220fce8ea1be2bdc40d4f501f3fad94e56a20cedf3e9" address="unix:///run/containerd/s/ab9bce6fd97274f6c7fa5212f1c8a80416aedf9d2361ea2f0e879dd050bdbdec" namespace=k8s.io protocol=ttrpc version=3 Jan 19 13:04:30.980141 systemd[1]: Started cri-containerd-069b48910058c1e0cf065ab101b9a0eea13687d102b788993dc19a66fb6c21f8.scope - libcontainer container 069b48910058c1e0cf065ab101b9a0eea13687d102b788993dc19a66fb6c21f8. Jan 19 13:04:30.984607 systemd[1]: Started cri-containerd-9aa18f9c2ee1d8ebc0f7220fce8ea1be2bdc40d4f501f3fad94e56a20cedf3e9.scope - libcontainer container 9aa18f9c2ee1d8ebc0f7220fce8ea1be2bdc40d4f501f3fad94e56a20cedf3e9. Jan 19 13:04:30.988968 systemd[1]: Started cri-containerd-ecd05d0cd6a8e8ffc33c74604b9d0f4505ce1e0c14ad13ace686227b88ab1747.scope - libcontainer container ecd05d0cd6a8e8ffc33c74604b9d0f4505ce1e0c14ad13ace686227b88ab1747. Jan 19 13:04:31.011950 kubelet[2559]: I0119 13:04:31.011674 2559 kubelet_node_status.go:75] "Attempting to register node" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:31.015935 kubelet[2559]: E0119 13:04:31.014755 2559 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.74.46:6443/api/v1/nodes\": dial tcp 10.243.74.46:6443: connect: connection refused" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:31.030000 audit: BPF prog-id=86 op=LOAD Jan 19 13:04:31.035000 audit: BPF prog-id=87 op=LOAD Jan 19 13:04:31.036000 audit: BPF prog-id=88 op=LOAD Jan 19 13:04:31.036000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2617 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036396234383931303035386331653063663036356162313031623961 Jan 19 13:04:31.037000 audit: BPF prog-id=88 op=UNLOAD Jan 19 13:04:31.037000 audit: BPF prog-id=89 op=LOAD Jan 19 13:04:31.037000 audit[2644]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=2619 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.037000 audit[2652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2617 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036396234383931303035386331653063663036356162313031623961 Jan 19 13:04:31.037000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563643035643063643661386538666663333363373436303462396430 Jan 19 13:04:31.038000 audit: BPF prog-id=89 op=UNLOAD Jan 19 13:04:31.038000 audit[2644]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563643035643063643661386538666663333363373436303462396430 Jan 19 13:04:31.038000 audit: BPF prog-id=90 op=LOAD Jan 19 13:04:31.038000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2617 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.039000 audit: BPF prog-id=91 op=LOAD Jan 19 13:04:31.039000 audit[2644]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2619 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036396234383931303035386331653063663036356162313031623961 Jan 19 13:04:31.039000 audit: BPF prog-id=92 op=LOAD Jan 19 13:04:31.039000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2617 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036396234383931303035386331653063663036356162313031623961 Jan 19 13:04:31.039000 audit: BPF prog-id=92 op=UNLOAD Jan 19 13:04:31.039000 audit[2652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2617 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036396234383931303035386331653063663036356162313031623961 Jan 19 13:04:31.039000 audit: BPF prog-id=90 op=UNLOAD Jan 19 13:04:31.039000 audit[2652]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2617 pid=2652 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036396234383931303035386331653063663036356162313031623961 Jan 19 13:04:31.039000 audit: BPF prog-id=93 op=LOAD Jan 19 13:04:31.039000 audit[2652]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2617 pid=2652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036396234383931303035386331653063663036356162313031623961 Jan 19 13:04:31.040000 audit: BPF prog-id=94 op=LOAD Jan 19 13:04:31.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563643035643063643661386538666663333363373436303462396430 Jan 19 13:04:31.040000 audit: BPF prog-id=95 op=LOAD Jan 19 13:04:31.040000 audit[2644]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=2619 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563643035643063643661386538666663333363373436303462396430 Jan 19 13:04:31.041000 audit: BPF prog-id=95 op=UNLOAD Jan 19 13:04:31.041000 audit[2644]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.041000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563643035643063643661386538666663333363373436303462396430 Jan 19 13:04:31.041000 audit: BPF prog-id=91 op=UNLOAD Jan 19 13:04:31.041000 audit[2644]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.041000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563643035643063643661386538666663333363373436303462396430 
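
The audit SYSCALL records emitted while the three sandbox containers start show runc loading and releasing BPF programs; arch=c000003e marks x86_64, so the syscall numbers can be read straight off the x86_64 table. A small sketch resolving the numbers that appear above (the names are standard; reading them as iptables netlink traffic and runc container setup is an interpretation of these particular records):

```python
# Resolve the syscall numbers seen in the audit records above (arch=c000003e = x86_64).
X86_64_SYSCALLS = {3: "close", 46: "sendmsg", 321: "bpf"}

for field in ("syscall=321", "syscall=3", "syscall=46"):
    nr = int(field.split("=")[1])
    # 46/sendmsg: iptables (xtables-nft-multi) pushing netlink messages;
    # 321/bpf + 3/close: runc loading and releasing BPF programs while
    # creating the kube-apiserver, controller-manager, and scheduler sandboxes.
    print(f"{field} -> {X86_64_SYSCALLS.get(nr, 'unknown')}")
```
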
Jan 19 13:04:31.041000 audit: BPF prog-id=96 op=LOAD Jan 19 13:04:31.041000 audit[2644]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=2619 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.041000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563643035643063643661386538666663333363373436303462396430 Jan 19 13:04:31.043000 audit: BPF prog-id=97 op=LOAD Jan 19 13:04:31.043000 audit[2654]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2618 pid=2654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961613138663963326565316438656263306637323230666365386561 Jan 19 13:04:31.044000 audit: BPF prog-id=97 op=UNLOAD Jan 19 13:04:31.044000 audit[2654]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2618 pid=2654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961613138663963326565316438656263306637323230666365386561 Jan 19 13:04:31.044000 audit: BPF prog-id=98 op=LOAD Jan 19 13:04:31.044000 audit[2654]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2618 pid=2654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961613138663963326565316438656263306637323230666365386561 Jan 19 13:04:31.044000 audit: BPF prog-id=99 op=LOAD Jan 19 13:04:31.044000 audit[2654]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2618 pid=2654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961613138663963326565316438656263306637323230666365386561 Jan 19 13:04:31.044000 audit: BPF prog-id=99 op=UNLOAD Jan 19 13:04:31.044000 audit[2654]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 
a1=0 a2=0 a3=0 items=0 ppid=2618 pid=2654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961613138663963326565316438656263306637323230666365386561 Jan 19 13:04:31.044000 audit: BPF prog-id=98 op=UNLOAD Jan 19 13:04:31.044000 audit[2654]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2618 pid=2654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961613138663963326565316438656263306637323230666365386561 Jan 19 13:04:31.044000 audit: BPF prog-id=100 op=LOAD Jan 19 13:04:31.044000 audit[2654]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2618 pid=2654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3961613138663963326565316438656263306637323230666365386561 Jan 19 13:04:31.154729 containerd[1646]: time="2026-01-19T13:04:31.154664934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-hsmf0.gb1.brightbox.com,Uid:07293d87a954ad5874790c4ea094ae13,Namespace:kube-system,Attempt:0,} returns sandbox id \"069b48910058c1e0cf065ab101b9a0eea13687d102b788993dc19a66fb6c21f8\"" Jan 19 13:04:31.171020 kubelet[2559]: W0119 13:04:31.170922 2559 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.243.74.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hsmf0.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.243.74.46:6443: connect: connection refused Jan 19 13:04:31.171192 kubelet[2559]: E0119 13:04:31.171031 2559 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.243.74.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hsmf0.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.243.74.46:6443: connect: connection refused" logger="UnhandledError" Jan 19 13:04:31.172864 containerd[1646]: time="2026-01-19T13:04:31.172449287Z" level=info msg="CreateContainer within sandbox \"069b48910058c1e0cf065ab101b9a0eea13687d102b788993dc19a66fb6c21f8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 19 13:04:31.179679 containerd[1646]: time="2026-01-19T13:04:31.179634778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-hsmf0.gb1.brightbox.com,Uid:6bd5be42d6aec9c04dbc44183a216064,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"ecd05d0cd6a8e8ffc33c74604b9d0f4505ce1e0c14ad13ace686227b88ab1747\"" Jan 19 13:04:31.185736 containerd[1646]: time="2026-01-19T13:04:31.184497558Z" level=info msg="CreateContainer within sandbox \"ecd05d0cd6a8e8ffc33c74604b9d0f4505ce1e0c14ad13ace686227b88ab1747\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 19 13:04:31.189513 containerd[1646]: time="2026-01-19T13:04:31.189480520Z" level=info msg="Container f2c2d2dce0c7d36bb71874bb714bfe1c079dd9880444efb8984db3ce16c51900: CDI devices from CRI Config.CDIDevices: []" Jan 19 13:04:31.195035 containerd[1646]: time="2026-01-19T13:04:31.194991482Z" level=info msg="Container dfaca02a0f1b5c498e62152adfcd75fc7837612ad75655392be1c4e4e8ed6f7c: CDI devices from CRI Config.CDIDevices: []" Jan 19 13:04:31.197219 containerd[1646]: time="2026-01-19T13:04:31.197187579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-hsmf0.gb1.brightbox.com,Uid:3f368ea403e060ce797658771165efe9,Namespace:kube-system,Attempt:0,} returns sandbox id \"9aa18f9c2ee1d8ebc0f7220fce8ea1be2bdc40d4f501f3fad94e56a20cedf3e9\"" Jan 19 13:04:31.200871 containerd[1646]: time="2026-01-19T13:04:31.200839882Z" level=info msg="CreateContainer within sandbox \"9aa18f9c2ee1d8ebc0f7220fce8ea1be2bdc40d4f501f3fad94e56a20cedf3e9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 19 13:04:31.204583 containerd[1646]: time="2026-01-19T13:04:31.204515073Z" level=info msg="CreateContainer within sandbox \"069b48910058c1e0cf065ab101b9a0eea13687d102b788993dc19a66fb6c21f8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f2c2d2dce0c7d36bb71874bb714bfe1c079dd9880444efb8984db3ce16c51900\"" Jan 19 13:04:31.206995 containerd[1646]: time="2026-01-19T13:04:31.206962605Z" level=info msg="CreateContainer within sandbox \"ecd05d0cd6a8e8ffc33c74604b9d0f4505ce1e0c14ad13ace686227b88ab1747\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dfaca02a0f1b5c498e62152adfcd75fc7837612ad75655392be1c4e4e8ed6f7c\"" Jan 19 13:04:31.207282 containerd[1646]: time="2026-01-19T13:04:31.207231219Z" level=info msg="StartContainer for \"f2c2d2dce0c7d36bb71874bb714bfe1c079dd9880444efb8984db3ce16c51900\"" Jan 19 13:04:31.207764 containerd[1646]: time="2026-01-19T13:04:31.207734782Z" level=info msg="StartContainer for \"dfaca02a0f1b5c498e62152adfcd75fc7837612ad75655392be1c4e4e8ed6f7c\"" Jan 19 13:04:31.209305 containerd[1646]: time="2026-01-19T13:04:31.209273324Z" level=info msg="connecting to shim dfaca02a0f1b5c498e62152adfcd75fc7837612ad75655392be1c4e4e8ed6f7c" address="unix:///run/containerd/s/b8fb61be6ca9f4864fc29919ee8215cadeca294c089ee2de465e06e76b45a36b" protocol=ttrpc version=3 Jan 19 13:04:31.209831 containerd[1646]: time="2026-01-19T13:04:31.209295480Z" level=info msg="connecting to shim f2c2d2dce0c7d36bb71874bb714bfe1c079dd9880444efb8984db3ce16c51900" address="unix:///run/containerd/s/f65245cc4e5313868e3cbb0d58e895fb538d88894a5fd2c3fd5c6c46cb1c3b3c" protocol=ttrpc version=3 Jan 19 13:04:31.217487 containerd[1646]: time="2026-01-19T13:04:31.217442720Z" level=info msg="Container 53dd476be396fa1cb1a5855565dbe2cdc24e217697f753518c7af89e10fe8d1c: CDI devices from CRI Config.CDIDevices: []" Jan 19 13:04:31.224557 containerd[1646]: time="2026-01-19T13:04:31.224511129Z" level=info msg="CreateContainer within sandbox \"9aa18f9c2ee1d8ebc0f7220fce8ea1be2bdc40d4f501f3fad94e56a20cedf3e9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"53dd476be396fa1cb1a5855565dbe2cdc24e217697f753518c7af89e10fe8d1c\"" Jan 19 13:04:31.225425 containerd[1646]: time="2026-01-19T13:04:31.225395891Z" level=info msg="StartContainer for \"53dd476be396fa1cb1a5855565dbe2cdc24e217697f753518c7af89e10fe8d1c\"" Jan 19 13:04:31.228664 containerd[1646]: time="2026-01-19T13:04:31.228615566Z" level=info msg="connecting to shim 53dd476be396fa1cb1a5855565dbe2cdc24e217697f753518c7af89e10fe8d1c" address="unix:///run/containerd/s/ab9bce6fd97274f6c7fa5212f1c8a80416aedf9d2361ea2f0e879dd050bdbdec" protocol=ttrpc version=3 Jan 19 13:04:31.254342 systemd[1]: Started cri-containerd-dfaca02a0f1b5c498e62152adfcd75fc7837612ad75655392be1c4e4e8ed6f7c.scope - libcontainer container dfaca02a0f1b5c498e62152adfcd75fc7837612ad75655392be1c4e4e8ed6f7c. Jan 19 13:04:31.266254 systemd[1]: Started cri-containerd-f2c2d2dce0c7d36bb71874bb714bfe1c079dd9880444efb8984db3ce16c51900.scope - libcontainer container f2c2d2dce0c7d36bb71874bb714bfe1c079dd9880444efb8984db3ce16c51900. Jan 19 13:04:31.304368 systemd[1]: Started cri-containerd-53dd476be396fa1cb1a5855565dbe2cdc24e217697f753518c7af89e10fe8d1c.scope - libcontainer container 53dd476be396fa1cb1a5855565dbe2cdc24e217697f753518c7af89e10fe8d1c. Jan 19 13:04:31.309000 audit: BPF prog-id=101 op=LOAD Jan 19 13:04:31.311000 audit: BPF prog-id=102 op=LOAD Jan 19 13:04:31.311000 audit[2732]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2619 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466616361303261306631623563343938653632313532616466636437 Jan 19 13:04:31.311000 audit: BPF prog-id=102 op=UNLOAD Jan 19 13:04:31.311000 audit[2732]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466616361303261306631623563343938653632313532616466636437 Jan 19 13:04:31.312000 audit: BPF prog-id=103 op=LOAD Jan 19 13:04:31.312000 audit[2732]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2619 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466616361303261306631623563343938653632313532616466636437 Jan 19 13:04:31.312000 audit: BPF prog-id=104 op=LOAD Jan 19 13:04:31.312000 audit[2732]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2619 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466616361303261306631623563343938653632313532616466636437 Jan 19 13:04:31.312000 audit: BPF prog-id=104 op=UNLOAD Jan 19 13:04:31.312000 audit[2732]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466616361303261306631623563343938653632313532616466636437 Jan 19 13:04:31.312000 audit: BPF prog-id=103 op=UNLOAD Jan 19 13:04:31.312000 audit[2732]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2619 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466616361303261306631623563343938653632313532616466636437 Jan 19 13:04:31.312000 audit: BPF prog-id=105 op=LOAD Jan 19 13:04:31.312000 audit[2732]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2619 pid=2732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466616361303261306631623563343938653632313532616466636437 Jan 19 13:04:31.325000 audit: BPF prog-id=106 op=LOAD Jan 19 13:04:31.326000 audit: BPF prog-id=107 op=LOAD Jan 19 13:04:31.326000 audit[2733]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2617 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.326000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632633264326463653063376433366262373138373462623731346266 Jan 19 13:04:31.327000 audit: BPF prog-id=107 op=UNLOAD Jan 19 13:04:31.327000 audit[2733]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2617 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632633264326463653063376433366262373138373462623731346266 Jan 19 13:04:31.327000 audit: BPF prog-id=108 op=LOAD Jan 19 13:04:31.327000 audit[2733]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2617 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632633264326463653063376433366262373138373462623731346266 Jan 19 13:04:31.327000 audit: BPF prog-id=109 op=LOAD Jan 19 13:04:31.327000 audit[2733]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2617 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632633264326463653063376433366262373138373462623731346266 Jan 19 13:04:31.327000 audit: BPF prog-id=109 op=UNLOAD Jan 19 13:04:31.327000 audit[2733]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2617 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632633264326463653063376433366262373138373462623731346266 Jan 19 13:04:31.327000 audit: BPF prog-id=108 op=UNLOAD Jan 19 13:04:31.327000 audit[2733]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2617 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632633264326463653063376433366262373138373462623731346266 Jan 19 13:04:31.327000 audit: BPF prog-id=110 op=LOAD Jan 19 13:04:31.327000 audit[2733]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2617 pid=2733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.327000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632633264326463653063376433366262373138373462623731346266 Jan 19 13:04:31.371000 audit: BPF prog-id=111 op=LOAD Jan 19 13:04:31.372000 audit: BPF prog-id=112 op=LOAD Jan 19 13:04:31.372000 audit[2748]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2618 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.372000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646434373662653339366661316362316135383535353635646265 Jan 19 13:04:31.373000 audit: BPF prog-id=112 op=UNLOAD Jan 19 13:04:31.373000 audit[2748]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2618 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646434373662653339366661316362316135383535353635646265 Jan 19 13:04:31.374000 audit: BPF prog-id=113 op=LOAD Jan 19 13:04:31.374000 audit[2748]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2618 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646434373662653339366661316362316135383535353635646265 Jan 19 13:04:31.375000 audit: BPF prog-id=114 op=LOAD Jan 19 13:04:31.375000 audit[2748]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2618 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646434373662653339366661316362316135383535353635646265 Jan 19 13:04:31.379000 audit: BPF prog-id=114 op=UNLOAD Jan 19 13:04:31.379000 audit[2748]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2618 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.379000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646434373662653339366661316362316135383535353635646265 Jan 19 13:04:31.380000 audit: BPF prog-id=113 op=UNLOAD Jan 19 13:04:31.380000 audit[2748]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2618 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646434373662653339366661316362316135383535353635646265 Jan 19 13:04:31.383657 kubelet[2559]: W0119 13:04:31.383482 2559 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.243.74.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.243.74.46:6443: connect: connection refused Jan 19 13:04:31.383884 kubelet[2559]: E0119 13:04:31.383832 2559 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.243.74.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.243.74.46:6443: connect: connection refused" logger="UnhandledError" Jan 19 13:04:31.382000 audit: BPF prog-id=115 op=LOAD Jan 19 13:04:31.382000 audit[2748]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2618 pid=2748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:31.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533646434373662653339366661316362316135383535353635646265 Jan 19 13:04:31.421984 containerd[1646]: time="2026-01-19T13:04:31.421864435Z" level=info msg="StartContainer for \"dfaca02a0f1b5c498e62152adfcd75fc7837612ad75655392be1c4e4e8ed6f7c\" returns successfully" Jan 19 13:04:31.430532 containerd[1646]: time="2026-01-19T13:04:31.430468134Z" level=info msg="StartContainer for \"f2c2d2dce0c7d36bb71874bb714bfe1c079dd9880444efb8984db3ce16c51900\" returns successfully" Jan 19 13:04:31.465095 containerd[1646]: time="2026-01-19T13:04:31.465044553Z" level=info msg="StartContainer for \"53dd476be396fa1cb1a5855565dbe2cdc24e217697f753518c7af89e10fe8d1c\" returns successfully" Jan 19 13:04:31.542894 kubelet[2559]: W0119 13:04:31.541915 2559 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.243.74.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.243.74.46:6443: connect: connection refused Jan 19 13:04:31.542894 kubelet[2559]: E0119 13:04:31.542001 2559 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.243.74.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.243.74.46:6443: connect: connection refused" logger="UnhandledError" Jan 19 13:04:31.593643 kubelet[2559]: E0119 13:04:31.593565 2559 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.243.74.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hsmf0.gb1.brightbox.com?timeout=10s\": dial tcp 10.243.74.46:6443: connect: connection refused" interval="1.6s" Jan 19 13:04:31.687341 kubelet[2559]: W0119 13:04:31.687112 2559 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.243.74.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.243.74.46:6443: connect: connection refused Jan 19 13:04:31.687341 kubelet[2559]: E0119 13:04:31.687174 2559 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.243.74.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.243.74.46:6443: connect: connection refused" logger="UnhandledError" Jan 19 13:04:31.823054 kubelet[2559]: I0119 13:04:31.822408 2559 kubelet_node_status.go:75] "Attempting to register node" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:31.824896 kubelet[2559]: E0119 13:04:31.824855 2559 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.243.74.46:6443/api/v1/nodes\": dial tcp 10.243.74.46:6443: connect: connection refused" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:32.322545 kubelet[2559]: E0119 13:04:32.322372 2559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hsmf0.gb1.brightbox.com\" not found" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:32.329837 kubelet[2559]: E0119 13:04:32.326809 2559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hsmf0.gb1.brightbox.com\" not found" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:32.335568 kubelet[2559]: E0119 13:04:32.335528 2559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hsmf0.gb1.brightbox.com\" not found" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:33.338211 kubelet[2559]: E0119 13:04:33.338168 2559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hsmf0.gb1.brightbox.com\" not found" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:33.338802 kubelet[2559]: E0119 13:04:33.338758 2559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hsmf0.gb1.brightbox.com\" not found" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:33.339696 kubelet[2559]: E0119 13:04:33.339653 2559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hsmf0.gb1.brightbox.com\" not found" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:33.430270 kubelet[2559]: I0119 13:04:33.430235 2559 kubelet_node_status.go:75] "Attempting to register node" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:34.347352 kubelet[2559]: E0119 13:04:34.347286 2559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hsmf0.gb1.brightbox.com\" not found" 
node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:34.349099 kubelet[2559]: E0119 13:04:34.348324 2559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hsmf0.gb1.brightbox.com\" not found" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:34.349339 kubelet[2559]: E0119 13:04:34.349312 2559 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-hsmf0.gb1.brightbox.com\" not found" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:34.512732 kubelet[2559]: E0119 13:04:34.512673 2559 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-hsmf0.gb1.brightbox.com\" not found" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:34.540862 kubelet[2559]: E0119 13:04:34.540702 2559 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-hsmf0.gb1.brightbox.com.188c23938621a6ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-hsmf0.gb1.brightbox.com,UID:srv-hsmf0.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-hsmf0.gb1.brightbox.com,},FirstTimestamp:2026-01-19 13:04:30.167279342 +0000 UTC m=+0.836653558,LastTimestamp:2026-01-19 13:04:30.167279342 +0000 UTC m=+0.836653558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-hsmf0.gb1.brightbox.com,}" Jan 19 13:04:34.594837 kubelet[2559]: I0119 13:04:34.594432 2559 kubelet_node_status.go:78] "Successfully registered node" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:34.594837 kubelet[2559]: E0119 13:04:34.594485 2559 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-hsmf0.gb1.brightbox.com\": node \"srv-hsmf0.gb1.brightbox.com\" not found" Jan 19 13:04:34.597645 kubelet[2559]: E0119 13:04:34.597166 2559 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-hsmf0.gb1.brightbox.com.188c239387aeab2f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-hsmf0.gb1.brightbox.com,UID:srv-hsmf0.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:srv-hsmf0.gb1.brightbox.com,},FirstTimestamp:2026-01-19 13:04:30.193298223 +0000 UTC m=+0.862672439,LastTimestamp:2026-01-19 13:04:30.193298223 +0000 UTC m=+0.862672439,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-hsmf0.gb1.brightbox.com,}" Jan 19 13:04:34.662417 kubelet[2559]: E0119 13:04:34.662241 2559 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-hsmf0.gb1.brightbox.com.188c239389c10297 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-hsmf0.gb1.brightbox.com,UID:srv-hsmf0.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node srv-hsmf0.gb1.brightbox.com status is now: 
NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:srv-hsmf0.gb1.brightbox.com,},FirstTimestamp:2026-01-19 13:04:30.228054679 +0000 UTC m=+0.897428894,LastTimestamp:2026-01-19 13:04:30.228054679 +0000 UTC m=+0.897428894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-hsmf0.gb1.brightbox.com,}" Jan 19 13:04:34.691591 kubelet[2559]: I0119 13:04:34.691519 2559 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:34.719299 kubelet[2559]: E0119 13:04:34.719230 2559 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-hsmf0.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:34.719299 kubelet[2559]: I0119 13:04:34.719297 2559 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:34.721445 kubelet[2559]: E0119 13:04:34.721413 2559 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-hsmf0.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:34.721548 kubelet[2559]: I0119 13:04:34.721448 2559 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:34.723537 kubelet[2559]: E0119 13:04:34.723511 2559 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-hsmf0.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:35.163984 kubelet[2559]: I0119 13:04:35.163515 2559 apiserver.go:52] "Watching apiserver" Jan 19 13:04:35.191983 kubelet[2559]: I0119 13:04:35.191949 2559 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 19 13:04:35.347011 kubelet[2559]: I0119 13:04:35.346918 2559 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:35.356685 kubelet[2559]: W0119 13:04:35.356346 2559 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 19 13:04:36.840476 systemd[1]: Reload requested from client PID 2827 ('systemctl') (unit session-10.scope)... Jan 19 13:04:36.840505 systemd[1]: Reloading... Jan 19 13:04:36.999949 zram_generator::config[2877]: No configuration found. Jan 19 13:04:37.371749 systemd[1]: Reloading finished in 530 ms. Jan 19 13:04:37.431338 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 13:04:37.449451 systemd[1]: kubelet.service: Deactivated successfully. Jan 19 13:04:37.450082 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 13:04:37.450232 systemd[1]: kubelet.service: Consumed 1.447s CPU time, 129.7M memory peak. Jan 19 13:04:37.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:04:37.457650 kernel: kauditd_printk_skb: 205 callbacks suppressed Jan 19 13:04:37.457987 kernel: audit: type=1131 audit(1768827877.449:398): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:37.467289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 19 13:04:37.467000 audit: BPF prog-id=116 op=LOAD Jan 19 13:04:37.469893 kernel: audit: type=1334 audit(1768827877.467:399): prog-id=116 op=LOAD Jan 19 13:04:37.467000 audit: BPF prog-id=75 op=UNLOAD Jan 19 13:04:37.472841 kernel: audit: type=1334 audit(1768827877.467:400): prog-id=75 op=UNLOAD Jan 19 13:04:37.467000 audit: BPF prog-id=117 op=LOAD Jan 19 13:04:37.474849 kernel: audit: type=1334 audit(1768827877.467:401): prog-id=117 op=LOAD Jan 19 13:04:37.467000 audit: BPF prog-id=118 op=LOAD Jan 19 13:04:37.477854 kernel: audit: type=1334 audit(1768827877.467:402): prog-id=118 op=LOAD Jan 19 13:04:37.467000 audit: BPF prog-id=76 op=UNLOAD Jan 19 13:04:37.480839 kernel: audit: type=1334 audit(1768827877.467:403): prog-id=76 op=UNLOAD Jan 19 13:04:37.485278 kernel: audit: type=1334 audit(1768827877.467:404): prog-id=77 op=UNLOAD Jan 19 13:04:37.485339 kernel: audit: type=1334 audit(1768827877.470:405): prog-id=119 op=LOAD Jan 19 13:04:37.467000 audit: BPF prog-id=77 op=UNLOAD Jan 19 13:04:37.470000 audit: BPF prog-id=119 op=LOAD Jan 19 13:04:37.470000 audit: BPF prog-id=83 op=UNLOAD Jan 19 13:04:37.488842 kernel: audit: type=1334 audit(1768827877.470:406): prog-id=83 op=UNLOAD Jan 19 13:04:37.470000 audit: BPF prog-id=120 op=LOAD Jan 19 13:04:37.471000 audit: BPF prog-id=121 op=LOAD Jan 19 13:04:37.471000 audit: BPF prog-id=84 op=UNLOAD Jan 19 13:04:37.490857 kernel: audit: type=1334 audit(1768827877.470:407): prog-id=120 op=LOAD Jan 19 13:04:37.471000 audit: BPF prog-id=85 op=UNLOAD Jan 19 13:04:37.474000 audit: BPF prog-id=122 op=LOAD Jan 19 13:04:37.474000 audit: BPF prog-id=80 op=UNLOAD Jan 19 13:04:37.474000 audit: BPF prog-id=123 op=LOAD Jan 19 13:04:37.474000 audit: BPF prog-id=124 op=LOAD Jan 19 13:04:37.474000 audit: BPF prog-id=81 op=UNLOAD Jan 19 13:04:37.474000 audit: BPF prog-id=82 op=UNLOAD Jan 19 13:04:37.475000 audit: BPF prog-id=125 op=LOAD Jan 19 13:04:37.475000 audit: BPF prog-id=74 op=UNLOAD Jan 19 13:04:37.477000 audit: BPF prog-id=126 op=LOAD Jan 19 13:04:37.477000 audit: BPF prog-id=79 op=UNLOAD Jan 19 13:04:37.478000 audit: BPF prog-id=127 op=LOAD Jan 19 13:04:37.478000 audit: BPF prog-id=70 op=UNLOAD Jan 19 13:04:37.478000 audit: BPF prog-id=128 op=LOAD Jan 19 13:04:37.478000 audit: BPF prog-id=129 op=LOAD Jan 19 13:04:37.478000 audit: BPF prog-id=71 op=UNLOAD Jan 19 13:04:37.478000 audit: BPF prog-id=72 op=UNLOAD Jan 19 13:04:37.480000 audit: BPF prog-id=130 op=LOAD Jan 19 13:04:37.480000 audit: BPF prog-id=131 op=LOAD Jan 19 13:04:37.480000 audit: BPF prog-id=68 op=UNLOAD Jan 19 13:04:37.480000 audit: BPF prog-id=69 op=UNLOAD Jan 19 13:04:37.482000 audit: BPF prog-id=132 op=LOAD Jan 19 13:04:37.482000 audit: BPF prog-id=78 op=UNLOAD Jan 19 13:04:37.484000 audit: BPF prog-id=133 op=LOAD Jan 19 13:04:37.484000 audit: BPF prog-id=65 op=UNLOAD Jan 19 13:04:37.484000 audit: BPF prog-id=134 op=LOAD Jan 19 13:04:37.484000 audit: BPF prog-id=135 op=LOAD Jan 19 13:04:37.484000 audit: BPF prog-id=66 op=UNLOAD Jan 19 13:04:37.484000 audit: BPF prog-id=67 op=UNLOAD Jan 19 13:04:37.486000 audit: BPF prog-id=136 op=LOAD Jan 19 
13:04:37.486000 audit: BPF prog-id=73 op=UNLOAD Jan 19 13:04:37.804924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 19 13:04:37.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:37.819092 (kubelet)[2939]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 19 13:04:37.920793 kubelet[2939]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 19 13:04:37.920793 kubelet[2939]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 19 13:04:37.920793 kubelet[2939]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 19 13:04:37.922833 kubelet[2939]: I0119 13:04:37.921796 2939 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 19 13:04:37.938732 kubelet[2939]: I0119 13:04:37.938026 2939 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 19 13:04:37.938732 kubelet[2939]: I0119 13:04:37.938064 2939 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 19 13:04:37.938732 kubelet[2939]: I0119 13:04:37.938546 2939 server.go:954] "Client rotation is on, will bootstrap in background" Jan 19 13:04:37.945143 kubelet[2939]: I0119 13:04:37.945055 2939 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 19 13:04:37.956983 kubelet[2939]: I0119 13:04:37.955582 2939 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 19 13:04:37.977848 kubelet[2939]: I0119 13:04:37.976963 2939 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 19 13:04:37.983726 kubelet[2939]: I0119 13:04:37.983608 2939 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 19 13:04:37.986788 kubelet[2939]: I0119 13:04:37.985876 2939 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 19 13:04:37.986788 kubelet[2939]: I0119 13:04:37.985929 2939 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-hsmf0.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 19 13:04:37.986788 kubelet[2939]: I0119 13:04:37.986178 2939 topology_manager.go:138] "Creating topology manager with none policy" Jan 19 13:04:37.986788 kubelet[2939]: I0119 13:04:37.986194 2939 container_manager_linux.go:304] "Creating device plugin manager" Jan 19 13:04:37.987094 kubelet[2939]: I0119 13:04:37.986290 2939 state_mem.go:36] "Initialized new in-memory state store" Jan 19 13:04:37.990863 kubelet[2939]: I0119 13:04:37.989051 2939 kubelet.go:446] "Attempting to sync node with API server" Jan 19 13:04:37.990863 kubelet[2939]: I0119 13:04:37.989118 2939 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 19 13:04:37.990863 kubelet[2939]: I0119 13:04:37.989160 2939 kubelet.go:352] "Adding apiserver pod source" Jan 19 13:04:37.990863 kubelet[2939]: I0119 13:04:37.989205 2939 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 19 13:04:37.992705 kubelet[2939]: I0119 13:04:37.991994 2939 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 19 13:04:37.996596 kubelet[2939]: I0119 13:04:37.996570 2939 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 19 13:04:38.020878 kubelet[2939]: I0119 13:04:38.020838 2939 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 19 13:04:38.021231 kubelet[2939]: I0119 13:04:38.021209 2939 server.go:1287] "Started kubelet" Jan 19 13:04:38.025846 kubelet[2939]: I0119 13:04:38.025773 2939 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 19 13:04:38.028975 kubelet[2939]: I0119 13:04:38.028951 2939 
server.go:479] "Adding debug handlers to kubelet server" Jan 19 13:04:38.031025 kubelet[2939]: I0119 13:04:38.030935 2939 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 19 13:04:38.031762 kubelet[2939]: I0119 13:04:38.031503 2939 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 19 13:04:38.033214 kubelet[2939]: I0119 13:04:38.033192 2939 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 19 13:04:38.036167 kubelet[2939]: I0119 13:04:38.035996 2939 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 19 13:04:38.044440 kubelet[2939]: I0119 13:04:38.043695 2939 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 19 13:04:38.044440 kubelet[2939]: I0119 13:04:38.043804 2939 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 19 13:04:38.045200 kubelet[2939]: I0119 13:04:38.044703 2939 reconciler.go:26] "Reconciler: start to sync state" Jan 19 13:04:38.046921 kubelet[2939]: I0119 13:04:38.046875 2939 factory.go:221] Registration of the systemd container factory successfully Jan 19 13:04:38.047097 kubelet[2939]: I0119 13:04:38.047062 2939 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 19 13:04:38.050967 kubelet[2939]: E0119 13:04:38.050457 2939 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 19 13:04:38.053459 kubelet[2939]: I0119 13:04:38.053273 2939 factory.go:221] Registration of the containerd container factory successfully Jan 19 13:04:38.086862 kubelet[2939]: I0119 13:04:38.085264 2939 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 19 13:04:38.091107 kubelet[2939]: I0119 13:04:38.090916 2939 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 19 13:04:38.091107 kubelet[2939]: I0119 13:04:38.090992 2939 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 19 13:04:38.091107 kubelet[2939]: I0119 13:04:38.091049 2939 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 19 13:04:38.091107 kubelet[2939]: I0119 13:04:38.091072 2939 kubelet.go:2382] "Starting kubelet main sync loop" Jan 19 13:04:38.091326 kubelet[2939]: E0119 13:04:38.091207 2939 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 19 13:04:38.164277 kubelet[2939]: I0119 13:04:38.164228 2939 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 19 13:04:38.164277 kubelet[2939]: I0119 13:04:38.164258 2939 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 19 13:04:38.164277 kubelet[2939]: I0119 13:04:38.164283 2939 state_mem.go:36] "Initialized new in-memory state store" Jan 19 13:04:38.164548 kubelet[2939]: I0119 13:04:38.164494 2939 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 19 13:04:38.164548 kubelet[2939]: I0119 13:04:38.164512 2939 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 19 13:04:38.164548 kubelet[2939]: I0119 13:04:38.164541 2939 policy_none.go:49] "None policy: Start" Jan 19 13:04:38.164686 kubelet[2939]: I0119 13:04:38.164563 2939 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 19 13:04:38.164686 kubelet[2939]: I0119 13:04:38.164587 2939 state_mem.go:35] "Initializing new in-memory state store" Jan 19 13:04:38.164761 kubelet[2939]: I0119 13:04:38.164744 2939 state_mem.go:75] "Updated machine memory state" Jan 19 13:04:38.172987 kubelet[2939]: I0119 13:04:38.172957 2939 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 19 13:04:38.175609 kubelet[2939]: I0119 13:04:38.174619 2939 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 19 13:04:38.175609 kubelet[2939]: I0119 13:04:38.174660 2939 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 19 13:04:38.179656 kubelet[2939]: I0119 13:04:38.177863 2939 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 19 13:04:38.190984 kubelet[2939]: E0119 13:04:38.190000 2939 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 19 13:04:38.199531 kubelet[2939]: I0119 13:04:38.198592 2939 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:38.199531 kubelet[2939]: I0119 13:04:38.199080 2939 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:38.199531 kubelet[2939]: I0119 13:04:38.199337 2939 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:38.227056 kubelet[2939]: W0119 13:04:38.226722 2939 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 19 13:04:38.227056 kubelet[2939]: W0119 13:04:38.227004 2939 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 19 13:04:38.227293 kubelet[2939]: E0119 13:04:38.227081 2939 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-hsmf0.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:38.228794 kubelet[2939]: W0119 13:04:38.228752 2939 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 19 13:04:38.245203 kubelet[2939]: I0119 13:04:38.245159 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f368ea403e060ce797658771165efe9-k8s-certs\") pod \"kube-controller-manager-srv-hsmf0.gb1.brightbox.com\" (UID: \"3f368ea403e060ce797658771165efe9\") " pod="kube-system/kube-controller-manager-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:38.245636 kubelet[2939]: I0119 13:04:38.245391 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07293d87a954ad5874790c4ea094ae13-kubeconfig\") pod \"kube-scheduler-srv-hsmf0.gb1.brightbox.com\" (UID: \"07293d87a954ad5874790c4ea094ae13\") " pod="kube-system/kube-scheduler-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:38.245636 kubelet[2939]: I0119 13:04:38.245484 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6bd5be42d6aec9c04dbc44183a216064-ca-certs\") pod \"kube-apiserver-srv-hsmf0.gb1.brightbox.com\" (UID: \"6bd5be42d6aec9c04dbc44183a216064\") " pod="kube-system/kube-apiserver-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:38.245636 kubelet[2939]: I0119 13:04:38.245528 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6bd5be42d6aec9c04dbc44183a216064-usr-share-ca-certificates\") pod \"kube-apiserver-srv-hsmf0.gb1.brightbox.com\" (UID: \"6bd5be42d6aec9c04dbc44183a216064\") " pod="kube-system/kube-apiserver-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:38.245636 kubelet[2939]: I0119 13:04:38.245557 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f368ea403e060ce797658771165efe9-ca-certs\") pod 
\"kube-controller-manager-srv-hsmf0.gb1.brightbox.com\" (UID: \"3f368ea403e060ce797658771165efe9\") " pod="kube-system/kube-controller-manager-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:38.246090 kubelet[2939]: I0119 13:04:38.245596 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f368ea403e060ce797658771165efe9-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-hsmf0.gb1.brightbox.com\" (UID: \"3f368ea403e060ce797658771165efe9\") " pod="kube-system/kube-controller-manager-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:38.246090 kubelet[2939]: I0119 13:04:38.245913 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6bd5be42d6aec9c04dbc44183a216064-k8s-certs\") pod \"kube-apiserver-srv-hsmf0.gb1.brightbox.com\" (UID: \"6bd5be42d6aec9c04dbc44183a216064\") " pod="kube-system/kube-apiserver-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:38.246090 kubelet[2939]: I0119 13:04:38.245946 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f368ea403e060ce797658771165efe9-flexvolume-dir\") pod \"kube-controller-manager-srv-hsmf0.gb1.brightbox.com\" (UID: \"3f368ea403e060ce797658771165efe9\") " pod="kube-system/kube-controller-manager-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:38.246090 kubelet[2939]: I0119 13:04:38.245985 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f368ea403e060ce797658771165efe9-kubeconfig\") pod \"kube-controller-manager-srv-hsmf0.gb1.brightbox.com\" (UID: \"3f368ea403e060ce797658771165efe9\") " pod="kube-system/kube-controller-manager-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:38.301852 kubelet[2939]: I0119 13:04:38.301485 2939 kubelet_node_status.go:75] "Attempting to register node" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:38.319703 kubelet[2939]: I0119 13:04:38.319623 2939 kubelet_node_status.go:124] "Node was previously registered" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:38.320169 kubelet[2939]: I0119 13:04:38.319739 2939 kubelet_node_status.go:78] "Successfully registered node" node="srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:38.992323 kubelet[2939]: I0119 13:04:38.992174 2939 apiserver.go:52] "Watching apiserver" Jan 19 13:04:39.044551 kubelet[2939]: I0119 13:04:39.044496 2939 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 19 13:04:39.129973 kubelet[2939]: I0119 13:04:39.129936 2939 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:39.138196 kubelet[2939]: W0119 13:04:39.138143 2939 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 19 13:04:39.138364 kubelet[2939]: E0119 13:04:39.138228 2939 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-hsmf0.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-hsmf0.gb1.brightbox.com" Jan 19 13:04:39.141512 kubelet[2939]: I0119 13:04:39.141380 2939 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-hsmf0.gb1.brightbox.com" 
podStartSLOduration=1.141352559 podStartE2EDuration="1.141352559s" podCreationTimestamp="2026-01-19 13:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-19 13:04:39.140867378 +0000 UTC m=+1.293761999" watchObservedRunningTime="2026-01-19 13:04:39.141352559 +0000 UTC m=+1.294247184" Jan 19 13:04:39.175365 kubelet[2939]: I0119 13:04:39.175078 2939 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-hsmf0.gb1.brightbox.com" podStartSLOduration=1.1750594890000001 podStartE2EDuration="1.175059489s" podCreationTimestamp="2026-01-19 13:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-19 13:04:39.155500832 +0000 UTC m=+1.308395460" watchObservedRunningTime="2026-01-19 13:04:39.175059489 +0000 UTC m=+1.327954095" Jan 19 13:04:39.175365 kubelet[2939]: I0119 13:04:39.175244 2939 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-hsmf0.gb1.brightbox.com" podStartSLOduration=4.175235163 podStartE2EDuration="4.175235163s" podCreationTimestamp="2026-01-19 13:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-19 13:04:39.171715892 +0000 UTC m=+1.324610518" watchObservedRunningTime="2026-01-19 13:04:39.175235163 +0000 UTC m=+1.328129781" Jan 19 13:04:43.266885 kubelet[2939]: I0119 13:04:43.266749 2939 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 19 13:04:43.269966 kubelet[2939]: I0119 13:04:43.268985 2939 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 19 13:04:43.270057 containerd[1646]: time="2026-01-19T13:04:43.268426337Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 19 13:04:44.211016 systemd[1]: Created slice kubepods-besteffort-poddda9f07d_c66d_436e_a080_2e997c0eaf23.slice - libcontainer container kubepods-besteffort-poddda9f07d_c66d_436e_a080_2e997c0eaf23.slice. 
Jan 19 13:04:44.289296 kubelet[2939]: I0119 13:04:44.289114 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dda9f07d-c66d-436e-a080-2e997c0eaf23-kube-proxy\") pod \"kube-proxy-759jb\" (UID: \"dda9f07d-c66d-436e-a080-2e997c0eaf23\") " pod="kube-system/kube-proxy-759jb" Jan 19 13:04:44.289296 kubelet[2939]: I0119 13:04:44.289181 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dda9f07d-c66d-436e-a080-2e997c0eaf23-lib-modules\") pod \"kube-proxy-759jb\" (UID: \"dda9f07d-c66d-436e-a080-2e997c0eaf23\") " pod="kube-system/kube-proxy-759jb" Jan 19 13:04:44.289296 kubelet[2939]: I0119 13:04:44.289213 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfmb9\" (UniqueName: \"kubernetes.io/projected/dda9f07d-c66d-436e-a080-2e997c0eaf23-kube-api-access-sfmb9\") pod \"kube-proxy-759jb\" (UID: \"dda9f07d-c66d-436e-a080-2e997c0eaf23\") " pod="kube-system/kube-proxy-759jb" Jan 19 13:04:44.289296 kubelet[2939]: I0119 13:04:44.289248 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dda9f07d-c66d-436e-a080-2e997c0eaf23-xtables-lock\") pod \"kube-proxy-759jb\" (UID: \"dda9f07d-c66d-436e-a080-2e997c0eaf23\") " pod="kube-system/kube-proxy-759jb" Jan 19 13:04:44.406619 systemd[1]: Created slice kubepods-besteffort-pod182ae8ff_a35c_4e4f_8302_2d83f537657b.slice - libcontainer container kubepods-besteffort-pod182ae8ff_a35c_4e4f_8302_2d83f537657b.slice. Jan 19 13:04:44.492288 kubelet[2939]: I0119 13:04:44.492043 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/182ae8ff-a35c-4e4f-8302-2d83f537657b-var-lib-calico\") pod \"tigera-operator-7dcd859c48-bhrhz\" (UID: \"182ae8ff-a35c-4e4f-8302-2d83f537657b\") " pod="tigera-operator/tigera-operator-7dcd859c48-bhrhz" Jan 19 13:04:44.492845 kubelet[2939]: I0119 13:04:44.492788 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7lkn\" (UniqueName: \"kubernetes.io/projected/182ae8ff-a35c-4e4f-8302-2d83f537657b-kube-api-access-n7lkn\") pod \"tigera-operator-7dcd859c48-bhrhz\" (UID: \"182ae8ff-a35c-4e4f-8302-2d83f537657b\") " pod="tigera-operator/tigera-operator-7dcd859c48-bhrhz" Jan 19 13:04:44.523093 containerd[1646]: time="2026-01-19T13:04:44.522899918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-759jb,Uid:dda9f07d-c66d-436e-a080-2e997c0eaf23,Namespace:kube-system,Attempt:0,}" Jan 19 13:04:44.562187 containerd[1646]: time="2026-01-19T13:04:44.561929331Z" level=info msg="connecting to shim 5a21cb6b5fcd947786a52d3812f2c3b3a275ad5cfe93cc2c49d3c42a1abfdfeb" address="unix:///run/containerd/s/ede622eab1621b5f988211cb34ac0c31f335ac58bcdfcff1b9004393d074ce7d" namespace=k8s.io protocol=ttrpc version=3 Jan 19 13:04:44.612998 systemd[1]: Started cri-containerd-5a21cb6b5fcd947786a52d3812f2c3b3a275ad5cfe93cc2c49d3c42a1abfdfeb.scope - libcontainer container 5a21cb6b5fcd947786a52d3812f2c3b3a275ad5cfe93cc2c49d3c42a1abfdfeb. 
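Note (editor's sketch, not part of the captured log): the audit PROCTITLE fields in the surrounding records carry the audited process's command line hex-encoded with NUL-separated arguments. A minimal Python sketch to decode one such value, assuming the standard audit encoding; the short hex prefix below is copied from a runc record in this capture, and the full strings decode the same way:

  # Decode an audit PROCTITLE payload: hex -> bytes, then split on NUL to recover argv.
  proctitle_hex = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"  # prefix copied from a record above
  argv = [part.decode() for part in bytes.fromhex(proctitle_hex).split(b"\x00")]
  print(argv)  # ['runc', '--root', '/run/containerd/runc/k8s.io']

The full records decode to the complete runc invocation (--root, --log, and the per-container task path under /run/containerd/io.containerd.runtime.v2.task/k8s.io/).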
Jan 19 13:04:44.644000 audit: BPF prog-id=137 op=LOAD Jan 19 13:04:44.650773 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 19 13:04:44.651144 kernel: audit: type=1334 audit(1768827884.644:442): prog-id=137 op=LOAD Jan 19 13:04:44.653000 audit: BPF prog-id=138 op=LOAD Jan 19 13:04:44.653000 audit[3009]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2998 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.658309 kernel: audit: type=1334 audit(1768827884.653:443): prog-id=138 op=LOAD Jan 19 13:04:44.658423 kernel: audit: type=1300 audit(1768827884.653:443): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2998 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561323163623662356663643934373738366135326433383132663263 Jan 19 13:04:44.663211 kernel: audit: type=1327 audit(1768827884.653:443): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561323163623662356663643934373738366135326433383132663263 Jan 19 13:04:44.653000 audit: BPF prog-id=138 op=UNLOAD Jan 19 13:04:44.666959 kernel: audit: type=1334 audit(1768827884.653:444): prog-id=138 op=UNLOAD Jan 19 13:04:44.653000 audit[3009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2998 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.669534 kernel: audit: type=1300 audit(1768827884.653:444): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2998 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561323163623662356663643934373738366135326433383132663263 Jan 19 13:04:44.677943 kernel: audit: type=1327 audit(1768827884.653:444): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561323163623662356663643934373738366135326433383132663263 Jan 19 13:04:44.678082 kernel: audit: type=1334 audit(1768827884.653:445): prog-id=139 op=LOAD Jan 19 13:04:44.653000 audit: BPF prog-id=139 op=LOAD Jan 19 13:04:44.653000 audit[3009]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2998 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.681083 kernel: audit: type=1300 audit(1768827884.653:445): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2998 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561323163623662356663643934373738366135326433383132663263 Jan 19 13:04:44.686291 kernel: audit: type=1327 audit(1768827884.653:445): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561323163623662356663643934373738366135326433383132663263 Jan 19 13:04:44.653000 audit: BPF prog-id=140 op=LOAD Jan 19 13:04:44.653000 audit[3009]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2998 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561323163623662356663643934373738366135326433383132663263 Jan 19 13:04:44.653000 audit: BPF prog-id=140 op=UNLOAD Jan 19 13:04:44.653000 audit[3009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2998 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561323163623662356663643934373738366135326433383132663263 Jan 19 13:04:44.653000 audit: BPF prog-id=139 op=UNLOAD Jan 19 13:04:44.653000 audit[3009]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2998 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561323163623662356663643934373738366135326433383132663263 Jan 19 13:04:44.653000 audit: BPF prog-id=141 op=LOAD Jan 19 13:04:44.653000 audit[3009]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2998 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.653000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3561323163623662356663643934373738366135326433383132663263 Jan 19 13:04:44.714027 containerd[1646]: time="2026-01-19T13:04:44.713745638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-bhrhz,Uid:182ae8ff-a35c-4e4f-8302-2d83f537657b,Namespace:tigera-operator,Attempt:0,}" Jan 19 13:04:44.718228 containerd[1646]: time="2026-01-19T13:04:44.718180113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-759jb,Uid:dda9f07d-c66d-436e-a080-2e997c0eaf23,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a21cb6b5fcd947786a52d3812f2c3b3a275ad5cfe93cc2c49d3c42a1abfdfeb\"" Jan 19 13:04:44.726869 containerd[1646]: time="2026-01-19T13:04:44.725367154Z" level=info msg="CreateContainer within sandbox \"5a21cb6b5fcd947786a52d3812f2c3b3a275ad5cfe93cc2c49d3c42a1abfdfeb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 19 13:04:44.774680 containerd[1646]: time="2026-01-19T13:04:44.774530217Z" level=info msg="Container c99f3435dbf809034fc2ec7a811b594ca5e2ff7226bb9f768df8f606f3507a87: CDI devices from CRI Config.CDIDevices: []" Jan 19 13:04:44.784608 containerd[1646]: time="2026-01-19T13:04:44.784544706Z" level=info msg="connecting to shim 269077e856d09e3e4e5a1dbe50bc5b5d31178c4844fef57dbb42d2af1c92f41c" address="unix:///run/containerd/s/3cce2e6028a822005fb18e79f4bc34fd38c2f052370ed4a39d256ddc2e736e31" namespace=k8s.io protocol=ttrpc version=3 Jan 19 13:04:44.787390 containerd[1646]: time="2026-01-19T13:04:44.787298674Z" level=info msg="CreateContainer within sandbox \"5a21cb6b5fcd947786a52d3812f2c3b3a275ad5cfe93cc2c49d3c42a1abfdfeb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c99f3435dbf809034fc2ec7a811b594ca5e2ff7226bb9f768df8f606f3507a87\"" Jan 19 13:04:44.788618 containerd[1646]: time="2026-01-19T13:04:44.788542496Z" level=info msg="StartContainer for \"c99f3435dbf809034fc2ec7a811b594ca5e2ff7226bb9f768df8f606f3507a87\"" Jan 19 13:04:44.794651 containerd[1646]: time="2026-01-19T13:04:44.794613443Z" level=info msg="connecting to shim c99f3435dbf809034fc2ec7a811b594ca5e2ff7226bb9f768df8f606f3507a87" address="unix:///run/containerd/s/ede622eab1621b5f988211cb34ac0c31f335ac58bcdfcff1b9004393d074ce7d" protocol=ttrpc version=3 Jan 19 13:04:44.838263 systemd[1]: Started cri-containerd-269077e856d09e3e4e5a1dbe50bc5b5d31178c4844fef57dbb42d2af1c92f41c.scope - libcontainer container 269077e856d09e3e4e5a1dbe50bc5b5d31178c4844fef57dbb42d2af1c92f41c. Jan 19 13:04:44.845580 systemd[1]: Started cri-containerd-c99f3435dbf809034fc2ec7a811b594ca5e2ff7226bb9f768df8f606f3507a87.scope - libcontainer container c99f3435dbf809034fc2ec7a811b594ca5e2ff7226bb9f768df8f606f3507a87. 
Jan 19 13:04:44.887000 audit: BPF prog-id=142 op=LOAD Jan 19 13:04:44.889000 audit: BPF prog-id=143 op=LOAD Jan 19 13:04:44.889000 audit[3059]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3047 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236393037376538353664303965336534653561316462653530626335 Jan 19 13:04:44.889000 audit: BPF prog-id=143 op=UNLOAD Jan 19 13:04:44.889000 audit[3059]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3047 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236393037376538353664303965336534653561316462653530626335 Jan 19 13:04:44.889000 audit: BPF prog-id=144 op=LOAD Jan 19 13:04:44.889000 audit[3059]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3047 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236393037376538353664303965336534653561316462653530626335 Jan 19 13:04:44.889000 audit: BPF prog-id=145 op=LOAD Jan 19 13:04:44.889000 audit[3059]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3047 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236393037376538353664303965336534653561316462653530626335 Jan 19 13:04:44.889000 audit: BPF prog-id=145 op=UNLOAD Jan 19 13:04:44.889000 audit[3059]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3047 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236393037376538353664303965336534653561316462653530626335 Jan 19 13:04:44.889000 audit: BPF prog-id=144 op=UNLOAD Jan 19 13:04:44.889000 audit[3059]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3047 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236393037376538353664303965336534653561316462653530626335 Jan 19 13:04:44.889000 audit: BPF prog-id=146 op=LOAD Jan 19 13:04:44.889000 audit[3059]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3047 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3236393037376538353664303965336534653561316462653530626335 Jan 19 13:04:44.937000 audit: BPF prog-id=147 op=LOAD Jan 19 13:04:44.937000 audit[3052]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=2998 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396633343335646266383039303334666332656337613831316235 Jan 19 13:04:44.938000 audit: BPF prog-id=148 op=LOAD Jan 19 13:04:44.938000 audit[3052]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=2998 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396633343335646266383039303334666332656337613831316235 Jan 19 13:04:44.938000 audit: BPF prog-id=148 op=UNLOAD Jan 19 13:04:44.938000 audit[3052]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2998 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396633343335646266383039303334666332656337613831316235 Jan 19 13:04:44.938000 audit: BPF prog-id=147 op=UNLOAD Jan 19 13:04:44.938000 audit[3052]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2998 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396633343335646266383039303334666332656337613831316235 Jan 19 13:04:44.938000 audit: BPF prog-id=149 op=LOAD Jan 19 13:04:44.938000 audit[3052]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=2998 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:44.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339396633343335646266383039303334666332656337613831316235 Jan 19 13:04:44.969616 containerd[1646]: time="2026-01-19T13:04:44.969564514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-bhrhz,Uid:182ae8ff-a35c-4e4f-8302-2d83f537657b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"269077e856d09e3e4e5a1dbe50bc5b5d31178c4844fef57dbb42d2af1c92f41c\"" Jan 19 13:04:44.974126 containerd[1646]: time="2026-01-19T13:04:44.974092646Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 19 13:04:45.007408 containerd[1646]: time="2026-01-19T13:04:45.007258260Z" level=info msg="StartContainer for \"c99f3435dbf809034fc2ec7a811b594ca5e2ff7226bb9f768df8f606f3507a87\" returns successfully" Jan 19 13:04:45.450000 audit[3148]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3148 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.450000 audit[3148]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffee3222d00 a2=0 a3=7ffee3222cec items=0 ppid=3085 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.450000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 19 13:04:45.452000 audit[3149]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.452000 audit[3149]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2a6bae70 a2=0 a3=7ffc2a6bae5c items=0 ppid=3085 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.452000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 19 13:04:45.454000 audit[3150]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.454000 audit[3150]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff1970b3e0 a2=0 a3=7fff1970b3cc items=0 ppid=3085 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.454000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 19 13:04:45.457000 audit[3151]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3151 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.457000 audit[3151]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc33993020 a2=0 a3=7ffc3399300c items=0 ppid=3085 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.457000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 19 13:04:45.459000 audit[3152]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.459000 audit[3152]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe11a552c0 a2=0 a3=7ffe11a552ac items=0 ppid=3085 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.459000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 19 13:04:45.460000 audit[3154]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3154 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.460000 audit[3154]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffec9073b70 a2=0 a3=7ffec9073b5c items=0 ppid=3085 pid=3154 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.460000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 19 13:04:45.567000 audit[3155]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.567000 audit[3155]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffeb9ac5680 a2=0 a3=7ffeb9ac566c items=0 ppid=3085 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.567000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 19 13:04:45.573000 audit[3157]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.573000 audit[3157]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffceb4a7260 a2=0 a3=7ffceb4a724c items=0 ppid=3085 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.573000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 19 13:04:45.579000 audit[3160]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.579000 audit[3160]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc699bd7a0 a2=0 a3=7ffc699bd78c items=0 ppid=3085 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.579000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 19 13:04:45.581000 audit[3161]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.581000 audit[3161]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff9b515c0 a2=0 a3=7ffff9b515ac items=0 ppid=3085 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.581000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 19 13:04:45.585000 audit[3163]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.585000 audit[3163]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe60933e70 a2=0 a3=7ffe60933e5c items=0 ppid=3085 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 19 13:04:45.587000 audit[3164]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.587000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3cd254f0 a2=0 a3=7ffe3cd254dc items=0 ppid=3085 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.587000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 19 13:04:45.592000 audit[3166]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3166 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.592000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffefd02990 a2=0 a3=7fffefd0297c 
items=0 ppid=3085 pid=3166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.592000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 19 13:04:45.598000 audit[3169]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3169 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.598000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffea53e59c0 a2=0 a3=7ffea53e59ac items=0 ppid=3085 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.598000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 19 13:04:45.600000 audit[3170]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.600000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa6a96010 a2=0 a3=7fffa6a95ffc items=0 ppid=3085 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.600000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 19 13:04:45.606000 audit[3172]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3172 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.606000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff5465d990 a2=0 a3=7fff5465d97c items=0 ppid=3085 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.606000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 19 13:04:45.608000 audit[3173]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.608000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb74c8410 a2=0 a3=7ffdb74c83fc items=0 ppid=3085 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.608000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 19 13:04:45.615000 audit[3175]: NETFILTER_CFG 
table=filter:71 family=2 entries=1 op=nft_register_rule pid=3175 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.615000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff99179bc0 a2=0 a3=7fff99179bac items=0 ppid=3085 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.615000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 19 13:04:45.622000 audit[3178]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3178 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.622000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc551ffbe0 a2=0 a3=7ffc551ffbcc items=0 ppid=3085 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.622000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 19 13:04:45.628000 audit[3181]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.628000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff8f203560 a2=0 a3=7fff8f20354c items=0 ppid=3085 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.628000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 19 13:04:45.630000 audit[3182]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.630000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd2a64a940 a2=0 a3=7ffd2a64a92c items=0 ppid=3085 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.630000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 19 13:04:45.635000 audit[3184]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.635000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe5b496be0 a2=0 a3=7ffe5b496bcc items=0 ppid=3085 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.635000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 19 13:04:45.641000 audit[3187]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.641000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffee3ceb240 a2=0 a3=7ffee3ceb22c items=0 ppid=3085 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.641000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 19 13:04:45.643000 audit[3188]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.643000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3973df40 a2=0 a3=7ffe3973df2c items=0 ppid=3085 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.643000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 19 13:04:45.647000 audit[3190]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3190 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 19 13:04:45.647000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe27ea6e60 a2=0 a3=7ffe27ea6e4c items=0 ppid=3085 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.647000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 19 13:04:45.680000 audit[3196]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3196 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:04:45.680000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdd5d29a20 a2=0 a3=7ffdd5d29a0c items=0 ppid=3085 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.680000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:04:45.689000 audit[3196]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3196 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:04:45.689000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 
a1=7ffdd5d29a20 a2=0 a3=7ffdd5d29a0c items=0 ppid=3085 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.689000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:04:45.692000 audit[3201]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.692000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe15066410 a2=0 a3=7ffe150663fc items=0 ppid=3085 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.692000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 19 13:04:45.697000 audit[3203]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3203 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.697000 audit[3203]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd0d3310b0 a2=0 a3=7ffd0d33109c items=0 ppid=3085 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.697000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 19 13:04:45.709000 audit[3206]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3206 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.709000 audit[3206]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff91ab1010 a2=0 a3=7fff91ab0ffc items=0 ppid=3085 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.709000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 19 13:04:45.714000 audit[3207]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3207 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.714000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeaee560a0 a2=0 a3=7ffeaee5608c items=0 ppid=3085 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.714000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 19 13:04:45.721000 audit[3209]: NETFILTER_CFG table=filter:85 family=10 entries=1 
op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.721000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf2779c70 a2=0 a3=7ffcf2779c5c items=0 ppid=3085 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.721000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 19 13:04:45.727000 audit[3210]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3210 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.727000 audit[3210]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcfd2571c0 a2=0 a3=7ffcfd2571ac items=0 ppid=3085 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.727000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 19 13:04:45.734000 audit[3212]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.734000 audit[3212]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcb3660d90 a2=0 a3=7ffcb3660d7c items=0 ppid=3085 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.734000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 19 13:04:45.741000 audit[3215]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3215 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.741000 audit[3215]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc3f0c1d00 a2=0 a3=7ffc3f0c1cec items=0 ppid=3085 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.741000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 19 13:04:45.743000 audit[3216]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3216 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.743000 audit[3216]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1fe41d60 a2=0 a3=7ffd1fe41d4c items=0 ppid=3085 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.743000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 19 13:04:45.749000 audit[3218]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.749000 audit[3218]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdf2f2cc70 a2=0 a3=7ffdf2f2cc5c items=0 ppid=3085 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.749000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 19 13:04:45.751000 audit[3219]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3219 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.751000 audit[3219]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce10e9180 a2=0 a3=7ffce10e916c items=0 ppid=3085 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.751000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 19 13:04:45.755000 audit[3221]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3221 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.755000 audit[3221]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff4e3a2be0 a2=0 a3=7fff4e3a2bcc items=0 ppid=3085 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.755000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 19 13:04:45.762000 audit[3224]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.762000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdb91378a0 a2=0 a3=7ffdb913788c items=0 ppid=3085 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.762000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 19 13:04:45.769000 audit[3227]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.769000 audit[3227]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffa0e4fb20 a2=0 a3=7fffa0e4fb0c items=0 ppid=3085 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.769000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 19 13:04:45.771000 audit[3228]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3228 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.771000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc4b4125c0 a2=0 a3=7ffc4b4125ac items=0 ppid=3085 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.771000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 19 13:04:45.775000 audit[3230]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.775000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffc2baeabe0 a2=0 a3=7ffc2baeabcc items=0 ppid=3085 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.775000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 19 13:04:45.782000 audit[3233]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.782000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe9394b370 a2=0 a3=7ffe9394b35c items=0 ppid=3085 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.782000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 19 13:04:45.784000 audit[3234]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3234 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.784000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8fb965f0 a2=0 a3=7ffc8fb965dc items=0 ppid=3085 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.784000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 19 13:04:45.788000 
audit[3236]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3236 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.788000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffece127470 a2=0 a3=7ffece12745c items=0 ppid=3085 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.788000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 19 13:04:45.790000 audit[3237]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3237 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.790000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe67fcde10 a2=0 a3=7ffe67fcddfc items=0 ppid=3085 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.790000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 19 13:04:45.794000 audit[3239]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3239 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.794000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdc1f802f0 a2=0 a3=7ffdc1f802dc items=0 ppid=3085 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.794000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 19 13:04:45.800000 audit[3242]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 19 13:04:45.800000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffebb8bd130 a2=0 a3=7ffebb8bd11c items=0 ppid=3085 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.800000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 19 13:04:45.808000 audit[3244]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 19 13:04:45.808000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe5f77f490 a2=0 a3=7ffe5f77f47c items=0 ppid=3085 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.808000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 
13:04:45.809000 audit[3244]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 19 13:04:45.809000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe5f77f490 a2=0 a3=7ffe5f77f47c items=0 ppid=3085 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:45.809000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:04:46.739948 kubelet[2939]: I0119 13:04:46.739594 2939 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-759jb" podStartSLOduration=2.73956963 podStartE2EDuration="2.73956963s" podCreationTimestamp="2026-01-19 13:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-19 13:04:45.167241751 +0000 UTC m=+7.320136374" watchObservedRunningTime="2026-01-19 13:04:46.73956963 +0000 UTC m=+8.892464241" Jan 19 13:04:48.363957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4031126217.mount: Deactivated successfully. Jan 19 13:04:49.746096 containerd[1646]: time="2026-01-19T13:04:49.746038387Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:49.747544 containerd[1646]: time="2026-01-19T13:04:49.747293157Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 19 13:04:49.748320 containerd[1646]: time="2026-01-19T13:04:49.748280435Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:49.750960 containerd[1646]: time="2026-01-19T13:04:49.750922533Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:04:49.752319 containerd[1646]: time="2026-01-19T13:04:49.752273745Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.77813934s" Jan 19 13:04:49.752455 containerd[1646]: time="2026-01-19T13:04:49.752428102Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 19 13:04:49.756994 containerd[1646]: time="2026-01-19T13:04:49.756963773Z" level=info msg="CreateContainer within sandbox \"269077e856d09e3e4e5a1dbe50bc5b5d31178c4844fef57dbb42d2af1c92f41c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 19 13:04:49.771555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1938106739.mount: Deactivated successfully. 
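
The two containerd entries above bracket the tigera-operator image pull: "PullImage" logged at 13:04:44.974092646Z and "Pulled image ... in 4.77813934s" at 13:04:49.752273745Z. A minimal Python sketch (a hypothetical cross-check, not part of the log; the timestamps are copied from those two entries) reproduces the reported duration to within a fraction of a millisecond:

    from datetime import datetime

    # Timestamps copied from the two containerd entries above.
    started  = "2026-01-19T13:04:44.974092646Z"   # "PullImage" entry
    finished = "2026-01-19T13:04:49.752273745Z"   # "Pulled image ... in 4.77813934s" entry

    def parse(ts):
        # Trim nanoseconds to microseconds so datetime.fromisoformat() accepts the value.
        head, frac = ts.rstrip("Z").split(".")
        return datetime.fromisoformat(head + "." + frac[:6] + "+00:00")

    elapsed = (parse(finished) - parse(started)).total_seconds()
    print(f"observed pull time: {elapsed:.6f}s")   # ~4.778181s vs. the reported 4.77813934s
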
Jan 19 13:04:49.773232 containerd[1646]: time="2026-01-19T13:04:49.772371670Z" level=info msg="Container bf38767086e7a47969a19d687bb0f269a485ca29157fe7aa6fa8c82adec1958a: CDI devices from CRI Config.CDIDevices: []" Jan 19 13:04:49.808439 containerd[1646]: time="2026-01-19T13:04:49.808386219Z" level=info msg="CreateContainer within sandbox \"269077e856d09e3e4e5a1dbe50bc5b5d31178c4844fef57dbb42d2af1c92f41c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bf38767086e7a47969a19d687bb0f269a485ca29157fe7aa6fa8c82adec1958a\"" Jan 19 13:04:49.809938 containerd[1646]: time="2026-01-19T13:04:49.809903045Z" level=info msg="StartContainer for \"bf38767086e7a47969a19d687bb0f269a485ca29157fe7aa6fa8c82adec1958a\"" Jan 19 13:04:49.812475 containerd[1646]: time="2026-01-19T13:04:49.812410115Z" level=info msg="connecting to shim bf38767086e7a47969a19d687bb0f269a485ca29157fe7aa6fa8c82adec1958a" address="unix:///run/containerd/s/3cce2e6028a822005fb18e79f4bc34fd38c2f052370ed4a39d256ddc2e736e31" protocol=ttrpc version=3 Jan 19 13:04:49.853061 systemd[1]: Started cri-containerd-bf38767086e7a47969a19d687bb0f269a485ca29157fe7aa6fa8c82adec1958a.scope - libcontainer container bf38767086e7a47969a19d687bb0f269a485ca29157fe7aa6fa8c82adec1958a. Jan 19 13:04:49.882871 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 19 13:04:49.883061 kernel: audit: type=1334 audit(1768827889.874:514): prog-id=150 op=LOAD Jan 19 13:04:49.874000 audit: BPF prog-id=150 op=LOAD Jan 19 13:04:49.884000 audit: BPF prog-id=151 op=LOAD Jan 19 13:04:49.884000 audit[3253]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3047 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:49.889371 kernel: audit: type=1334 audit(1768827889.884:515): prog-id=151 op=LOAD Jan 19 13:04:49.889443 kernel: audit: type=1300 audit(1768827889.884:515): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3047 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:49.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333837363730383665376134373936396131396436383762623066 Jan 19 13:04:49.894395 kernel: audit: type=1327 audit(1768827889.884:515): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333837363730383665376134373936396131396436383762623066 Jan 19 13:04:49.886000 audit: BPF prog-id=151 op=UNLOAD Jan 19 13:04:49.898082 kernel: audit: type=1334 audit(1768827889.886:516): prog-id=151 op=UNLOAD Jan 19 13:04:49.886000 audit[3253]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3047 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:49.900663 kernel: audit: type=1300 audit(1768827889.886:516): arch=c000003e syscall=3 success=yes exit=0 a0=15 
a1=0 a2=0 a3=0 items=0 ppid=3047 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:49.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333837363730383665376134373936396131396436383762623066 Jan 19 13:04:49.905518 kernel: audit: type=1327 audit(1768827889.886:516): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333837363730383665376134373936396131396436383762623066 Jan 19 13:04:49.886000 audit: BPF prog-id=152 op=LOAD Jan 19 13:04:49.909169 kernel: audit: type=1334 audit(1768827889.886:517): prog-id=152 op=LOAD Jan 19 13:04:49.886000 audit[3253]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3047 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:49.911919 kernel: audit: type=1300 audit(1768827889.886:517): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3047 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:49.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333837363730383665376134373936396131396436383762623066 Jan 19 13:04:49.917459 kernel: audit: type=1327 audit(1768827889.886:517): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333837363730383665376134373936396131396436383762623066 Jan 19 13:04:49.886000 audit: BPF prog-id=153 op=LOAD Jan 19 13:04:49.886000 audit[3253]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3047 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:49.886000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333837363730383665376134373936396131396436383762623066 Jan 19 13:04:49.888000 audit: BPF prog-id=153 op=UNLOAD Jan 19 13:04:49.888000 audit[3253]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3047 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:49.888000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333837363730383665376134373936396131396436383762623066 Jan 19 13:04:49.888000 audit: BPF prog-id=152 op=UNLOAD Jan 19 13:04:49.888000 audit[3253]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3047 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:49.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333837363730383665376134373936396131396436383762623066 Jan 19 13:04:49.888000 audit: BPF prog-id=154 op=LOAD Jan 19 13:04:49.888000 audit[3253]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3047 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:49.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266333837363730383665376134373936396131396436383762623066 Jan 19 13:04:49.950851 containerd[1646]: time="2026-01-19T13:04:49.950537924Z" level=info msg="StartContainer for \"bf38767086e7a47969a19d687bb0f269a485ca29157fe7aa6fa8c82adec1958a\" returns successfully" Jan 19 13:04:50.191938 kubelet[2939]: I0119 13:04:50.191467 2939 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-bhrhz" podStartSLOduration=1.409959456 podStartE2EDuration="6.191440011s" podCreationTimestamp="2026-01-19 13:04:44 +0000 UTC" firstStartedPulling="2026-01-19 13:04:44.971897529 +0000 UTC m=+7.124792134" lastFinishedPulling="2026-01-19 13:04:49.753378072 +0000 UTC m=+11.906272689" observedRunningTime="2026-01-19 13:04:50.190680398 +0000 UTC m=+12.343575024" watchObservedRunningTime="2026-01-19 13:04:50.191440011 +0000 UTC m=+12.344334620" Jan 19 13:04:52.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.243.74.46:22-188.166.92.220:38470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:52.988470 systemd[1]: Started sshd@9-10.243.74.46:22-188.166.92.220:38470.service - OpenSSH per-connection server daemon (188.166.92.220:38470). Jan 19 13:04:53.349312 sshd[3286]: Connection closed by authenticating user root 188.166.92.220 port 38470 [preauth] Jan 19 13:04:53.351000 audit[3286]: USER_ERR pid=3286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=188.166.92.220 addr=188.166.92.220 terminal=ssh res=failed' Jan 19 13:04:53.354484 systemd[1]: sshd@9-10.243.74.46:22-188.166.92.220:38470.service: Deactivated successfully. 
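
The audit PROCTITLE records throughout this section (the runc, iptables and ip6tables invocations above) store the process command line hex-encoded, with argv elements separated by NUL bytes; the kernel also truncates long values, which is why the runc container-ID arguments above end mid-string. A minimal Python decoding sketch (a hypothetical helper, not part of the log; the sample value is copied from one of the iptables-restore entries above):

    # Decode an audit PROCTITLE value: a hex string whose argv elements are NUL-separated.
    def decode_proctitle(hexstr):
        return bytes.fromhex(hexstr).decode("utf-8", errors="replace").split("\x00")

    sample = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
              "002D2D6E6F666C757368002D2D636F756E74657273")
    print(decode_proctitle(sample))
    # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']

Decoding the earlier NETFILTER_CFG entries the same way shows the Kubernetes KUBE-* chains (KUBE-PROXY-CANARY, KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD, KUBE-PROXY-FIREWALL, KUBE-POSTROUTING) being created in the mangle, nat and filter tables for both iptables and ip6tables.
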
Jan 19 13:04:53.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.243.74.46:22-188.166.92.220:38470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:57.599587 sudo[1938]: pam_unix(sudo:session): session closed for user root Jan 19 13:04:57.608349 kernel: kauditd_printk_skb: 15 callbacks suppressed Jan 19 13:04:57.608431 kernel: audit: type=1106 audit(1768827897.598:525): pid=1938 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 13:04:57.598000 audit[1938]: USER_END pid=1938 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 13:04:57.625839 kernel: audit: type=1104 audit(1768827897.618:526): pid=1938 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 13:04:57.618000 audit[1938]: CRED_DISP pid=1938 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 19 13:04:57.718849 sshd[1937]: Connection closed by 68.220.241.50 port 46368 Jan 19 13:04:57.719990 sshd-session[1933]: pam_unix(sshd:session): session closed for user core Jan 19 13:04:57.732847 kernel: audit: type=1106 audit(1768827897.725:527): pid=1933 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:04:57.725000 audit[1933]: USER_END pid=1933 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:04:57.725000 audit[1933]: CRED_DISP pid=1933 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:04:57.744085 kernel: audit: type=1104 audit(1768827897.725:528): pid=1933 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:04:57.745336 systemd[1]: sshd@7-10.243.74.46:22-68.220.241.50:46368.service: Deactivated successfully. Jan 19 13:04:57.751983 systemd[1]: session-10.scope: Deactivated successfully. Jan 19 13:04:57.752728 systemd[1]: session-10.scope: Consumed 6.846s CPU time, 156.3M memory peak. 
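
The NETFILTER_CFG and SYSCALL records around this point (including the periodic iptables-restore syncs that follow) are flat key=value lists, so they are straightforward to post-process. A minimal field-parser sketch (hypothetical, not part of the log; the sample is abridged from the table=filter:105 entry below):

    import re

    # Split an audit record body into a dict of key=value fields; quoted values
    # such as comm="iptables-restor" keep only their inner text. msg='...' blocks
    # from USER_* records are not handled by this simple sketch.
    FIELD = re.compile(r'(\w+)=("([^"]*)"|\S+)')

    def parse_audit_fields(record):
        return {m.group(1): m.group(3) if m.group(3) is not None else m.group(2)
                for m in FIELD.finditer(record)}

    sample = ('table=filter:105 family=2 entries=15 op=nft_register_rule '
              'pid=3341 comm="iptables-restor"')
    print(parse_audit_fields(sample))
    # {'table': 'filter:105', 'family': '2', 'entries': '15',
    #  'op': 'nft_register_rule', 'pid': '3341', 'comm': 'iptables-restor'}
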
Jan 19 13:04:57.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.243.74.46:22-68.220.241.50:46368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:57.759080 kernel: audit: type=1131 audit(1768827897.745:529): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.243.74.46:22-68.220.241.50:46368 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:04:57.761104 systemd-logind[1621]: Session 10 logged out. Waiting for processes to exit. Jan 19 13:04:57.767429 systemd-logind[1621]: Removed session 10. Jan 19 13:04:58.634848 kernel: audit: type=1325 audit(1768827898.622:530): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:04:58.622000 audit[3341]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:04:58.622000 audit[3341]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe578f4cf0 a2=0 a3=7ffe578f4cdc items=0 ppid=3085 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:58.647148 kernel: audit: type=1300 audit(1768827898.622:530): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe578f4cf0 a2=0 a3=7ffe578f4cdc items=0 ppid=3085 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:58.622000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:04:58.654835 kernel: audit: type=1327 audit(1768827898.622:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:04:58.658845 kernel: audit: type=1325 audit(1768827898.639:531): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:04:58.639000 audit[3341]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3341 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:04:58.639000 audit[3341]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe578f4cf0 a2=0 a3=0 items=0 ppid=3085 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:58.666838 kernel: audit: type=1300 audit(1768827898.639:531): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe578f4cf0 a2=0 a3=0 items=0 ppid=3085 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:58.639000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:04:58.675000 audit[3343]: NETFILTER_CFG table=filter:107 family=2 entries=16 
op=nft_register_rule pid=3343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:04:58.675000 audit[3343]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc52840710 a2=0 a3=7ffc528406fc items=0 ppid=3085 pid=3343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:58.675000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:04:58.680000 audit[3343]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3343 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:04:58.680000 audit[3343]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc52840710 a2=0 a3=0 items=0 ppid=3085 pid=3343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:04:58.680000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:01.896000 audit[3347]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3347 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:01.896000 audit[3347]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffcf738d5d0 a2=0 a3=7ffcf738d5bc items=0 ppid=3085 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:01.896000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:01.903000 audit[3347]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3347 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:01.903000 audit[3347]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcf738d5d0 a2=0 a3=0 items=0 ppid=3085 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:01.903000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:01.945000 audit[3349]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:01.945000 audit[3349]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffeab991550 a2=0 a3=7ffeab99153c items=0 ppid=3085 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:01.945000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:01.950000 audit[3349]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3349 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:01.950000 audit[3349]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeab991550 a2=0 a3=0 items=0 ppid=3085 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:01.950000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:02.964000 audit[3351]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:02.974788 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 19 13:05:02.974922 kernel: audit: type=1325 audit(1768827902.964:538): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:02.964000 audit[3351]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffcf14fe70 a2=0 a3=7fffcf14fe5c items=0 ppid=3085 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:02.987187 kernel: audit: type=1300 audit(1768827902.964:538): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffcf14fe70 a2=0 a3=7fffcf14fe5c items=0 ppid=3085 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:02.964000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:02.992844 kernel: audit: type=1327 audit(1768827902.964:538): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:03.000000 audit[3351]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:03.000000 audit[3351]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffcf14fe70 a2=0 a3=0 items=0 ppid=3085 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:03.006227 kernel: audit: type=1325 audit(1768827903.000:539): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3351 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:03.006305 kernel: audit: type=1300 audit(1768827903.000:539): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffcf14fe70 a2=0 a3=0 items=0 ppid=3085 pid=3351 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:03.000000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:03.013869 kernel: audit: type=1327 audit(1768827903.000:539): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:03.943382 systemd[1]: Created slice 
kubepods-besteffort-pod741b8339_8e95_4f63_8b40_fa832cf29730.slice - libcontainer container kubepods-besteffort-pod741b8339_8e95_4f63_8b40_fa832cf29730.slice. Jan 19 13:05:04.004000 audit[3353]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:04.013711 kernel: audit: type=1325 audit(1768827904.004:540): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:04.014298 kernel: audit: type=1300 audit(1768827904.004:540): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd5bdb5610 a2=0 a3=7ffd5bdb55fc items=0 ppid=3085 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:04.004000 audit[3353]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd5bdb5610 a2=0 a3=7ffd5bdb55fc items=0 ppid=3085 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:04.004000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:04.029697 kubelet[2939]: I0119 13:05:04.029596 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/741b8339-8e95-4f63-8b40-fa832cf29730-tigera-ca-bundle\") pod \"calico-typha-54bf57c668-7jfnt\" (UID: \"741b8339-8e95-4f63-8b40-fa832cf29730\") " pod="calico-system/calico-typha-54bf57c668-7jfnt" Jan 19 13:05:04.029697 kubelet[2939]: I0119 13:05:04.029680 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6bs9\" (UniqueName: \"kubernetes.io/projected/741b8339-8e95-4f63-8b40-fa832cf29730-kube-api-access-r6bs9\") pod \"calico-typha-54bf57c668-7jfnt\" (UID: \"741b8339-8e95-4f63-8b40-fa832cf29730\") " pod="calico-system/calico-typha-54bf57c668-7jfnt" Jan 19 13:05:04.031230 kernel: audit: type=1327 audit(1768827904.004:540): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:04.031302 kubelet[2939]: I0119 13:05:04.029713 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/741b8339-8e95-4f63-8b40-fa832cf29730-typha-certs\") pod \"calico-typha-54bf57c668-7jfnt\" (UID: \"741b8339-8e95-4f63-8b40-fa832cf29730\") " pod="calico-system/calico-typha-54bf57c668-7jfnt" Jan 19 13:05:04.018000 audit[3353]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:04.018000 audit[3353]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd5bdb5610 a2=0 a3=0 items=0 ppid=3085 pid=3353 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:04.018000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 
19 13:05:04.035879 kernel: audit: type=1325 audit(1768827904.018:541): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3353 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:04.115153 systemd[1]: Created slice kubepods-besteffort-pod49d857ea_b436_466c_8a00_098de9daac98.slice - libcontainer container kubepods-besteffort-pod49d857ea_b436_466c_8a00_098de9daac98.slice. Jan 19 13:05:04.130842 kubelet[2939]: I0119 13:05:04.130198 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/49d857ea-b436-466c-8a00-098de9daac98-cni-log-dir\") pod \"calico-node-b7wkm\" (UID: \"49d857ea-b436-466c-8a00-098de9daac98\") " pod="calico-system/calico-node-b7wkm" Jan 19 13:05:04.132841 kubelet[2939]: I0119 13:05:04.131051 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/49d857ea-b436-466c-8a00-098de9daac98-var-lib-calico\") pod \"calico-node-b7wkm\" (UID: \"49d857ea-b436-466c-8a00-098de9daac98\") " pod="calico-system/calico-node-b7wkm" Jan 19 13:05:04.132841 kubelet[2939]: I0119 13:05:04.131096 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/49d857ea-b436-466c-8a00-098de9daac98-var-run-calico\") pod \"calico-node-b7wkm\" (UID: \"49d857ea-b436-466c-8a00-098de9daac98\") " pod="calico-system/calico-node-b7wkm" Jan 19 13:05:04.132841 kubelet[2939]: I0119 13:05:04.131127 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49d857ea-b436-466c-8a00-098de9daac98-lib-modules\") pod \"calico-node-b7wkm\" (UID: \"49d857ea-b436-466c-8a00-098de9daac98\") " pod="calico-system/calico-node-b7wkm" Jan 19 13:05:04.132841 kubelet[2939]: I0119 13:05:04.131153 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/49d857ea-b436-466c-8a00-098de9daac98-node-certs\") pod \"calico-node-b7wkm\" (UID: \"49d857ea-b436-466c-8a00-098de9daac98\") " pod="calico-system/calico-node-b7wkm" Jan 19 13:05:04.132841 kubelet[2939]: I0119 13:05:04.131207 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/49d857ea-b436-466c-8a00-098de9daac98-cni-bin-dir\") pod \"calico-node-b7wkm\" (UID: \"49d857ea-b436-466c-8a00-098de9daac98\") " pod="calico-system/calico-node-b7wkm" Jan 19 13:05:04.133086 kubelet[2939]: I0119 13:05:04.131263 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49d857ea-b436-466c-8a00-098de9daac98-tigera-ca-bundle\") pod \"calico-node-b7wkm\" (UID: \"49d857ea-b436-466c-8a00-098de9daac98\") " pod="calico-system/calico-node-b7wkm" Jan 19 13:05:04.133086 kubelet[2939]: I0119 13:05:04.131293 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfprk\" (UniqueName: \"kubernetes.io/projected/49d857ea-b436-466c-8a00-098de9daac98-kube-api-access-zfprk\") pod \"calico-node-b7wkm\" (UID: \"49d857ea-b436-466c-8a00-098de9daac98\") " pod="calico-system/calico-node-b7wkm" Jan 19 13:05:04.133086 kubelet[2939]: I0119 13:05:04.131322 2939 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/49d857ea-b436-466c-8a00-098de9daac98-cni-net-dir\") pod \"calico-node-b7wkm\" (UID: \"49d857ea-b436-466c-8a00-098de9daac98\") " pod="calico-system/calico-node-b7wkm" Jan 19 13:05:04.133086 kubelet[2939]: I0119 13:05:04.131361 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/49d857ea-b436-466c-8a00-098de9daac98-policysync\") pod \"calico-node-b7wkm\" (UID: \"49d857ea-b436-466c-8a00-098de9daac98\") " pod="calico-system/calico-node-b7wkm" Jan 19 13:05:04.133086 kubelet[2939]: I0119 13:05:04.131400 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/49d857ea-b436-466c-8a00-098de9daac98-flexvol-driver-host\") pod \"calico-node-b7wkm\" (UID: \"49d857ea-b436-466c-8a00-098de9daac98\") " pod="calico-system/calico-node-b7wkm" Jan 19 13:05:04.133459 kubelet[2939]: I0119 13:05:04.131425 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49d857ea-b436-466c-8a00-098de9daac98-xtables-lock\") pod \"calico-node-b7wkm\" (UID: \"49d857ea-b436-466c-8a00-098de9daac98\") " pod="calico-system/calico-node-b7wkm" Jan 19 13:05:04.210430 kubelet[2939]: E0119 13:05:04.209460 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzctk" podUID="9533a5f4-a04a-442d-b08c-488e8c9d1e7c" Jan 19 13:05:04.231776 kubelet[2939]: I0119 13:05:04.231723 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9533a5f4-a04a-442d-b08c-488e8c9d1e7c-socket-dir\") pod \"csi-node-driver-tzctk\" (UID: \"9533a5f4-a04a-442d-b08c-488e8c9d1e7c\") " pod="calico-system/csi-node-driver-tzctk" Jan 19 13:05:04.233470 kubelet[2939]: I0119 13:05:04.231791 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkt48\" (UniqueName: \"kubernetes.io/projected/9533a5f4-a04a-442d-b08c-488e8c9d1e7c-kube-api-access-fkt48\") pod \"csi-node-driver-tzctk\" (UID: \"9533a5f4-a04a-442d-b08c-488e8c9d1e7c\") " pod="calico-system/csi-node-driver-tzctk" Jan 19 13:05:04.233470 kubelet[2939]: I0119 13:05:04.233437 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9533a5f4-a04a-442d-b08c-488e8c9d1e7c-registration-dir\") pod \"csi-node-driver-tzctk\" (UID: \"9533a5f4-a04a-442d-b08c-488e8c9d1e7c\") " pod="calico-system/csi-node-driver-tzctk" Jan 19 13:05:04.233790 kubelet[2939]: I0119 13:05:04.233476 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9533a5f4-a04a-442d-b08c-488e8c9d1e7c-varrun\") pod \"csi-node-driver-tzctk\" (UID: \"9533a5f4-a04a-442d-b08c-488e8c9d1e7c\") " pod="calico-system/csi-node-driver-tzctk" Jan 19 13:05:04.235756 kubelet[2939]: I0119 13:05:04.235697 2939 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9533a5f4-a04a-442d-b08c-488e8c9d1e7c-kubelet-dir\") pod \"csi-node-driver-tzctk\" (UID: \"9533a5f4-a04a-442d-b08c-488e8c9d1e7c\") " pod="calico-system/csi-node-driver-tzctk" Jan 19 13:05:04.251068 kubelet[2939]: E0119 13:05:04.250650 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.251068 kubelet[2939]: W0119 13:05:04.250721 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.251068 kubelet[2939]: E0119 13:05:04.250950 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.257440 kubelet[2939]: E0119 13:05:04.257255 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.257440 kubelet[2939]: W0119 13:05:04.257278 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.257592 containerd[1646]: time="2026-01-19T13:05:04.257550605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54bf57c668-7jfnt,Uid:741b8339-8e95-4f63-8b40-fa832cf29730,Namespace:calico-system,Attempt:0,}" Jan 19 13:05:04.259152 kubelet[2939]: E0119 13:05:04.258212 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.289756 kubelet[2939]: E0119 13:05:04.289713 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.289922 kubelet[2939]: W0119 13:05:04.289773 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.290479 kubelet[2939]: E0119 13:05:04.290442 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.338120 kubelet[2939]: E0119 13:05:04.338072 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.338120 kubelet[2939]: W0119 13:05:04.338103 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.338120 kubelet[2939]: E0119 13:05:04.338131 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 13:05:04.339207 kubelet[2939]: E0119 13:05:04.339124 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.339426 kubelet[2939]: W0119 13:05:04.339264 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.339426 kubelet[2939]: E0119 13:05:04.339297 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.340115 kubelet[2939]: E0119 13:05:04.340092 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.340115 kubelet[2939]: W0119 13:05:04.340112 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.340724 kubelet[2939]: E0119 13:05:04.340625 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.341406 kubelet[2939]: E0119 13:05:04.341363 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.341976 kubelet[2939]: W0119 13:05:04.341591 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.341976 kubelet[2939]: E0119 13:05:04.341624 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.342583 kubelet[2939]: E0119 13:05:04.342427 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.342866 kubelet[2939]: W0119 13:05:04.342769 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.343160 kubelet[2939]: E0119 13:05:04.343112 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.344159 kubelet[2939]: E0119 13:05:04.344138 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.344348 kubelet[2939]: W0119 13:05:04.344325 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.344921 kubelet[2939]: E0119 13:05:04.344883 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 13:05:04.345426 kubelet[2939]: E0119 13:05:04.345227 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.345426 kubelet[2939]: W0119 13:05:04.345246 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.345426 kubelet[2939]: E0119 13:05:04.345425 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.347197 kubelet[2939]: E0119 13:05:04.347049 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.347197 kubelet[2939]: W0119 13:05:04.347070 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.347197 kubelet[2939]: E0119 13:05:04.347113 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.349098 kubelet[2939]: E0119 13:05:04.348662 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.349098 kubelet[2939]: W0119 13:05:04.348685 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.349098 kubelet[2939]: E0119 13:05:04.348985 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.349098 kubelet[2939]: W0119 13:05:04.348999 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.350380 kubelet[2939]: E0119 13:05:04.350359 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.350520 kubelet[2939]: W0119 13:05:04.350497 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.351330 kubelet[2939]: E0119 13:05:04.351103 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.351330 kubelet[2939]: W0119 13:05:04.351124 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.352147 containerd[1646]: time="2026-01-19T13:05:04.351410320Z" level=info msg="connecting to shim 2217e9280acc892089c7dd5681f674d90ede6a3c5a987d47d7bbdbcead08a790" address="unix:///run/containerd/s/46b4c79d047ce877b4f1942ab8fda117cba6fb72619d6b5597a162abd9feed0b" namespace=k8s.io protocol=ttrpc version=3 Jan 19 13:05:04.352238 kubelet[2939]: E0119 
13:05:04.351698 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.352238 kubelet[2939]: E0119 13:05:04.351720 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.352238 kubelet[2939]: E0119 13:05:04.351758 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.352238 kubelet[2939]: E0119 13:05:04.352195 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.352962 kubelet[2939]: E0119 13:05:04.352350 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.352962 kubelet[2939]: W0119 13:05:04.352603 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.352962 kubelet[2939]: E0119 13:05:04.352629 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.353776 kubelet[2939]: E0119 13:05:04.353683 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.354050 kubelet[2939]: W0119 13:05:04.354014 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.354050 kubelet[2939]: E0119 13:05:04.354042 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.354710 kubelet[2939]: E0119 13:05:04.354681 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.354777 kubelet[2939]: W0119 13:05:04.354703 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.355031 kubelet[2939]: E0119 13:05:04.354859 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 13:05:04.355463 kubelet[2939]: E0119 13:05:04.355436 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.355463 kubelet[2939]: W0119 13:05:04.355456 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.356341 kubelet[2939]: E0119 13:05:04.356184 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.356638 kubelet[2939]: E0119 13:05:04.356621 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.356638 kubelet[2939]: W0119 13:05:04.356636 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.357017 kubelet[2939]: E0119 13:05:04.356988 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.359090 kubelet[2939]: E0119 13:05:04.359048 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.359212 kubelet[2939]: W0119 13:05:04.359161 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.359650 kubelet[2939]: E0119 13:05:04.359623 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.360331 kubelet[2939]: E0119 13:05:04.360295 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.360850 kubelet[2939]: W0119 13:05:04.360732 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.361387 kubelet[2939]: E0119 13:05:04.361310 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.362776 kubelet[2939]: E0119 13:05:04.362749 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.362776 kubelet[2939]: W0119 13:05:04.362771 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.363010 kubelet[2939]: E0119 13:05:04.362864 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 13:05:04.364223 kubelet[2939]: E0119 13:05:04.364197 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.364223 kubelet[2939]: W0119 13:05:04.364218 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.364735 kubelet[2939]: E0119 13:05:04.364671 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.365016 kubelet[2939]: E0119 13:05:04.364991 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.365016 kubelet[2939]: W0119 13:05:04.365011 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.365126 kubelet[2939]: E0119 13:05:04.365036 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.365932 kubelet[2939]: E0119 13:05:04.365905 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.366343 kubelet[2939]: W0119 13:05:04.366029 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.366343 kubelet[2939]: E0119 13:05:04.366210 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.366918 kubelet[2939]: E0119 13:05:04.366893 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.366918 kubelet[2939]: W0119 13:05:04.366915 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.367137 kubelet[2939]: E0119 13:05:04.366931 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.368288 kubelet[2939]: E0119 13:05:04.368259 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.368288 kubelet[2939]: W0119 13:05:04.368281 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.368436 kubelet[2939]: E0119 13:05:04.368326 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 13:05:04.407179 kubelet[2939]: E0119 13:05:04.407126 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:04.407179 kubelet[2939]: W0119 13:05:04.407157 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:04.407179 kubelet[2939]: E0119 13:05:04.407187 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:04.422550 containerd[1646]: time="2026-01-19T13:05:04.422489352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b7wkm,Uid:49d857ea-b436-466c-8a00-098de9daac98,Namespace:calico-system,Attempt:0,}" Jan 19 13:05:04.442762 systemd[1]: Started cri-containerd-2217e9280acc892089c7dd5681f674d90ede6a3c5a987d47d7bbdbcead08a790.scope - libcontainer container 2217e9280acc892089c7dd5681f674d90ede6a3c5a987d47d7bbdbcead08a790. Jan 19 13:05:04.525620 containerd[1646]: time="2026-01-19T13:05:04.525038598Z" level=info msg="connecting to shim caf2e46538c1e1db370e519ad5c5eb59fd8321fd9dacceb1d402809cead2428d" address="unix:///run/containerd/s/03170340e3099550421ae893d1f179ae8a4c895f5b485a0e7d33caf2e3be31c4" namespace=k8s.io protocol=ttrpc version=3 Jan 19 13:05:04.562000 audit: BPF prog-id=155 op=LOAD Jan 19 13:05:04.563000 audit: BPF prog-id=156 op=LOAD Jan 19 13:05:04.563000 audit[3407]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3374 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:04.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232313765393238306163633839323038396337646435363831663637 Jan 19 13:05:04.564000 audit: BPF prog-id=156 op=UNLOAD Jan 19 13:05:04.564000 audit[3407]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3374 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:04.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232313765393238306163633839323038396337646435363831663637 Jan 19 13:05:04.565000 audit: BPF prog-id=157 op=LOAD Jan 19 13:05:04.565000 audit[3407]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3374 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:04.565000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232313765393238306163633839323038396337646435363831663637 Jan 19 13:05:04.565000 audit: BPF prog-id=158 op=LOAD Jan 19 13:05:04.565000 audit[3407]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3374 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:04.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232313765393238306163633839323038396337646435363831663637 Jan 19 13:05:04.565000 audit: BPF prog-id=158 op=UNLOAD Jan 19 13:05:04.565000 audit[3407]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3374 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:04.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232313765393238306163633839323038396337646435363831663637 Jan 19 13:05:04.568000 audit: BPF prog-id=157 op=UNLOAD Jan 19 13:05:04.568000 audit[3407]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3374 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:04.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232313765393238306163633839323038396337646435363831663637 Jan 19 13:05:04.568000 audit: BPF prog-id=159 op=LOAD Jan 19 13:05:04.568000 audit[3407]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3374 pid=3407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:04.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3232313765393238306163633839323038396337646435363831663637 Jan 19 13:05:04.597112 systemd[1]: Started cri-containerd-caf2e46538c1e1db370e519ad5c5eb59fd8321fd9dacceb1d402809cead2428d.scope - libcontainer container caf2e46538c1e1db370e519ad5c5eb59fd8321fd9dacceb1d402809cead2428d. 
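The SYSCALL/PROCTITLE audit records above (both the iptables-restore runs and the runc invocations that start the cri-containerd scopes) carry the process command line in the proctitle= field as a hex-encoded, NUL-separated argv. Below is a minimal Go sketch, not part of any tool referenced in this log, for turning those hex strings back into readable command lines; the sample value is copied from the NETFILTER_CFG records above and decodes to "iptables-restore -w 5 -W 100000 --noflush --counters".

// decode_proctitle.go - minimal sketch for decoding the hex-encoded
// PROCTITLE field seen in the audit records above (argv with NUL
// separators, printed as hex by the audit subsystem).
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// argv elements are separated by NUL bytes; join with spaces to
	// recover the command line as it would appear in ps.
	trimmed := strings.TrimRight(string(raw), "\x00")
	return strings.Join(strings.Split(trimmed, "\x00"), " "), nil
}

func main() {
	// Hex string copied verbatim from the NETFILTER_CFG PROCTITLE records above.
	title := "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
	cmd, err := decodeProctitle(title)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // iptables-restore -w 5 -W 100000 --noflush --counters
}

The runc PROCTITLE values decode the same way (runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/…), cut short where the audit record itself truncates the field.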
Jan 19 13:05:04.619000 audit: BPF prog-id=160 op=LOAD Jan 19 13:05:04.620000 audit: BPF prog-id=161 op=LOAD Jan 19 13:05:04.620000 audit[3448]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3431 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:04.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361663265343635333863316531646233373065353139616435633565 Jan 19 13:05:04.620000 audit: BPF prog-id=161 op=UNLOAD Jan 19 13:05:04.620000 audit[3448]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3431 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:04.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361663265343635333863316531646233373065353139616435633565 Jan 19 13:05:04.620000 audit: BPF prog-id=162 op=LOAD Jan 19 13:05:04.620000 audit[3448]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3431 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:04.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361663265343635333863316531646233373065353139616435633565 Jan 19 13:05:04.621000 audit: BPF prog-id=163 op=LOAD Jan 19 13:05:04.621000 audit[3448]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3431 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:04.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361663265343635333863316531646233373065353139616435633565 Jan 19 13:05:04.621000 audit: BPF prog-id=163 op=UNLOAD Jan 19 13:05:04.621000 audit[3448]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3431 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:04.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361663265343635333863316531646233373065353139616435633565 Jan 19 13:05:04.621000 audit: BPF prog-id=162 op=UNLOAD Jan 19 13:05:04.621000 audit[3448]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3431 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:04.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361663265343635333863316531646233373065353139616435633565 Jan 19 13:05:04.621000 audit: BPF prog-id=164 op=LOAD Jan 19 13:05:04.621000 audit[3448]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3431 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:04.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361663265343635333863316531646233373065353139616435633565 Jan 19 13:05:04.674839 containerd[1646]: time="2026-01-19T13:05:04.674744512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54bf57c668-7jfnt,Uid:741b8339-8e95-4f63-8b40-fa832cf29730,Namespace:calico-system,Attempt:0,} returns sandbox id \"2217e9280acc892089c7dd5681f674d90ede6a3c5a987d47d7bbdbcead08a790\"" Jan 19 13:05:04.678411 containerd[1646]: time="2026-01-19T13:05:04.678339208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 19 13:05:04.680466 containerd[1646]: time="2026-01-19T13:05:04.680429276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b7wkm,Uid:49d857ea-b436-466c-8a00-098de9daac98,Namespace:calico-system,Attempt:0,} returns sandbox id \"caf2e46538c1e1db370e519ad5c5eb59fd8321fd9dacceb1d402809cead2428d\"" Jan 19 13:05:06.092347 kubelet[2939]: E0119 13:05:06.091546 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzctk" podUID="9533a5f4-a04a-442d-b08c-488e8c9d1e7c" Jan 19 13:05:06.195211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3039602830.mount: Deactivated successfully. 
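The repeated driver-call.go / plugins.go errors above come from the kubelet probing its FlexVolume plugin directory before the nodeagent~uds driver exists there (on Calico installs it is typically placed by calico-node's flexvol-driver init container, matching the flexvol-driver-host volume registered above), so each init call finds no executable and returns empty output that fails to unmarshal as JSON. The following is a minimal Go sketch, independent of the kubelet's own probing code, that checks whether that driver binary is present yet; only the path is taken from the log lines, the rest is illustrative.

// flexvol_probe.go - sketch: check whether the FlexVolume driver the
// kubelet is complaining about above has been installed yet.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Path copied from the driver-call.go log lines above.
	const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	info, err := os.Stat(driver)
	if err != nil {
		// The state the log shows: the binary does not exist yet, so every
		// "init" driver call fails with empty output and the JSON
		// unmarshal error seen above.
		fmt.Printf("driver not installed: %v\n", err)
		return
	}
	if info.Mode()&0o111 == 0 {
		fmt.Println("driver present but not executable")
		return
	}
	fmt.Println("driver installed; the kubelet's next plugin probe should succeed")
}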
Jan 19 13:05:08.097759 kubelet[2939]: E0119 13:05:08.097299 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzctk" podUID="9533a5f4-a04a-442d-b08c-488e8c9d1e7c" Jan 19 13:05:08.198634 containerd[1646]: time="2026-01-19T13:05:08.198572746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:05:08.213653 containerd[1646]: time="2026-01-19T13:05:08.199246907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 19 13:05:08.215738 containerd[1646]: time="2026-01-19T13:05:08.202559441Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:05:08.215738 containerd[1646]: time="2026-01-19T13:05:08.206099653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.525459368s" Jan 19 13:05:08.215738 containerd[1646]: time="2026-01-19T13:05:08.214513434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 19 13:05:08.215738 containerd[1646]: time="2026-01-19T13:05:08.214911584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:05:08.217042 containerd[1646]: time="2026-01-19T13:05:08.216990080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 19 13:05:08.240562 containerd[1646]: time="2026-01-19T13:05:08.240495236Z" level=info msg="CreateContainer within sandbox \"2217e9280acc892089c7dd5681f674d90ede6a3c5a987d47d7bbdbcead08a790\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 19 13:05:08.257947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount756206306.mount: Deactivated successfully. 
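The PullImage / "Pulled image" lines above record the CRI-driven pull of ghcr.io/flatcar/calico/typha:v3.30.4 into containerd's k8s.io namespace (the same namespace named in the "connecting to shim" lines). The sketch below performs an equivalent pull with the containerd Go client; it assumes the v1 client module github.com/containerd/containerd and is illustrative only, since in this log the kubelet drives the pull through the CRI API rather than this client.

// pull_image.go - sketch of an equivalent image pull against the same
// containerd instance, in the CRI "k8s.io" namespace.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// WithPullUnpack unpacks the pulled image into the default snapshotter,
	// analogous to what the ImageCreate events above reflect.
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}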
Jan 19 13:05:08.260964 containerd[1646]: time="2026-01-19T13:05:08.260924643Z" level=info msg="Container 42b4f8ae4fde12cc39429082c36bb94f13268f14c20b093288904ff75936f64d: CDI devices from CRI Config.CDIDevices: []" Jan 19 13:05:08.274555 containerd[1646]: time="2026-01-19T13:05:08.274412151Z" level=info msg="CreateContainer within sandbox \"2217e9280acc892089c7dd5681f674d90ede6a3c5a987d47d7bbdbcead08a790\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"42b4f8ae4fde12cc39429082c36bb94f13268f14c20b093288904ff75936f64d\"" Jan 19 13:05:08.276909 containerd[1646]: time="2026-01-19T13:05:08.276278187Z" level=info msg="StartContainer for \"42b4f8ae4fde12cc39429082c36bb94f13268f14c20b093288904ff75936f64d\"" Jan 19 13:05:08.278487 containerd[1646]: time="2026-01-19T13:05:08.278444744Z" level=info msg="connecting to shim 42b4f8ae4fde12cc39429082c36bb94f13268f14c20b093288904ff75936f64d" address="unix:///run/containerd/s/46b4c79d047ce877b4f1942ab8fda117cba6fb72619d6b5597a162abd9feed0b" protocol=ttrpc version=3 Jan 19 13:05:08.346093 systemd[1]: Started cri-containerd-42b4f8ae4fde12cc39429082c36bb94f13268f14c20b093288904ff75936f64d.scope - libcontainer container 42b4f8ae4fde12cc39429082c36bb94f13268f14c20b093288904ff75936f64d. Jan 19 13:05:08.382555 kernel: kauditd_printk_skb: 46 callbacks suppressed Jan 19 13:05:08.382945 kernel: audit: type=1334 audit(1768827908.376:558): prog-id=165 op=LOAD Jan 19 13:05:08.376000 audit: BPF prog-id=165 op=LOAD Jan 19 13:05:08.380000 audit: BPF prog-id=166 op=LOAD Jan 19 13:05:08.385885 kernel: audit: type=1334 audit(1768827908.380:559): prog-id=166 op=LOAD Jan 19 13:05:08.385968 kernel: audit: type=1300 audit(1768827908.380:559): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3374 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:08.380000 audit[3488]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3374 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:08.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432623466386165346664653132636333393432393038326333366262 Jan 19 13:05:08.396882 kernel: audit: type=1327 audit(1768827908.380:559): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432623466386165346664653132636333393432393038326333366262 Jan 19 13:05:08.380000 audit: BPF prog-id=166 op=UNLOAD Jan 19 13:05:08.380000 audit[3488]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3374 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:08.400450 kernel: audit: type=1334 audit(1768827908.380:560): prog-id=166 op=UNLOAD Jan 19 13:05:08.400545 kernel: audit: type=1300 audit(1768827908.380:560): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 
a2=0 a3=0 items=0 ppid=3374 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:08.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432623466386165346664653132636333393432393038326333366262 Jan 19 13:05:08.405288 kernel: audit: type=1327 audit(1768827908.380:560): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432623466386165346664653132636333393432393038326333366262 Jan 19 13:05:08.380000 audit: BPF prog-id=167 op=LOAD Jan 19 13:05:08.409044 kernel: audit: type=1334 audit(1768827908.380:561): prog-id=167 op=LOAD Jan 19 13:05:08.409921 kernel: audit: type=1300 audit(1768827908.380:561): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3374 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:08.380000 audit[3488]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3374 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:08.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432623466386165346664653132636333393432393038326333366262 Jan 19 13:05:08.416632 kernel: audit: type=1327 audit(1768827908.380:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432623466386165346664653132636333393432393038326333366262 Jan 19 13:05:08.380000 audit: BPF prog-id=168 op=LOAD Jan 19 13:05:08.380000 audit[3488]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3374 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:08.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432623466386165346664653132636333393432393038326333366262 Jan 19 13:05:08.380000 audit: BPF prog-id=168 op=UNLOAD Jan 19 13:05:08.380000 audit[3488]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3374 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:08.380000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432623466386165346664653132636333393432393038326333366262 Jan 19 13:05:08.380000 audit: BPF prog-id=167 op=UNLOAD Jan 19 13:05:08.380000 audit[3488]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3374 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:08.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432623466386165346664653132636333393432393038326333366262 Jan 19 13:05:08.381000 audit: BPF prog-id=169 op=LOAD Jan 19 13:05:08.381000 audit[3488]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3374 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:08.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432623466386165346664653132636333393432393038326333366262 Jan 19 13:05:08.470341 containerd[1646]: time="2026-01-19T13:05:08.470266391Z" level=info msg="StartContainer for \"42b4f8ae4fde12cc39429082c36bb94f13268f14c20b093288904ff75936f64d\" returns successfully" Jan 19 13:05:09.302150 kubelet[2939]: I0119 13:05:09.301506 2939 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54bf57c668-7jfnt" podStartSLOduration=2.7623857689999998 podStartE2EDuration="6.301288907s" podCreationTimestamp="2026-01-19 13:05:03 +0000 UTC" firstStartedPulling="2026-01-19 13:05:04.677467532 +0000 UTC m=+26.830362137" lastFinishedPulling="2026-01-19 13:05:08.216370658 +0000 UTC m=+30.369265275" observedRunningTime="2026-01-19 13:05:09.299409466 +0000 UTC m=+31.452304096" watchObservedRunningTime="2026-01-19 13:05:09.301288907 +0000 UTC m=+31.454183512" Jan 19 13:05:09.354143 kubelet[2939]: E0119 13:05:09.354090 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.354143 kubelet[2939]: W0119 13:05:09.354128 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.356078 kubelet[2939]: E0119 13:05:09.356041 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 13:05:09.356424 kubelet[2939]: E0119 13:05:09.356390 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.356424 kubelet[2939]: W0119 13:05:09.356410 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.356591 kubelet[2939]: E0119 13:05:09.356430 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.357418 kubelet[2939]: E0119 13:05:09.356764 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.357418 kubelet[2939]: W0119 13:05:09.356779 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.357418 kubelet[2939]: E0119 13:05:09.356795 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.360267 kubelet[2939]: E0119 13:05:09.360224 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.360267 kubelet[2939]: W0119 13:05:09.360257 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.360398 kubelet[2939]: E0119 13:05:09.360275 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.360640 kubelet[2939]: E0119 13:05:09.360589 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.360640 kubelet[2939]: W0119 13:05:09.360629 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.360732 kubelet[2939]: E0119 13:05:09.360647 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.361032 kubelet[2939]: E0119 13:05:09.360996 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.361032 kubelet[2939]: W0119 13:05:09.361021 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.361140 kubelet[2939]: E0119 13:05:09.361038 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 13:05:09.361359 kubelet[2939]: E0119 13:05:09.361310 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.361359 kubelet[2939]: W0119 13:05:09.361351 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.361479 kubelet[2939]: E0119 13:05:09.361370 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.361674 kubelet[2939]: E0119 13:05:09.361640 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.361674 kubelet[2939]: W0119 13:05:09.361664 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.361804 kubelet[2939]: E0119 13:05:09.361680 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.361988 kubelet[2939]: E0119 13:05:09.361964 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.361988 kubelet[2939]: W0119 13:05:09.361983 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.362105 kubelet[2939]: E0119 13:05:09.361999 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.362273 kubelet[2939]: E0119 13:05:09.362251 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.362320 kubelet[2939]: W0119 13:05:09.362272 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.362320 kubelet[2939]: E0119 13:05:09.362288 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.362567 kubelet[2939]: E0119 13:05:09.362547 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.362567 kubelet[2939]: W0119 13:05:09.362566 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.362661 kubelet[2939]: E0119 13:05:09.362581 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 13:05:09.362860 kubelet[2939]: E0119 13:05:09.362835 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.362860 kubelet[2939]: W0119 13:05:09.362854 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.363230 kubelet[2939]: E0119 13:05:09.362869 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.363230 kubelet[2939]: E0119 13:05:09.363129 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.363230 kubelet[2939]: W0119 13:05:09.363142 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.363230 kubelet[2939]: E0119 13:05:09.363166 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.363423 kubelet[2939]: E0119 13:05:09.363402 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.363423 kubelet[2939]: W0119 13:05:09.363420 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.363527 kubelet[2939]: E0119 13:05:09.363434 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.363694 kubelet[2939]: E0119 13:05:09.363673 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.363694 kubelet[2939]: W0119 13:05:09.363692 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.363870 kubelet[2939]: E0119 13:05:09.363707 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.395523 kubelet[2939]: E0119 13:05:09.395377 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.395523 kubelet[2939]: W0119 13:05:09.395410 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.395523 kubelet[2939]: E0119 13:05:09.395432 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 13:05:09.397773 kubelet[2939]: E0119 13:05:09.396659 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.397773 kubelet[2939]: W0119 13:05:09.396681 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.397773 kubelet[2939]: E0119 13:05:09.396702 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.397773 kubelet[2939]: E0119 13:05:09.397061 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.397773 kubelet[2939]: W0119 13:05:09.397075 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.397773 kubelet[2939]: E0119 13:05:09.397090 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.397773 kubelet[2939]: E0119 13:05:09.397360 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.397773 kubelet[2939]: W0119 13:05:09.397373 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.397773 kubelet[2939]: E0119 13:05:09.397561 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.398753 kubelet[2939]: E0119 13:05:09.398730 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.398753 kubelet[2939]: W0119 13:05:09.398750 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.398987 kubelet[2939]: E0119 13:05:09.398860 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.399164 kubelet[2939]: E0119 13:05:09.399142 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.399164 kubelet[2939]: W0119 13:05:09.399160 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.399385 kubelet[2939]: E0119 13:05:09.399352 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 13:05:09.399726 kubelet[2939]: E0119 13:05:09.399688 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.399726 kubelet[2939]: W0119 13:05:09.399709 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.399846 kubelet[2939]: E0119 13:05:09.399803 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.400232 kubelet[2939]: E0119 13:05:09.400165 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.400232 kubelet[2939]: W0119 13:05:09.400186 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.400232 kubelet[2939]: E0119 13:05:09.400203 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.401063 kubelet[2939]: E0119 13:05:09.400951 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.401063 kubelet[2939]: W0119 13:05:09.400972 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.401063 kubelet[2939]: E0119 13:05:09.401008 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 13:05:09.401914 kubelet[2939]: E0119 13:05:09.401624 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.401914 kubelet[2939]: W0119 13:05:09.401638 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.402174 kubelet[2939]: E0119 13:05:09.402121 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.402174 kubelet[2939]: W0119 13:05:09.402136 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.402527 kubelet[2939]: E0119 13:05:09.402387 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.402527 kubelet[2939]: W0119 13:05:09.402401 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.402527 kubelet[2939]: E0119 13:05:09.402450 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.403154 kubelet[2939]: E0119 13:05:09.402978 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.403154 kubelet[2939]: W0119 13:05:09.402998 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.403154 kubelet[2939]: E0119 13:05:09.403013 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.403154 kubelet[2939]: E0119 13:05:09.403041 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.403624 kubelet[2939]: E0119 13:05:09.403468 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.403789 kubelet[2939]: E0119 13:05:09.403748 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.403789 kubelet[2939]: W0119 13:05:09.403763 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.403967 kubelet[2939]: E0119 13:05:09.403786 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 13:05:09.404804 kubelet[2939]: E0119 13:05:09.404780 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.404804 kubelet[2939]: W0119 13:05:09.404800 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.404940 kubelet[2939]: E0119 13:05:09.404851 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.405560 kubelet[2939]: E0119 13:05:09.405521 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.405735 kubelet[2939]: W0119 13:05:09.405656 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.405735 kubelet[2939]: E0119 13:05:09.405690 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.407088 kubelet[2939]: E0119 13:05:09.407067 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.407241 kubelet[2939]: W0119 13:05:09.407142 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.407398 kubelet[2939]: E0119 13:05:09.407333 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 19 13:05:09.407991 kubelet[2939]: E0119 13:05:09.407904 2939 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 19 13:05:09.407991 kubelet[2939]: W0119 13:05:09.407923 2939 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 19 13:05:09.407991 kubelet[2939]: E0119 13:05:09.407940 2939 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 19 13:05:09.447000 audit[3568]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:09.447000 audit[3568]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffb6c60470 a2=0 a3=7fffb6c6045c items=0 ppid=3085 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:09.447000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:09.451000 audit[3568]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3568 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:09.451000 audit[3568]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffb6c60470 a2=0 a3=7fffb6c6045c items=0 ppid=3085 pid=3568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:09.451000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:09.826375 containerd[1646]: time="2026-01-19T13:05:09.826303293Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:05:09.827919 containerd[1646]: time="2026-01-19T13:05:09.827864682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 19 13:05:09.829519 containerd[1646]: time="2026-01-19T13:05:09.829147005Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:05:09.831925 containerd[1646]: time="2026-01-19T13:05:09.831770654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:05:09.832998 containerd[1646]: time="2026-01-19T13:05:09.832863638Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.615721141s" Jan 19 13:05:09.832998 containerd[1646]: time="2026-01-19T13:05:09.832907807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 19 13:05:09.838645 containerd[1646]: time="2026-01-19T13:05:09.837853342Z" level=info msg="CreateContainer within sandbox \"caf2e46538c1e1db370e519ad5c5eb59fd8321fd9dacceb1d402809cead2428d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 19 13:05:09.851128 containerd[1646]: time="2026-01-19T13:05:09.851067303Z" level=info msg="Container 
d3675d930a858164131b4c8833bc29691fdd86b3df548ef688097b49fd14f07e: CDI devices from CRI Config.CDIDevices: []" Jan 19 13:05:09.862037 containerd[1646]: time="2026-01-19T13:05:09.861913507Z" level=info msg="CreateContainer within sandbox \"caf2e46538c1e1db370e519ad5c5eb59fd8321fd9dacceb1d402809cead2428d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d3675d930a858164131b4c8833bc29691fdd86b3df548ef688097b49fd14f07e\"" Jan 19 13:05:09.862910 containerd[1646]: time="2026-01-19T13:05:09.862866690Z" level=info msg="StartContainer for \"d3675d930a858164131b4c8833bc29691fdd86b3df548ef688097b49fd14f07e\"" Jan 19 13:05:09.865414 containerd[1646]: time="2026-01-19T13:05:09.865356154Z" level=info msg="connecting to shim d3675d930a858164131b4c8833bc29691fdd86b3df548ef688097b49fd14f07e" address="unix:///run/containerd/s/03170340e3099550421ae893d1f179ae8a4c895f5b485a0e7d33caf2e3be31c4" protocol=ttrpc version=3 Jan 19 13:05:09.908101 systemd[1]: Started cri-containerd-d3675d930a858164131b4c8833bc29691fdd86b3df548ef688097b49fd14f07e.scope - libcontainer container d3675d930a858164131b4c8833bc29691fdd86b3df548ef688097b49fd14f07e. Jan 19 13:05:09.995000 audit: BPF prog-id=170 op=LOAD Jan 19 13:05:09.995000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3431 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:09.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433363735643933306138353831363431333162346338383333626332 Jan 19 13:05:09.995000 audit: BPF prog-id=171 op=LOAD Jan 19 13:05:09.995000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3431 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:09.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433363735643933306138353831363431333162346338383333626332 Jan 19 13:05:09.995000 audit: BPF prog-id=171 op=UNLOAD Jan 19 13:05:09.995000 audit[3573]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3431 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:09.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433363735643933306138353831363431333162346338383333626332 Jan 19 13:05:09.995000 audit: BPF prog-id=170 op=UNLOAD Jan 19 13:05:09.995000 audit[3573]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3431 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:09.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433363735643933306138353831363431333162346338383333626332 Jan 19 13:05:09.995000 audit: BPF prog-id=172 op=LOAD Jan 19 13:05:09.995000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3431 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:09.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433363735643933306138353831363431333162346338383333626332 Jan 19 13:05:10.078235 systemd[1]: cri-containerd-d3675d930a858164131b4c8833bc29691fdd86b3df548ef688097b49fd14f07e.scope: Deactivated successfully. Jan 19 13:05:10.083000 audit: BPF prog-id=172 op=UNLOAD Jan 19 13:05:10.096980 kubelet[2939]: E0119 13:05:10.095735 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzctk" podUID="9533a5f4-a04a-442d-b08c-488e8c9d1e7c" Jan 19 13:05:10.119095 containerd[1646]: time="2026-01-19T13:05:10.117629968Z" level=info msg="StartContainer for \"d3675d930a858164131b4c8833bc29691fdd86b3df548ef688097b49fd14f07e\" returns successfully" Jan 19 13:05:10.130733 containerd[1646]: time="2026-01-19T13:05:10.130690591Z" level=info msg="received container exit event container_id:\"d3675d930a858164131b4c8833bc29691fdd86b3df548ef688097b49fd14f07e\" id:\"d3675d930a858164131b4c8833bc29691fdd86b3df548ef688097b49fd14f07e\" pid:3587 exited_at:{seconds:1768827910 nanos:93458799}" Jan 19 13:05:10.172990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3675d930a858164131b4c8833bc29691fdd86b3df548ef688097b49fd14f07e-rootfs.mount: Deactivated successfully. 
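A note on the pod_startup_latency_tracker entry above for calico-typha-54bf57c668-7jfnt: the logged figures are internally consistent. The minimal Python check below uses only the monotonic m=+ offsets copied from that entry; the subtraction is a reading of the logged numbers (E2E duration minus the image-pull window), not a quote of the kubelet's internal formula.

# Values copied from the pod_startup_latency_tracker line (monotonic m=+ offsets, seconds).
first_started_pulling = 26.830362137   # firstStartedPulling  m=+26.830362137
last_finished_pulling = 30.369265275   # lastFinishedPulling  m=+30.369265275
pod_start_e2e         = 6.301288907    # podStartE2EDuration = 13:05:09.301288907 - 13:05:03

pull_window = last_finished_pulling - first_started_pulling   # image-pull window, ~3.538903138 s
print(f"pull window       = {pull_window:.9f} s")
print(f"E2E - pull window = {pod_start_e2e - pull_window:.9f} s")
# -> 2.762385769 s, matching podStartSLOduration=2.7623857689999998 in the entry above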
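The block of FlexVolume failures above is the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init (see "args: [init]") while that executable does not exist yet, presumably because the flexvol-driver container built from pod2daemon-flexvol has only just started installing it; with no executable there is no output, so unmarshalling "" fails with "unexpected end of JSON input". As a rough, hypothetical illustration of the kind of JSON such an init probe is normally answered with (this sketch is not the real uds driver, and the response shape is an assumption based on the usual FlexVolume convention):

#!/usr/bin/env python3
# Hypothetical stand-in for a FlexVolume driver binary, only to show the shape of
# the JSON the kubelet's "init" probe tries to unmarshal in the log above.
import json
import sys

def main() -> None:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # An empty stdout is what produced "unexpected end of JSON input" above;
        # a driver conventionally answers init with a status object like this one.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
    else:
        print(json.dumps({"status": "Not supported"}))

if __name__ == "__main__":
    main()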
Jan 19 13:05:11.299153 containerd[1646]: time="2026-01-19T13:05:11.299103456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 19 13:05:12.092462 kubelet[2939]: E0119 13:05:12.092319 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzctk" podUID="9533a5f4-a04a-442d-b08c-488e8c9d1e7c" Jan 19 13:05:14.096733 kubelet[2939]: E0119 13:05:14.096256 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzctk" podUID="9533a5f4-a04a-442d-b08c-488e8c9d1e7c" Jan 19 13:05:16.092135 kubelet[2939]: E0119 13:05:16.092072 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzctk" podUID="9533a5f4-a04a-442d-b08c-488e8c9d1e7c" Jan 19 13:05:17.113990 containerd[1646]: time="2026-01-19T13:05:17.113920477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:05:17.115706 containerd[1646]: time="2026-01-19T13:05:17.115514281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 19 13:05:17.116554 containerd[1646]: time="2026-01-19T13:05:17.116513603Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:05:17.119125 containerd[1646]: time="2026-01-19T13:05:17.119076232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:05:17.120668 containerd[1646]: time="2026-01-19T13:05:17.120467574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.821312558s" Jan 19 13:05:17.120668 containerd[1646]: time="2026-01-19T13:05:17.120520064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 19 13:05:17.123834 containerd[1646]: time="2026-01-19T13:05:17.123758755Z" level=info msg="CreateContainer within sandbox \"caf2e46538c1e1db370e519ad5c5eb59fd8321fd9dacceb1d402809cead2428d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 19 13:05:17.200411 containerd[1646]: time="2026-01-19T13:05:17.200168856Z" level=info msg="Container 7227e2d25dad2a04876e2d55be45361a5d8a10e2aafd322c938276b67dae1623: CDI devices from CRI Config.CDIDevices: []" Jan 19 13:05:17.219385 containerd[1646]: time="2026-01-19T13:05:17.217715537Z" level=info msg="CreateContainer within sandbox 
\"caf2e46538c1e1db370e519ad5c5eb59fd8321fd9dacceb1d402809cead2428d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7227e2d25dad2a04876e2d55be45361a5d8a10e2aafd322c938276b67dae1623\"" Jan 19 13:05:17.220591 containerd[1646]: time="2026-01-19T13:05:17.220541793Z" level=info msg="StartContainer for \"7227e2d25dad2a04876e2d55be45361a5d8a10e2aafd322c938276b67dae1623\"" Jan 19 13:05:17.223440 containerd[1646]: time="2026-01-19T13:05:17.223401558Z" level=info msg="connecting to shim 7227e2d25dad2a04876e2d55be45361a5d8a10e2aafd322c938276b67dae1623" address="unix:///run/containerd/s/03170340e3099550421ae893d1f179ae8a4c895f5b485a0e7d33caf2e3be31c4" protocol=ttrpc version=3 Jan 19 13:05:17.266224 systemd[1]: Started cri-containerd-7227e2d25dad2a04876e2d55be45361a5d8a10e2aafd322c938276b67dae1623.scope - libcontainer container 7227e2d25dad2a04876e2d55be45361a5d8a10e2aafd322c938276b67dae1623. Jan 19 13:05:17.364553 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 19 13:05:17.365583 kernel: audit: type=1334 audit(1768827917.354:574): prog-id=173 op=LOAD Jan 19 13:05:17.354000 audit: BPF prog-id=173 op=LOAD Jan 19 13:05:17.354000 audit[3635]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3431 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:17.372844 kernel: audit: type=1300 audit(1768827917.354:574): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3431 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:17.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732323765326432356461643261303438373665326435356265343533 Jan 19 13:05:17.379414 kernel: audit: type=1327 audit(1768827917.354:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732323765326432356461643261303438373665326435356265343533 Jan 19 13:05:17.379500 kernel: audit: type=1334 audit(1768827917.354:575): prog-id=174 op=LOAD Jan 19 13:05:17.354000 audit: BPF prog-id=174 op=LOAD Jan 19 13:05:17.354000 audit[3635]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3431 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:17.387851 kernel: audit: type=1300 audit(1768827917.354:575): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3431 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:17.389532 kernel: audit: type=1327 audit(1768827917.354:575): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732323765326432356461643261303438373665326435356265343533 Jan 19 13:05:17.354000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732323765326432356461643261303438373665326435356265343533 Jan 19 13:05:17.361000 audit: BPF prog-id=174 op=UNLOAD Jan 19 13:05:17.394954 kernel: audit: type=1334 audit(1768827917.361:576): prog-id=174 op=UNLOAD Jan 19 13:05:17.361000 audit[3635]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3431 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:17.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732323765326432356461643261303438373665326435356265343533 Jan 19 13:05:17.402835 kernel: audit: type=1300 audit(1768827917.361:576): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3431 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:17.402914 kernel: audit: type=1327 audit(1768827917.361:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732323765326432356461643261303438373665326435356265343533 Jan 19 13:05:17.362000 audit: BPF prog-id=173 op=UNLOAD Jan 19 13:05:17.407638 kernel: audit: type=1334 audit(1768827917.362:577): prog-id=173 op=UNLOAD Jan 19 13:05:17.362000 audit[3635]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3431 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:17.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732323765326432356461643261303438373665326435356265343533 Jan 19 13:05:17.362000 audit: BPF prog-id=175 op=LOAD Jan 19 13:05:17.362000 audit[3635]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3431 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:17.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732323765326432356461643261303438373665326435356265343533 Jan 19 13:05:17.423195 containerd[1646]: 
time="2026-01-19T13:05:17.423038363Z" level=info msg="StartContainer for \"7227e2d25dad2a04876e2d55be45361a5d8a10e2aafd322c938276b67dae1623\" returns successfully" Jan 19 13:05:18.268920 kubelet[2939]: E0119 13:05:18.268850 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tzctk" podUID="9533a5f4-a04a-442d-b08c-488e8c9d1e7c" Jan 19 13:05:18.635467 systemd[1]: cri-containerd-7227e2d25dad2a04876e2d55be45361a5d8a10e2aafd322c938276b67dae1623.scope: Deactivated successfully. Jan 19 13:05:18.638000 audit: BPF prog-id=175 op=UNLOAD Jan 19 13:05:18.636021 systemd[1]: cri-containerd-7227e2d25dad2a04876e2d55be45361a5d8a10e2aafd322c938276b67dae1623.scope: Consumed 800ms CPU time, 162.9M memory peak, 7.1M read from disk, 171.3M written to disk. Jan 19 13:05:18.641967 containerd[1646]: time="2026-01-19T13:05:18.639620992Z" level=info msg="received container exit event container_id:\"7227e2d25dad2a04876e2d55be45361a5d8a10e2aafd322c938276b67dae1623\" id:\"7227e2d25dad2a04876e2d55be45361a5d8a10e2aafd322c938276b67dae1623\" pid:3648 exited_at:{seconds:1768827918 nanos:639145139}" Jan 19 13:05:18.710172 kubelet[2939]: I0119 13:05:18.709657 2939 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 19 13:05:18.716192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7227e2d25dad2a04876e2d55be45361a5d8a10e2aafd322c938276b67dae1623-rootfs.mount: Deactivated successfully. Jan 19 13:05:18.815705 systemd[1]: Created slice kubepods-burstable-podd6dd96a7_8506_4053_b6ee_72e24f4dfadf.slice - libcontainer container kubepods-burstable-podd6dd96a7_8506_4053_b6ee_72e24f4dfadf.slice. Jan 19 13:05:18.837950 systemd[1]: Created slice kubepods-burstable-pod08044186_a9c0_43a1_9659_a816af071539.slice - libcontainer container kubepods-burstable-pod08044186_a9c0_43a1_9659_a816af071539.slice. 
Jan 19 13:05:18.853339 kubelet[2939]: I0119 13:05:18.853223 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08044186-a9c0-43a1-9659-a816af071539-config-volume\") pod \"coredns-668d6bf9bc-d5tkr\" (UID: \"08044186-a9c0-43a1-9659-a816af071539\") " pod="kube-system/coredns-668d6bf9bc-d5tkr" Jan 19 13:05:18.853516 kubelet[2939]: I0119 13:05:18.853326 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/97d7ac57-4a41-4014-aca9-d536c8dde83e-whisker-backend-key-pair\") pod \"whisker-647dcfb6b6-s9g8c\" (UID: \"97d7ac57-4a41-4014-aca9-d536c8dde83e\") " pod="calico-system/whisker-647dcfb6b6-s9g8c" Jan 19 13:05:18.853516 kubelet[2939]: I0119 13:05:18.853399 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/207bff47-91b8-40f6-a83c-1de3cb3c792c-calico-apiserver-certs\") pod \"calico-apiserver-77bb946844-22dld\" (UID: \"207bff47-91b8-40f6-a83c-1de3cb3c792c\") " pod="calico-apiserver/calico-apiserver-77bb946844-22dld" Jan 19 13:05:18.853516 kubelet[2939]: I0119 13:05:18.853431 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f32ab2a-e7b2-4a72-8b17-d785aad340e2-goldmane-ca-bundle\") pod \"goldmane-666569f655-nlspt\" (UID: \"2f32ab2a-e7b2-4a72-8b17-d785aad340e2\") " pod="calico-system/goldmane-666569f655-nlspt" Jan 19 13:05:18.853516 kubelet[2939]: I0119 13:05:18.853461 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ktl8\" (UniqueName: \"kubernetes.io/projected/ed22f491-3777-46a9-8e11-3aad3f6a2fdc-kube-api-access-8ktl8\") pod \"calico-apiserver-77bb946844-dx6t6\" (UID: \"ed22f491-3777-46a9-8e11-3aad3f6a2fdc\") " pod="calico-apiserver/calico-apiserver-77bb946844-dx6t6" Jan 19 13:05:18.853516 kubelet[2939]: I0119 13:05:18.853500 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7rrl\" (UniqueName: \"kubernetes.io/projected/207bff47-91b8-40f6-a83c-1de3cb3c792c-kube-api-access-c7rrl\") pod \"calico-apiserver-77bb946844-22dld\" (UID: \"207bff47-91b8-40f6-a83c-1de3cb3c792c\") " pod="calico-apiserver/calico-apiserver-77bb946844-22dld" Jan 19 13:05:18.855174 kubelet[2939]: I0119 13:05:18.853529 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zhvj\" (UniqueName: \"kubernetes.io/projected/97d7ac57-4a41-4014-aca9-d536c8dde83e-kube-api-access-9zhvj\") pod \"whisker-647dcfb6b6-s9g8c\" (UID: \"97d7ac57-4a41-4014-aca9-d536c8dde83e\") " pod="calico-system/whisker-647dcfb6b6-s9g8c" Jan 19 13:05:18.855174 kubelet[2939]: I0119 13:05:18.853563 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ed22f491-3777-46a9-8e11-3aad3f6a2fdc-calico-apiserver-certs\") pod \"calico-apiserver-77bb946844-dx6t6\" (UID: \"ed22f491-3777-46a9-8e11-3aad3f6a2fdc\") " pod="calico-apiserver/calico-apiserver-77bb946844-dx6t6" Jan 19 13:05:18.855174 kubelet[2939]: I0119 13:05:18.853588 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hh5hv\" (UniqueName: \"kubernetes.io/projected/2f32ab2a-e7b2-4a72-8b17-d785aad340e2-kube-api-access-hh5hv\") pod \"goldmane-666569f655-nlspt\" (UID: \"2f32ab2a-e7b2-4a72-8b17-d785aad340e2\") " pod="calico-system/goldmane-666569f655-nlspt" Jan 19 13:05:18.855174 kubelet[2939]: I0119 13:05:18.853626 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfnhp\" (UniqueName: \"kubernetes.io/projected/2a6c0b6c-6346-4ea5-adab-326e38e7dbe6-kube-api-access-jfnhp\") pod \"calico-kube-controllers-77b44585c9-kqfd8\" (UID: \"2a6c0b6c-6346-4ea5-adab-326e38e7dbe6\") " pod="calico-system/calico-kube-controllers-77b44585c9-kqfd8" Jan 19 13:05:18.855174 kubelet[2939]: I0119 13:05:18.853653 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f32ab2a-e7b2-4a72-8b17-d785aad340e2-config\") pod \"goldmane-666569f655-nlspt\" (UID: \"2f32ab2a-e7b2-4a72-8b17-d785aad340e2\") " pod="calico-system/goldmane-666569f655-nlspt" Jan 19 13:05:18.858120 kubelet[2939]: I0119 13:05:18.853721 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97d7ac57-4a41-4014-aca9-d536c8dde83e-whisker-ca-bundle\") pod \"whisker-647dcfb6b6-s9g8c\" (UID: \"97d7ac57-4a41-4014-aca9-d536c8dde83e\") " pod="calico-system/whisker-647dcfb6b6-s9g8c" Jan 19 13:05:18.858120 kubelet[2939]: I0119 13:05:18.853755 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdrjq\" (UniqueName: \"kubernetes.io/projected/d6dd96a7-8506-4053-b6ee-72e24f4dfadf-kube-api-access-hdrjq\") pod \"coredns-668d6bf9bc-stgcr\" (UID: \"d6dd96a7-8506-4053-b6ee-72e24f4dfadf\") " pod="kube-system/coredns-668d6bf9bc-stgcr" Jan 19 13:05:18.858120 kubelet[2939]: I0119 13:05:18.853789 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2f32ab2a-e7b2-4a72-8b17-d785aad340e2-goldmane-key-pair\") pod \"goldmane-666569f655-nlspt\" (UID: \"2f32ab2a-e7b2-4a72-8b17-d785aad340e2\") " pod="calico-system/goldmane-666569f655-nlspt" Jan 19 13:05:18.858120 kubelet[2939]: I0119 13:05:18.854140 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jtp8\" (UniqueName: \"kubernetes.io/projected/08044186-a9c0-43a1-9659-a816af071539-kube-api-access-7jtp8\") pod \"coredns-668d6bf9bc-d5tkr\" (UID: \"08044186-a9c0-43a1-9659-a816af071539\") " pod="kube-system/coredns-668d6bf9bc-d5tkr" Jan 19 13:05:18.858120 kubelet[2939]: I0119 13:05:18.854174 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a6c0b6c-6346-4ea5-adab-326e38e7dbe6-tigera-ca-bundle\") pod \"calico-kube-controllers-77b44585c9-kqfd8\" (UID: \"2a6c0b6c-6346-4ea5-adab-326e38e7dbe6\") " pod="calico-system/calico-kube-controllers-77b44585c9-kqfd8" Jan 19 13:05:18.856951 systemd[1]: Created slice kubepods-besteffort-pod2a6c0b6c_6346_4ea5_adab_326e38e7dbe6.slice - libcontainer container kubepods-besteffort-pod2a6c0b6c_6346_4ea5_adab_326e38e7dbe6.slice. 
Jan 19 13:05:18.859338 kubelet[2939]: I0119 13:05:18.854274 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6dd96a7-8506-4053-b6ee-72e24f4dfadf-config-volume\") pod \"coredns-668d6bf9bc-stgcr\" (UID: \"d6dd96a7-8506-4053-b6ee-72e24f4dfadf\") " pod="kube-system/coredns-668d6bf9bc-stgcr" Jan 19 13:05:18.869641 systemd[1]: Created slice kubepods-besteffort-pod97d7ac57_4a41_4014_aca9_d536c8dde83e.slice - libcontainer container kubepods-besteffort-pod97d7ac57_4a41_4014_aca9_d536c8dde83e.slice. Jan 19 13:05:18.885001 systemd[1]: Created slice kubepods-besteffort-poded22f491_3777_46a9_8e11_3aad3f6a2fdc.slice - libcontainer container kubepods-besteffort-poded22f491_3777_46a9_8e11_3aad3f6a2fdc.slice. Jan 19 13:05:18.896082 systemd[1]: Created slice kubepods-besteffort-pod2f32ab2a_e7b2_4a72_8b17_d785aad340e2.slice - libcontainer container kubepods-besteffort-pod2f32ab2a_e7b2_4a72_8b17_d785aad340e2.slice. Jan 19 13:05:18.912156 systemd[1]: Created slice kubepods-besteffort-pod207bff47_91b8_40f6_a83c_1de3cb3c792c.slice - libcontainer container kubepods-besteffort-pod207bff47_91b8_40f6_a83c_1de3cb3c792c.slice. Jan 19 13:05:19.131376 containerd[1646]: time="2026-01-19T13:05:19.131318379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-stgcr,Uid:d6dd96a7-8506-4053-b6ee-72e24f4dfadf,Namespace:kube-system,Attempt:0,}" Jan 19 13:05:19.156598 containerd[1646]: time="2026-01-19T13:05:19.156410685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d5tkr,Uid:08044186-a9c0-43a1-9659-a816af071539,Namespace:kube-system,Attempt:0,}" Jan 19 13:05:19.170510 containerd[1646]: time="2026-01-19T13:05:19.170273558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77b44585c9-kqfd8,Uid:2a6c0b6c-6346-4ea5-adab-326e38e7dbe6,Namespace:calico-system,Attempt:0,}" Jan 19 13:05:19.211585 containerd[1646]: time="2026-01-19T13:05:19.211538091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nlspt,Uid:2f32ab2a-e7b2-4a72-8b17-d785aad340e2,Namespace:calico-system,Attempt:0,}" Jan 19 13:05:19.212476 containerd[1646]: time="2026-01-19T13:05:19.212430469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-647dcfb6b6-s9g8c,Uid:97d7ac57-4a41-4014-aca9-d536c8dde83e,Namespace:calico-system,Attempt:0,}" Jan 19 13:05:19.213604 containerd[1646]: time="2026-01-19T13:05:19.213061507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb946844-dx6t6,Uid:ed22f491-3777-46a9-8e11-3aad3f6a2fdc,Namespace:calico-apiserver,Attempt:0,}" Jan 19 13:05:19.217273 containerd[1646]: time="2026-01-19T13:05:19.217229737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb946844-22dld,Uid:207bff47-91b8-40f6-a83c-1de3cb3c792c,Namespace:calico-apiserver,Attempt:0,}" Jan 19 13:05:19.400184 containerd[1646]: time="2026-01-19T13:05:19.399884602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 19 13:05:19.603302 containerd[1646]: time="2026-01-19T13:05:19.603204229Z" level=error msg="Failed to destroy network for sandbox \"7aa8d7b108f652623eff14660b2af2f8bdd01522597714b1051cb61f2c82a945\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.608356 containerd[1646]: 
time="2026-01-19T13:05:19.607480144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nlspt,Uid:2f32ab2a-e7b2-4a72-8b17-d785aad340e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7aa8d7b108f652623eff14660b2af2f8bdd01522597714b1051cb61f2c82a945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.611437 kubelet[2939]: E0119 13:05:19.611351 2939 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7aa8d7b108f652623eff14660b2af2f8bdd01522597714b1051cb61f2c82a945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.615207 kubelet[2939]: E0119 13:05:19.615158 2939 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7aa8d7b108f652623eff14660b2af2f8bdd01522597714b1051cb61f2c82a945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nlspt" Jan 19 13:05:19.615312 kubelet[2939]: E0119 13:05:19.615232 2939 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7aa8d7b108f652623eff14660b2af2f8bdd01522597714b1051cb61f2c82a945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-nlspt" Jan 19 13:05:19.615369 kubelet[2939]: E0119 13:05:19.615329 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-nlspt_calico-system(2f32ab2a-e7b2-4a72-8b17-d785aad340e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-nlspt_calico-system(2f32ab2a-e7b2-4a72-8b17-d785aad340e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7aa8d7b108f652623eff14660b2af2f8bdd01522597714b1051cb61f2c82a945\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-nlspt" podUID="2f32ab2a-e7b2-4a72-8b17-d785aad340e2" Jan 19 13:05:19.634504 containerd[1646]: time="2026-01-19T13:05:19.634439384Z" level=error msg="Failed to destroy network for sandbox \"52b11830f5931fd54cee31f7417203671ac3685fb87204164ca8fa6c3dc36066\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.647676 containerd[1646]: time="2026-01-19T13:05:19.647621662Z" level=error msg="Failed to destroy network for sandbox \"bd93068ae6c92de0bf5fba55d1453e09a9af17c4904e0f0ce48021b31ceacb8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 19 13:05:19.648827 containerd[1646]: time="2026-01-19T13:05:19.648596097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77b44585c9-kqfd8,Uid:2a6c0b6c-6346-4ea5-adab-326e38e7dbe6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"52b11830f5931fd54cee31f7417203671ac3685fb87204164ca8fa6c3dc36066\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.648827 containerd[1646]: time="2026-01-19T13:05:19.648446554Z" level=error msg="Failed to destroy network for sandbox \"c81ae08f58242064d73a38922565aca713642e186c55a125a5c08906f38dca4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.649065 kubelet[2939]: E0119 13:05:19.648986 2939 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52b11830f5931fd54cee31f7417203671ac3685fb87204164ca8fa6c3dc36066\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.649176 kubelet[2939]: E0119 13:05:19.649066 2939 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52b11830f5931fd54cee31f7417203671ac3685fb87204164ca8fa6c3dc36066\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77b44585c9-kqfd8" Jan 19 13:05:19.649176 kubelet[2939]: E0119 13:05:19.649097 2939 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52b11830f5931fd54cee31f7417203671ac3685fb87204164ca8fa6c3dc36066\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77b44585c9-kqfd8" Jan 19 13:05:19.649413 kubelet[2939]: E0119 13:05:19.649173 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77b44585c9-kqfd8_calico-system(2a6c0b6c-6346-4ea5-adab-326e38e7dbe6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77b44585c9-kqfd8_calico-system(2a6c0b6c-6346-4ea5-adab-326e38e7dbe6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52b11830f5931fd54cee31f7417203671ac3685fb87204164ca8fa6c3dc36066\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77b44585c9-kqfd8" podUID="2a6c0b6c-6346-4ea5-adab-326e38e7dbe6" Jan 19 13:05:19.653037 containerd[1646]: time="2026-01-19T13:05:19.652760574Z" level=error msg="Failed to destroy network for sandbox \"a8fc4c7aeddfa2d544814e20ce6f5a675ba3c3d38fa00c59f9a0abec84ef3068\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.656154 containerd[1646]: time="2026-01-19T13:05:19.656101264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-647dcfb6b6-s9g8c,Uid:97d7ac57-4a41-4014-aca9-d536c8dde83e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c81ae08f58242064d73a38922565aca713642e186c55a125a5c08906f38dca4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.656876 kubelet[2939]: E0119 13:05:19.656502 2939 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c81ae08f58242064d73a38922565aca713642e186c55a125a5c08906f38dca4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.656876 kubelet[2939]: E0119 13:05:19.656579 2939 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c81ae08f58242064d73a38922565aca713642e186c55a125a5c08906f38dca4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-647dcfb6b6-s9g8c" Jan 19 13:05:19.656876 kubelet[2939]: E0119 13:05:19.656622 2939 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c81ae08f58242064d73a38922565aca713642e186c55a125a5c08906f38dca4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-647dcfb6b6-s9g8c" Jan 19 13:05:19.657061 kubelet[2939]: E0119 13:05:19.656682 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-647dcfb6b6-s9g8c_calico-system(97d7ac57-4a41-4014-aca9-d536c8dde83e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-647dcfb6b6-s9g8c_calico-system(97d7ac57-4a41-4014-aca9-d536c8dde83e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c81ae08f58242064d73a38922565aca713642e186c55a125a5c08906f38dca4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-647dcfb6b6-s9g8c" podUID="97d7ac57-4a41-4014-aca9-d536c8dde83e" Jan 19 13:05:19.660415 containerd[1646]: time="2026-01-19T13:05:19.660330477Z" level=error msg="Failed to destroy network for sandbox \"ebf5e53c43f9ef860d088ab8f191e8e4cd48b4b1e8a5b8f7218c1e88952b6a51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.663466 containerd[1646]: time="2026-01-19T13:05:19.662784200Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-d5tkr,Uid:08044186-a9c0-43a1-9659-a816af071539,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd93068ae6c92de0bf5fba55d1453e09a9af17c4904e0f0ce48021b31ceacb8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.663990 kubelet[2939]: E0119 13:05:19.663931 2939 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd93068ae6c92de0bf5fba55d1453e09a9af17c4904e0f0ce48021b31ceacb8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.664196 kubelet[2939]: E0119 13:05:19.664158 2939 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd93068ae6c92de0bf5fba55d1453e09a9af17c4904e0f0ce48021b31ceacb8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d5tkr" Jan 19 13:05:19.664640 kubelet[2939]: E0119 13:05:19.664526 2939 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd93068ae6c92de0bf5fba55d1453e09a9af17c4904e0f0ce48021b31ceacb8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d5tkr" Jan 19 13:05:19.665849 kubelet[2939]: E0119 13:05:19.664774 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d5tkr_kube-system(08044186-a9c0-43a1-9659-a816af071539)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d5tkr_kube-system(08044186-a9c0-43a1-9659-a816af071539)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd93068ae6c92de0bf5fba55d1453e09a9af17c4904e0f0ce48021b31ceacb8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d5tkr" podUID="08044186-a9c0-43a1-9659-a816af071539" Jan 19 13:05:19.666956 containerd[1646]: time="2026-01-19T13:05:19.666736960Z" level=error msg="Failed to destroy network for sandbox \"f0af13ec2561b94930810db86c2c4ad1225dc1629ae2360095856da6148e0996\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.666956 containerd[1646]: time="2026-01-19T13:05:19.667217092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb946844-dx6t6,Uid:ed22f491-3777-46a9-8e11-3aad3f6a2fdc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8fc4c7aeddfa2d544814e20ce6f5a675ba3c3d38fa00c59f9a0abec84ef3068\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.667810 kubelet[2939]: E0119 13:05:19.667759 2939 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8fc4c7aeddfa2d544814e20ce6f5a675ba3c3d38fa00c59f9a0abec84ef3068\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.668096 containerd[1646]: time="2026-01-19T13:05:19.667959223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb946844-22dld,Uid:207bff47-91b8-40f6-a83c-1de3cb3c792c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebf5e53c43f9ef860d088ab8f191e8e4cd48b4b1e8a5b8f7218c1e88952b6a51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.668215 kubelet[2939]: E0119 13:05:19.668043 2939 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8fc4c7aeddfa2d544814e20ce6f5a675ba3c3d38fa00c59f9a0abec84ef3068\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77bb946844-dx6t6" Jan 19 13:05:19.668215 kubelet[2939]: E0119 13:05:19.668128 2939 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebf5e53c43f9ef860d088ab8f191e8e4cd48b4b1e8a5b8f7218c1e88952b6a51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.668215 kubelet[2939]: E0119 13:05:19.668168 2939 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebf5e53c43f9ef860d088ab8f191e8e4cd48b4b1e8a5b8f7218c1e88952b6a51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77bb946844-22dld" Jan 19 13:05:19.668215 kubelet[2939]: E0119 13:05:19.668194 2939 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebf5e53c43f9ef860d088ab8f191e8e4cd48b4b1e8a5b8f7218c1e88952b6a51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77bb946844-22dld" Jan 19 13:05:19.669089 kubelet[2939]: E0119 13:05:19.668251 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77bb946844-22dld_calico-apiserver(207bff47-91b8-40f6-a83c-1de3cb3c792c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77bb946844-22dld_calico-apiserver(207bff47-91b8-40f6-a83c-1de3cb3c792c)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"ebf5e53c43f9ef860d088ab8f191e8e4cd48b4b1e8a5b8f7218c1e88952b6a51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77bb946844-22dld" podUID="207bff47-91b8-40f6-a83c-1de3cb3c792c" Jan 19 13:05:19.669089 kubelet[2939]: E0119 13:05:19.668538 2939 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8fc4c7aeddfa2d544814e20ce6f5a675ba3c3d38fa00c59f9a0abec84ef3068\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77bb946844-dx6t6" Jan 19 13:05:19.669224 kubelet[2939]: E0119 13:05:19.668650 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77bb946844-dx6t6_calico-apiserver(ed22f491-3777-46a9-8e11-3aad3f6a2fdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77bb946844-dx6t6_calico-apiserver(ed22f491-3777-46a9-8e11-3aad3f6a2fdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8fc4c7aeddfa2d544814e20ce6f5a675ba3c3d38fa00c59f9a0abec84ef3068\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77bb946844-dx6t6" podUID="ed22f491-3777-46a9-8e11-3aad3f6a2fdc" Jan 19 13:05:19.671955 containerd[1646]: time="2026-01-19T13:05:19.671680553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-stgcr,Uid:d6dd96a7-8506-4053-b6ee-72e24f4dfadf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0af13ec2561b94930810db86c2c4ad1225dc1629ae2360095856da6148e0996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.672696 kubelet[2939]: E0119 13:05:19.671885 2939 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0af13ec2561b94930810db86c2c4ad1225dc1629ae2360095856da6148e0996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:19.672771 kubelet[2939]: E0119 13:05:19.672709 2939 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0af13ec2561b94930810db86c2c4ad1225dc1629ae2360095856da6148e0996\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-stgcr" Jan 19 13:05:19.672771 kubelet[2939]: E0119 13:05:19.672748 2939 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0af13ec2561b94930810db86c2c4ad1225dc1629ae2360095856da6148e0996\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-stgcr" Jan 19 13:05:19.673544 kubelet[2939]: E0119 13:05:19.672799 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-stgcr_kube-system(d6dd96a7-8506-4053-b6ee-72e24f4dfadf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-stgcr_kube-system(d6dd96a7-8506-4053-b6ee-72e24f4dfadf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0af13ec2561b94930810db86c2c4ad1225dc1629ae2360095856da6148e0996\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-stgcr" podUID="d6dd96a7-8506-4053-b6ee-72e24f4dfadf" Jan 19 13:05:19.715130 systemd[1]: run-netns-cni\x2dfe060633\x2d61bb\x2df132\x2da0e8\x2d950e27782f4b.mount: Deactivated successfully. Jan 19 13:05:20.122444 systemd[1]: Created slice kubepods-besteffort-pod9533a5f4_a04a_442d_b08c_488e8c9d1e7c.slice - libcontainer container kubepods-besteffort-pod9533a5f4_a04a_442d_b08c_488e8c9d1e7c.slice. Jan 19 13:05:20.126786 containerd[1646]: time="2026-01-19T13:05:20.126738741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tzctk,Uid:9533a5f4-a04a-442d-b08c-488e8c9d1e7c,Namespace:calico-system,Attempt:0,}" Jan 19 13:05:20.205857 containerd[1646]: time="2026-01-19T13:05:20.205705689Z" level=error msg="Failed to destroy network for sandbox \"51296ea9c0795ec940b7590a383f0de1a80174f29ce6aeee0e77ace6d005875c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:20.208779 systemd[1]: run-netns-cni\x2db35f7643\x2dd977\x2d3553\x2d8d29\x2dbe0cb997eacc.mount: Deactivated successfully. 
Jan 19 13:05:20.213601 containerd[1646]: time="2026-01-19T13:05:20.212966210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tzctk,Uid:9533a5f4-a04a-442d-b08c-488e8c9d1e7c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"51296ea9c0795ec940b7590a383f0de1a80174f29ce6aeee0e77ace6d005875c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:20.214066 kubelet[2939]: E0119 13:05:20.213998 2939 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51296ea9c0795ec940b7590a383f0de1a80174f29ce6aeee0e77ace6d005875c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:20.214153 kubelet[2939]: E0119 13:05:20.214093 2939 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51296ea9c0795ec940b7590a383f0de1a80174f29ce6aeee0e77ace6d005875c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tzctk" Jan 19 13:05:20.214153 kubelet[2939]: E0119 13:05:20.214134 2939 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51296ea9c0795ec940b7590a383f0de1a80174f29ce6aeee0e77ace6d005875c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tzctk" Jan 19 13:05:20.214382 kubelet[2939]: E0119 13:05:20.214203 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tzctk_calico-system(9533a5f4-a04a-442d-b08c-488e8c9d1e7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tzctk_calico-system(9533a5f4-a04a-442d-b08c-488e8c9d1e7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51296ea9c0795ec940b7590a383f0de1a80174f29ce6aeee0e77ace6d005875c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tzctk" podUID="9533a5f4-a04a-442d-b08c-488e8c9d1e7c" Jan 19 13:05:24.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.243.74.46:22-188.166.92.220:51530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:05:24.364538 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 19 13:05:24.369662 kernel: audit: type=1130 audit(1768827924.354:580): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.243.74.46:22-188.166.92.220:51530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:05:24.355081 systemd[1]: Started sshd@10-10.243.74.46:22-188.166.92.220:51530.service - OpenSSH per-connection server daemon (188.166.92.220:51530). Jan 19 13:05:24.586188 sshd[3898]: Connection closed by authenticating user root 188.166.92.220 port 51530 [preauth] Jan 19 13:05:24.585000 audit[3898]: USER_ERR pid=3898 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=188.166.92.220 addr=188.166.92.220 terminal=ssh res=failed' Jan 19 13:05:24.591951 kernel: audit: type=1109 audit(1768827924.585:581): pid=3898 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=188.166.92.220 addr=188.166.92.220 terminal=ssh res=failed' Jan 19 13:05:24.593522 systemd[1]: sshd@10-10.243.74.46:22-188.166.92.220:51530.service: Deactivated successfully. Jan 19 13:05:24.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.243.74.46:22-188.166.92.220:51530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:05:24.599858 kernel: audit: type=1131 audit(1768827924.594:582): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.243.74.46:22-188.166.92.220:51530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:05:30.100591 containerd[1646]: time="2026-01-19T13:05:30.100199070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb946844-dx6t6,Uid:ed22f491-3777-46a9-8e11-3aad3f6a2fdc,Namespace:calico-apiserver,Attempt:0,}" Jan 19 13:05:30.300752 containerd[1646]: time="2026-01-19T13:05:30.300684331Z" level=error msg="Failed to destroy network for sandbox \"e1d820934c352b403a361d5a7e68ee4cc405f1dac064efafe343ccfb4d819079\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:30.305471 systemd[1]: run-netns-cni\x2dbe139a98\x2d093e\x2d1ca5\x2d217a\x2d80cd5ede0a3b.mount: Deactivated successfully. 
Jan 19 13:05:30.307914 containerd[1646]: time="2026-01-19T13:05:30.307170591Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb946844-dx6t6,Uid:ed22f491-3777-46a9-8e11-3aad3f6a2fdc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1d820934c352b403a361d5a7e68ee4cc405f1dac064efafe343ccfb4d819079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:30.308075 kubelet[2939]: E0119 13:05:30.307587 2939 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1d820934c352b403a361d5a7e68ee4cc405f1dac064efafe343ccfb4d819079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:30.308075 kubelet[2939]: E0119 13:05:30.307668 2939 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1d820934c352b403a361d5a7e68ee4cc405f1dac064efafe343ccfb4d819079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77bb946844-dx6t6" Jan 19 13:05:30.308075 kubelet[2939]: E0119 13:05:30.307712 2939 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1d820934c352b403a361d5a7e68ee4cc405f1dac064efafe343ccfb4d819079\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77bb946844-dx6t6" Jan 19 13:05:30.310448 kubelet[2939]: E0119 13:05:30.307773 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77bb946844-dx6t6_calico-apiserver(ed22f491-3777-46a9-8e11-3aad3f6a2fdc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77bb946844-dx6t6_calico-apiserver(ed22f491-3777-46a9-8e11-3aad3f6a2fdc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1d820934c352b403a361d5a7e68ee4cc405f1dac064efafe343ccfb4d819079\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77bb946844-dx6t6" podUID="ed22f491-3777-46a9-8e11-3aad3f6a2fdc" Jan 19 13:05:31.754485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2777021861.mount: Deactivated successfully. 
Jan 19 13:05:31.851705 containerd[1646]: time="2026-01-19T13:05:31.851317504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:05:31.875240 containerd[1646]: time="2026-01-19T13:05:31.874761464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 19 13:05:31.928572 containerd[1646]: time="2026-01-19T13:05:31.928511984Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:05:31.933852 containerd[1646]: time="2026-01-19T13:05:31.932901336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 19 13:05:31.934028 containerd[1646]: time="2026-01-19T13:05:31.933974437Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 12.533989138s" Jan 19 13:05:31.944323 containerd[1646]: time="2026-01-19T13:05:31.944289563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 19 13:05:31.992300 containerd[1646]: time="2026-01-19T13:05:31.992253070Z" level=info msg="CreateContainer within sandbox \"caf2e46538c1e1db370e519ad5c5eb59fd8321fd9dacceb1d402809cead2428d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 19 13:05:32.088201 containerd[1646]: time="2026-01-19T13:05:32.088062506Z" level=info msg="Container d24457e4ab1f90bbd4921671aa0b14d4e801bced42d20113ffb032a2311cca8d: CDI devices from CRI Config.CDIDevices: []" Jan 19 13:05:32.090586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2640380388.mount: Deactivated successfully. 
Jan 19 13:05:32.093241 containerd[1646]: time="2026-01-19T13:05:32.092939253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d5tkr,Uid:08044186-a9c0-43a1-9659-a816af071539,Namespace:kube-system,Attempt:0,}" Jan 19 13:05:32.208934 containerd[1646]: time="2026-01-19T13:05:32.208812118Z" level=info msg="CreateContainer within sandbox \"caf2e46538c1e1db370e519ad5c5eb59fd8321fd9dacceb1d402809cead2428d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d24457e4ab1f90bbd4921671aa0b14d4e801bced42d20113ffb032a2311cca8d\"" Jan 19 13:05:32.212791 containerd[1646]: time="2026-01-19T13:05:32.212753586Z" level=info msg="StartContainer for \"d24457e4ab1f90bbd4921671aa0b14d4e801bced42d20113ffb032a2311cca8d\"" Jan 19 13:05:32.218851 containerd[1646]: time="2026-01-19T13:05:32.218081135Z" level=info msg="connecting to shim d24457e4ab1f90bbd4921671aa0b14d4e801bced42d20113ffb032a2311cca8d" address="unix:///run/containerd/s/03170340e3099550421ae893d1f179ae8a4c895f5b485a0e7d33caf2e3be31c4" protocol=ttrpc version=3 Jan 19 13:05:32.225082 containerd[1646]: time="2026-01-19T13:05:32.225005371Z" level=error msg="Failed to destroy network for sandbox \"10bdb55531dec70d251075bb1c3535ede8980657c8edddf3cb33620c54e3eacb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:32.229581 containerd[1646]: time="2026-01-19T13:05:32.229352111Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d5tkr,Uid:08044186-a9c0-43a1-9659-a816af071539,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"10bdb55531dec70d251075bb1c3535ede8980657c8edddf3cb33620c54e3eacb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:32.230028 kubelet[2939]: E0119 13:05:32.229969 2939 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10bdb55531dec70d251075bb1c3535ede8980657c8edddf3cb33620c54e3eacb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:32.235538 kubelet[2939]: E0119 13:05:32.230529 2939 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10bdb55531dec70d251075bb1c3535ede8980657c8edddf3cb33620c54e3eacb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d5tkr" Jan 19 13:05:32.236340 kubelet[2939]: E0119 13:05:32.236111 2939 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10bdb55531dec70d251075bb1c3535ede8980657c8edddf3cb33620c54e3eacb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d5tkr" Jan 19 13:05:32.236340 kubelet[2939]: E0119 13:05:32.236207 2939 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d5tkr_kube-system(08044186-a9c0-43a1-9659-a816af071539)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d5tkr_kube-system(08044186-a9c0-43a1-9659-a816af071539)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10bdb55531dec70d251075bb1c3535ede8980657c8edddf3cb33620c54e3eacb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d5tkr" podUID="08044186-a9c0-43a1-9659-a816af071539" Jan 19 13:05:32.335191 systemd[1]: Started cri-containerd-d24457e4ab1f90bbd4921671aa0b14d4e801bced42d20113ffb032a2311cca8d.scope - libcontainer container d24457e4ab1f90bbd4921671aa0b14d4e801bced42d20113ffb032a2311cca8d. Jan 19 13:05:32.429000 audit: BPF prog-id=176 op=LOAD Jan 19 13:05:32.439644 kernel: audit: type=1334 audit(1768827932.429:583): prog-id=176 op=LOAD Jan 19 13:05:32.439773 kernel: audit: type=1300 audit(1768827932.429:583): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3431 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:32.429000 audit[3955]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3431 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:32.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432343435376534616231663930626264343932313637316161306231 Jan 19 13:05:32.429000 audit: BPF prog-id=177 op=LOAD Jan 19 13:05:32.450909 kernel: audit: type=1327 audit(1768827932.429:583): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432343435376534616231663930626264343932313637316161306231 Jan 19 13:05:32.451055 kernel: audit: type=1334 audit(1768827932.429:584): prog-id=177 op=LOAD Jan 19 13:05:32.429000 audit[3955]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3431 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:32.460127 kernel: audit: type=1300 audit(1768827932.429:584): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3431 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:32.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432343435376534616231663930626264343932313637316161306231 Jan 19 13:05:32.469920 
kernel: audit: type=1327 audit(1768827932.429:584): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432343435376534616231663930626264343932313637316161306231 Jan 19 13:05:32.429000 audit: BPF prog-id=177 op=UNLOAD Jan 19 13:05:32.480925 kernel: audit: type=1334 audit(1768827932.429:585): prog-id=177 op=UNLOAD Jan 19 13:05:32.429000 audit[3955]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3431 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:32.488239 kernel: audit: type=1300 audit(1768827932.429:585): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3431 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:32.488349 kernel: audit: type=1327 audit(1768827932.429:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432343435376534616231663930626264343932313637316161306231 Jan 19 13:05:32.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432343435376534616231663930626264343932313637316161306231 Jan 19 13:05:32.495408 kernel: audit: type=1334 audit(1768827932.429:586): prog-id=176 op=UNLOAD Jan 19 13:05:32.429000 audit: BPF prog-id=176 op=UNLOAD Jan 19 13:05:32.429000 audit[3955]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3431 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:32.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432343435376534616231663930626264343932313637316161306231 Jan 19 13:05:32.429000 audit: BPF prog-id=178 op=LOAD Jan 19 13:05:32.429000 audit[3955]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3431 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:32.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432343435376534616231663930626264343932313637316161306231 Jan 19 13:05:32.506331 containerd[1646]: time="2026-01-19T13:05:32.506240325Z" level=info msg="StartContainer for \"d24457e4ab1f90bbd4921671aa0b14d4e801bced42d20113ffb032a2311cca8d\" returns successfully" Jan 19 13:05:32.951195 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Jan 19 13:05:32.951479 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 19 13:05:33.094715 containerd[1646]: time="2026-01-19T13:05:33.094647941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-647dcfb6b6-s9g8c,Uid:97d7ac57-4a41-4014-aca9-d536c8dde83e,Namespace:calico-system,Attempt:0,}" Jan 19 13:05:33.096110 containerd[1646]: time="2026-01-19T13:05:33.095578046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77b44585c9-kqfd8,Uid:2a6c0b6c-6346-4ea5-adab-326e38e7dbe6,Namespace:calico-system,Attempt:0,}" Jan 19 13:05:33.098184 containerd[1646]: time="2026-01-19T13:05:33.098126084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb946844-22dld,Uid:207bff47-91b8-40f6-a83c-1de3cb3c792c,Namespace:calico-apiserver,Attempt:0,}" Jan 19 13:05:33.391089 containerd[1646]: time="2026-01-19T13:05:33.390996351Z" level=error msg="Failed to destroy network for sandbox \"624a2d411510f8f4c6368fb3950416c4aefa0e661076a2ec03ea57eb550bae49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:33.395085 containerd[1646]: time="2026-01-19T13:05:33.394344316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-647dcfb6b6-s9g8c,Uid:97d7ac57-4a41-4014-aca9-d536c8dde83e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"624a2d411510f8f4c6368fb3950416c4aefa0e661076a2ec03ea57eb550bae49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:33.398374 systemd[1]: run-netns-cni\x2dbb746b2c\x2db0d6\x2d0c3a\x2dc071\x2d5407be39950f.mount: Deactivated successfully. 
Jan 19 13:05:33.403190 kubelet[2939]: E0119 13:05:33.395677 2939 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"624a2d411510f8f4c6368fb3950416c4aefa0e661076a2ec03ea57eb550bae49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:33.405106 kubelet[2939]: E0119 13:05:33.403802 2939 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"624a2d411510f8f4c6368fb3950416c4aefa0e661076a2ec03ea57eb550bae49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-647dcfb6b6-s9g8c" Jan 19 13:05:33.405106 kubelet[2939]: E0119 13:05:33.404883 2939 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"624a2d411510f8f4c6368fb3950416c4aefa0e661076a2ec03ea57eb550bae49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-647dcfb6b6-s9g8c" Jan 19 13:05:33.405595 kubelet[2939]: E0119 13:05:33.405301 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-647dcfb6b6-s9g8c_calico-system(97d7ac57-4a41-4014-aca9-d536c8dde83e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-647dcfb6b6-s9g8c_calico-system(97d7ac57-4a41-4014-aca9-d536c8dde83e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"624a2d411510f8f4c6368fb3950416c4aefa0e661076a2ec03ea57eb550bae49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-647dcfb6b6-s9g8c" podUID="97d7ac57-4a41-4014-aca9-d536c8dde83e" Jan 19 13:05:33.429899 containerd[1646]: time="2026-01-19T13:05:33.429839158Z" level=error msg="Failed to destroy network for sandbox \"daa21bf36d99d68b98e6b722b376d2a575b0e587e8a37988c2707ec37bbfeccd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:33.441509 containerd[1646]: time="2026-01-19T13:05:33.441339384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb946844-22dld,Uid:207bff47-91b8-40f6-a83c-1de3cb3c792c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"daa21bf36d99d68b98e6b722b376d2a575b0e587e8a37988c2707ec37bbfeccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:33.444072 kubelet[2939]: E0119 13:05:33.441750 2939 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daa21bf36d99d68b98e6b722b376d2a575b0e587e8a37988c2707ec37bbfeccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:33.444072 kubelet[2939]: E0119 13:05:33.441865 2939 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daa21bf36d99d68b98e6b722b376d2a575b0e587e8a37988c2707ec37bbfeccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77bb946844-22dld" Jan 19 13:05:33.444072 kubelet[2939]: E0119 13:05:33.441904 2939 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daa21bf36d99d68b98e6b722b376d2a575b0e587e8a37988c2707ec37bbfeccd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77bb946844-22dld" Jan 19 13:05:33.444260 kubelet[2939]: E0119 13:05:33.441993 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77bb946844-22dld_calico-apiserver(207bff47-91b8-40f6-a83c-1de3cb3c792c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77bb946844-22dld_calico-apiserver(207bff47-91b8-40f6-a83c-1de3cb3c792c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"daa21bf36d99d68b98e6b722b376d2a575b0e587e8a37988c2707ec37bbfeccd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77bb946844-22dld" podUID="207bff47-91b8-40f6-a83c-1de3cb3c792c" Jan 19 13:05:33.470650 containerd[1646]: time="2026-01-19T13:05:33.470552335Z" level=error msg="Failed to destroy network for sandbox \"acf6d2fe904eed379e78aec49d67f1786e726139f4298f1455f1db32de19c405\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:33.482092 containerd[1646]: time="2026-01-19T13:05:33.481635965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77b44585c9-kqfd8,Uid:2a6c0b6c-6346-4ea5-adab-326e38e7dbe6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"acf6d2fe904eed379e78aec49d67f1786e726139f4298f1455f1db32de19c405\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:33.483133 kubelet[2939]: E0119 13:05:33.483067 2939 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acf6d2fe904eed379e78aec49d67f1786e726139f4298f1455f1db32de19c405\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 19 13:05:33.483755 kubelet[2939]: E0119 13:05:33.483315 2939 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"acf6d2fe904eed379e78aec49d67f1786e726139f4298f1455f1db32de19c405\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77b44585c9-kqfd8" Jan 19 13:05:33.483755 kubelet[2939]: E0119 13:05:33.483371 2939 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acf6d2fe904eed379e78aec49d67f1786e726139f4298f1455f1db32de19c405\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77b44585c9-kqfd8" Jan 19 13:05:33.483755 kubelet[2939]: E0119 13:05:33.483447 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77b44585c9-kqfd8_calico-system(2a6c0b6c-6346-4ea5-adab-326e38e7dbe6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77b44585c9-kqfd8_calico-system(2a6c0b6c-6346-4ea5-adab-326e38e7dbe6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acf6d2fe904eed379e78aec49d67f1786e726139f4298f1455f1db32de19c405\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77b44585c9-kqfd8" podUID="2a6c0b6c-6346-4ea5-adab-326e38e7dbe6" Jan 19 13:05:33.649538 kubelet[2939]: I0119 13:05:33.648382 2939 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-b7wkm" podStartSLOduration=2.385392801 podStartE2EDuration="29.647669959s" podCreationTimestamp="2026-01-19 13:05:04 +0000 UTC" firstStartedPulling="2026-01-19 13:05:04.683444847 +0000 UTC m=+26.836339459" lastFinishedPulling="2026-01-19 13:05:31.945721995 +0000 UTC m=+54.098616617" observedRunningTime="2026-01-19 13:05:33.601500971 +0000 UTC m=+55.754395610" watchObservedRunningTime="2026-01-19 13:05:33.647669959 +0000 UTC m=+55.800564583" Jan 19 13:05:33.759473 systemd[1]: run-netns-cni\x2d466f932a\x2d21d9\x2d0af2\x2d409d\x2d15600670a238.mount: Deactivated successfully. Jan 19 13:05:33.759984 systemd[1]: run-netns-cni\x2dabb14428\x2d64c3\x2d0bfd\x2dd2a3\x2df34e6ceb58bb.mount: Deactivated successfully. 
Jan 19 13:05:33.800536 kubelet[2939]: I0119 13:05:33.800073 2939 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zhvj\" (UniqueName: \"kubernetes.io/projected/97d7ac57-4a41-4014-aca9-d536c8dde83e-kube-api-access-9zhvj\") pod \"97d7ac57-4a41-4014-aca9-d536c8dde83e\" (UID: \"97d7ac57-4a41-4014-aca9-d536c8dde83e\") " Jan 19 13:05:33.800536 kubelet[2939]: I0119 13:05:33.800148 2939 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/97d7ac57-4a41-4014-aca9-d536c8dde83e-whisker-backend-key-pair\") pod \"97d7ac57-4a41-4014-aca9-d536c8dde83e\" (UID: \"97d7ac57-4a41-4014-aca9-d536c8dde83e\") " Jan 19 13:05:33.800536 kubelet[2939]: I0119 13:05:33.800198 2939 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97d7ac57-4a41-4014-aca9-d536c8dde83e-whisker-ca-bundle\") pod \"97d7ac57-4a41-4014-aca9-d536c8dde83e\" (UID: \"97d7ac57-4a41-4014-aca9-d536c8dde83e\") " Jan 19 13:05:33.802914 kubelet[2939]: I0119 13:05:33.802767 2939 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97d7ac57-4a41-4014-aca9-d536c8dde83e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "97d7ac57-4a41-4014-aca9-d536c8dde83e" (UID: "97d7ac57-4a41-4014-aca9-d536c8dde83e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 19 13:05:33.813718 systemd[1]: var-lib-kubelet-pods-97d7ac57\x2d4a41\x2d4014\x2daca9\x2dd536c8dde83e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 19 13:05:33.815031 kubelet[2939]: I0119 13:05:33.814983 2939 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97d7ac57-4a41-4014-aca9-d536c8dde83e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "97d7ac57-4a41-4014-aca9-d536c8dde83e" (UID: "97d7ac57-4a41-4014-aca9-d536c8dde83e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 19 13:05:33.818152 kubelet[2939]: I0119 13:05:33.818116 2939 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d7ac57-4a41-4014-aca9-d536c8dde83e-kube-api-access-9zhvj" (OuterVolumeSpecName: "kube-api-access-9zhvj") pod "97d7ac57-4a41-4014-aca9-d536c8dde83e" (UID: "97d7ac57-4a41-4014-aca9-d536c8dde83e"). InnerVolumeSpecName "kube-api-access-9zhvj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 19 13:05:33.823858 systemd[1]: var-lib-kubelet-pods-97d7ac57\x2d4a41\x2d4014\x2daca9\x2dd536c8dde83e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9zhvj.mount: Deactivated successfully. 
Jan 19 13:05:33.900929 kubelet[2939]: I0119 13:05:33.900755 2939 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97d7ac57-4a41-4014-aca9-d536c8dde83e-whisker-ca-bundle\") on node \"srv-hsmf0.gb1.brightbox.com\" DevicePath \"\"" Jan 19 13:05:33.901152 kubelet[2939]: I0119 13:05:33.901113 2939 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9zhvj\" (UniqueName: \"kubernetes.io/projected/97d7ac57-4a41-4014-aca9-d536c8dde83e-kube-api-access-9zhvj\") on node \"srv-hsmf0.gb1.brightbox.com\" DevicePath \"\"" Jan 19 13:05:33.901152 kubelet[2939]: I0119 13:05:33.901150 2939 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/97d7ac57-4a41-4014-aca9-d536c8dde83e-whisker-backend-key-pair\") on node \"srv-hsmf0.gb1.brightbox.com\" DevicePath \"\"" Jan 19 13:05:34.095037 containerd[1646]: time="2026-01-19T13:05:34.094986445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-stgcr,Uid:d6dd96a7-8506-4053-b6ee-72e24f4dfadf,Namespace:kube-system,Attempt:0,}" Jan 19 13:05:34.096160 containerd[1646]: time="2026-01-19T13:05:34.095899277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nlspt,Uid:2f32ab2a-e7b2-4a72-8b17-d785aad340e2,Namespace:calico-system,Attempt:0,}" Jan 19 13:05:34.174665 systemd[1]: Removed slice kubepods-besteffort-pod97d7ac57_4a41_4014_aca9_d536c8dde83e.slice - libcontainer container kubepods-besteffort-pod97d7ac57_4a41_4014_aca9_d536c8dde83e.slice. Jan 19 13:05:34.628266 systemd-networkd[1557]: cali9ab983c47f6: Link UP Jan 19 13:05:34.630593 systemd-networkd[1557]: cali9ab983c47f6: Gained carrier Jan 19 13:05:34.667742 containerd[1646]: 2026-01-19 13:05:34.197 [INFO][4130] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 19 13:05:34.667742 containerd[1646]: 2026-01-19 13:05:34.249 [INFO][4130] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--stgcr-eth0 coredns-668d6bf9bc- kube-system d6dd96a7-8506-4053-b6ee-72e24f4dfadf 851 0 2026-01-19 13:04:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-hsmf0.gb1.brightbox.com coredns-668d6bf9bc-stgcr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9ab983c47f6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-stgcr" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--stgcr-" Jan 19 13:05:34.667742 containerd[1646]: 2026-01-19 13:05:34.250 [INFO][4130] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-stgcr" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--stgcr-eth0" Jan 19 13:05:34.667742 containerd[1646]: 2026-01-19 13:05:34.478 [INFO][4151] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" HandleID="k8s-pod-network.34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" 
Workload="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--stgcr-eth0" Jan 19 13:05:34.668243 containerd[1646]: 2026-01-19 13:05:34.480 [INFO][4151] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" HandleID="k8s-pod-network.34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" Workload="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--stgcr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c71e0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-hsmf0.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-stgcr", "timestamp":"2026-01-19 13:05:34.478792717 +0000 UTC"}, Hostname:"srv-hsmf0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 13:05:34.668243 containerd[1646]: 2026-01-19 13:05:34.480 [INFO][4151] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 13:05:34.668243 containerd[1646]: 2026-01-19 13:05:34.480 [INFO][4151] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 19 13:05:34.668243 containerd[1646]: 2026-01-19 13:05:34.481 [INFO][4151] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hsmf0.gb1.brightbox.com' Jan 19 13:05:34.668243 containerd[1646]: 2026-01-19 13:05:34.498 [INFO][4151] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:34.668243 containerd[1646]: 2026-01-19 13:05:34.517 [INFO][4151] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:34.668243 containerd[1646]: 2026-01-19 13:05:34.527 [INFO][4151] ipam/ipam.go 511: Trying affinity for 192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:34.668243 containerd[1646]: 2026-01-19 13:05:34.534 [INFO][4151] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:34.668243 containerd[1646]: 2026-01-19 13:05:34.542 [INFO][4151] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:34.671040 containerd[1646]: 2026-01-19 13:05:34.544 [INFO][4151] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:34.671040 containerd[1646]: 2026-01-19 13:05:34.550 [INFO][4151] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0 Jan 19 13:05:34.671040 containerd[1646]: 2026-01-19 13:05:34.562 [INFO][4151] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:34.671040 containerd[1646]: 2026-01-19 13:05:34.577 [INFO][4151] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.35.193/26] block=192.168.35.192/26 handle="k8s-pod-network.34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:34.671040 containerd[1646]: 2026-01-19 13:05:34.579 [INFO][4151] ipam/ipam.go 878: Auto-assigned 1 out of 1 
IPv4s: [192.168.35.193/26] handle="k8s-pod-network.34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:34.671040 containerd[1646]: 2026-01-19 13:05:34.581 [INFO][4151] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 19 13:05:34.671040 containerd[1646]: 2026-01-19 13:05:34.581 [INFO][4151] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.35.193/26] IPv6=[] ContainerID="34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" HandleID="k8s-pod-network.34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" Workload="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--stgcr-eth0" Jan 19 13:05:34.671367 containerd[1646]: 2026-01-19 13:05:34.592 [INFO][4130] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-stgcr" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--stgcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--stgcr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d6dd96a7-8506-4053-b6ee-72e24f4dfadf", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 13, 4, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hsmf0.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-stgcr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ab983c47f6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 13:05:34.671367 containerd[1646]: 2026-01-19 13:05:34.592 [INFO][4130] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.193/32] ContainerID="34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-stgcr" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--stgcr-eth0" Jan 19 13:05:34.671367 containerd[1646]: 2026-01-19 13:05:34.593 [INFO][4130] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ab983c47f6 ContainerID="34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-stgcr" 
WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--stgcr-eth0" Jan 19 13:05:34.671367 containerd[1646]: 2026-01-19 13:05:34.633 [INFO][4130] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-stgcr" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--stgcr-eth0" Jan 19 13:05:34.671367 containerd[1646]: 2026-01-19 13:05:34.633 [INFO][4130] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-stgcr" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--stgcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--stgcr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d6dd96a7-8506-4053-b6ee-72e24f4dfadf", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 13, 4, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hsmf0.gb1.brightbox.com", ContainerID:"34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0", Pod:"coredns-668d6bf9bc-stgcr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ab983c47f6", MAC:"26:64:90:16:be:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 13:05:34.671367 containerd[1646]: 2026-01-19 13:05:34.656 [INFO][4130] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-stgcr" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--stgcr-eth0" Jan 19 13:05:34.817842 kubelet[2939]: I0119 13:05:34.816776 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7526f50b-859a-4390-ae5e-37e152f03638-whisker-ca-bundle\") pod \"whisker-795b45d676-wvh8b\" (UID: \"7526f50b-859a-4390-ae5e-37e152f03638\") " pod="calico-system/whisker-795b45d676-wvh8b" Jan 19 13:05:34.819460 kubelet[2939]: I0119 13:05:34.818931 2939 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7526f50b-859a-4390-ae5e-37e152f03638-whisker-backend-key-pair\") pod \"whisker-795b45d676-wvh8b\" (UID: \"7526f50b-859a-4390-ae5e-37e152f03638\") " pod="calico-system/whisker-795b45d676-wvh8b" Jan 19 13:05:34.819460 kubelet[2939]: I0119 13:05:34.818973 2939 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq6h4\" (UniqueName: \"kubernetes.io/projected/7526f50b-859a-4390-ae5e-37e152f03638-kube-api-access-nq6h4\") pod \"whisker-795b45d676-wvh8b\" (UID: \"7526f50b-859a-4390-ae5e-37e152f03638\") " pod="calico-system/whisker-795b45d676-wvh8b" Jan 19 13:05:34.823918 systemd[1]: Created slice kubepods-besteffort-pod7526f50b_859a_4390_ae5e_37e152f03638.slice - libcontainer container kubepods-besteffort-pod7526f50b_859a_4390_ae5e_37e152f03638.slice. Jan 19 13:05:34.853969 systemd-networkd[1557]: caliee3acab80c1: Link UP Jan 19 13:05:34.860892 systemd-networkd[1557]: caliee3acab80c1: Gained carrier Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.210 [INFO][4124] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.249 [INFO][4124] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hsmf0.gb1.brightbox.com-k8s-goldmane--666569f655--nlspt-eth0 goldmane-666569f655- calico-system 2f32ab2a-e7b2-4a72-8b17-d785aad340e2 856 0 2026-01-19 13:05:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-hsmf0.gb1.brightbox.com goldmane-666569f655-nlspt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliee3acab80c1 [] [] }} ContainerID="8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" Namespace="calico-system" Pod="goldmane-666569f655-nlspt" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-goldmane--666569f655--nlspt-" Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.249 [INFO][4124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" Namespace="calico-system" Pod="goldmane-666569f655-nlspt" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-goldmane--666569f655--nlspt-eth0" Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.478 [INFO][4153] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" HandleID="k8s-pod-network.8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" Workload="srv--hsmf0.gb1.brightbox.com-k8s-goldmane--666569f655--nlspt-eth0" Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.483 [INFO][4153] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" HandleID="k8s-pod-network.8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" Workload="srv--hsmf0.gb1.brightbox.com-k8s-goldmane--666569f655--nlspt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001023b0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-hsmf0.gb1.brightbox.com", "pod":"goldmane-666569f655-nlspt", 
"timestamp":"2026-01-19 13:05:34.478787662 +0000 UTC"}, Hostname:"srv-hsmf0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.483 [INFO][4153] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.584 [INFO][4153] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.585 [INFO][4153] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hsmf0.gb1.brightbox.com' Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.617 [INFO][4153] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.659 [INFO][4153] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.705 [INFO][4153] ipam/ipam.go 511: Trying affinity for 192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.717 [INFO][4153] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.738 [INFO][4153] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.738 [INFO][4153] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.749 [INFO][4153] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.769 [INFO][4153] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.815 [INFO][4153] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.35.194/26] block=192.168.35.192/26 handle="k8s-pod-network.8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.815 [INFO][4153] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.194/26] handle="k8s-pod-network.8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.816 [INFO][4153] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 19 13:05:34.898007 containerd[1646]: 2026-01-19 13:05:34.817 [INFO][4153] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.35.194/26] IPv6=[] ContainerID="8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" HandleID="k8s-pod-network.8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" Workload="srv--hsmf0.gb1.brightbox.com-k8s-goldmane--666569f655--nlspt-eth0" Jan 19 13:05:34.899054 containerd[1646]: 2026-01-19 13:05:34.834 [INFO][4124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" Namespace="calico-system" Pod="goldmane-666569f655-nlspt" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-goldmane--666569f655--nlspt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hsmf0.gb1.brightbox.com-k8s-goldmane--666569f655--nlspt-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2f32ab2a-e7b2-4a72-8b17-d785aad340e2", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 13, 5, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hsmf0.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-nlspt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliee3acab80c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 13:05:34.899054 containerd[1646]: 2026-01-19 13:05:34.834 [INFO][4124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.194/32] ContainerID="8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" Namespace="calico-system" Pod="goldmane-666569f655-nlspt" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-goldmane--666569f655--nlspt-eth0" Jan 19 13:05:34.899054 containerd[1646]: 2026-01-19 13:05:34.834 [INFO][4124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee3acab80c1 ContainerID="8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" Namespace="calico-system" Pod="goldmane-666569f655-nlspt" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-goldmane--666569f655--nlspt-eth0" Jan 19 13:05:34.899054 containerd[1646]: 2026-01-19 13:05:34.865 [INFO][4124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" Namespace="calico-system" Pod="goldmane-666569f655-nlspt" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-goldmane--666569f655--nlspt-eth0" Jan 19 13:05:34.899054 containerd[1646]: 2026-01-19 13:05:34.865 [INFO][4124] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" 
Namespace="calico-system" Pod="goldmane-666569f655-nlspt" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-goldmane--666569f655--nlspt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hsmf0.gb1.brightbox.com-k8s-goldmane--666569f655--nlspt-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2f32ab2a-e7b2-4a72-8b17-d785aad340e2", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 13, 5, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hsmf0.gb1.brightbox.com", ContainerID:"8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c", Pod:"goldmane-666569f655-nlspt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliee3acab80c1", MAC:"a2:fd:f2:6f:c2:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 13:05:34.899054 containerd[1646]: 2026-01-19 13:05:34.887 [INFO][4124] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" Namespace="calico-system" Pod="goldmane-666569f655-nlspt" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-goldmane--666569f655--nlspt-eth0" Jan 19 13:05:35.016930 containerd[1646]: time="2026-01-19T13:05:35.016493324Z" level=info msg="connecting to shim 34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0" address="unix:///run/containerd/s/293f287fcafb6d3b62b2855752a8baa907e3e8862101444436aa8c1fe3dc292b" namespace=k8s.io protocol=ttrpc version=3 Jan 19 13:05:35.059048 containerd[1646]: time="2026-01-19T13:05:35.058981399Z" level=info msg="connecting to shim 8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c" address="unix:///run/containerd/s/7a95440065cd3bbbbdb3e9059afd6ce0e45ed1c4342bae3fd6635a37b04fde58" namespace=k8s.io protocol=ttrpc version=3 Jan 19 13:05:35.080594 systemd[1]: Started cri-containerd-34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0.scope - libcontainer container 34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0. 
Jan 19 13:05:35.095064 containerd[1646]: time="2026-01-19T13:05:35.095001535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tzctk,Uid:9533a5f4-a04a-442d-b08c-488e8c9d1e7c,Namespace:calico-system,Attempt:0,}" Jan 19 13:05:35.125000 audit: BPF prog-id=179 op=LOAD Jan 19 13:05:35.127000 audit: BPF prog-id=180 op=LOAD Jan 19 13:05:35.127000 audit[4227]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4216 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334663139643562376537613936326436623130636531653436653738 Jan 19 13:05:35.128000 audit: BPF prog-id=180 op=UNLOAD Jan 19 13:05:35.128000 audit[4227]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4216 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334663139643562376537613936326436623130636531653436653738 Jan 19 13:05:35.129000 audit: BPF prog-id=181 op=LOAD Jan 19 13:05:35.129000 audit[4227]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4216 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334663139643562376537613936326436623130636531653436653738 Jan 19 13:05:35.131000 audit: BPF prog-id=182 op=LOAD Jan 19 13:05:35.131000 audit[4227]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4216 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334663139643562376537613936326436623130636531653436653738 Jan 19 13:05:35.132000 audit: BPF prog-id=182 op=UNLOAD Jan 19 13:05:35.132000 audit[4227]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4216 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.132000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334663139643562376537613936326436623130636531653436653738 Jan 19 13:05:35.134000 audit: BPF prog-id=181 op=UNLOAD Jan 19 13:05:35.134000 audit[4227]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4216 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334663139643562376537613936326436623130636531653436653738 Jan 19 13:05:35.135000 audit: BPF prog-id=183 op=LOAD Jan 19 13:05:35.135000 audit[4227]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4216 pid=4227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334663139643562376537613936326436623130636531653436653738 Jan 19 13:05:35.155184 systemd[1]: Started cri-containerd-8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c.scope - libcontainer container 8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c. 
Jan 19 13:05:35.167176 containerd[1646]: time="2026-01-19T13:05:35.167112501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-795b45d676-wvh8b,Uid:7526f50b-859a-4390-ae5e-37e152f03638,Namespace:calico-system,Attempt:0,}" Jan 19 13:05:35.212000 audit: BPF prog-id=184 op=LOAD Jan 19 13:05:35.216000 audit: BPF prog-id=185 op=LOAD Jan 19 13:05:35.216000 audit[4259]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000206238 a2=98 a3=0 items=0 ppid=4241 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663961303930633366386637363866656332616532633463343535 Jan 19 13:05:35.216000 audit: BPF prog-id=185 op=UNLOAD Jan 19 13:05:35.216000 audit[4259]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4241 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663961303930633366386637363866656332616532633463343535 Jan 19 13:05:35.220000 audit: BPF prog-id=186 op=LOAD Jan 19 13:05:35.220000 audit[4259]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000206488 a2=98 a3=0 items=0 ppid=4241 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.220000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663961303930633366386637363866656332616532633463343535 Jan 19 13:05:35.220000 audit: BPF prog-id=187 op=LOAD Jan 19 13:05:35.220000 audit[4259]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000206218 a2=98 a3=0 items=0 ppid=4241 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.220000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663961303930633366386637363866656332616532633463343535 Jan 19 13:05:35.220000 audit: BPF prog-id=187 op=UNLOAD Jan 19 13:05:35.220000 audit[4259]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4241 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.220000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663961303930633366386637363866656332616532633463343535 Jan 19 13:05:35.220000 audit: BPF prog-id=186 op=UNLOAD Jan 19 13:05:35.220000 audit[4259]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4241 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.220000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663961303930633366386637363866656332616532633463343535 Jan 19 13:05:35.220000 audit: BPF prog-id=188 op=LOAD Jan 19 13:05:35.220000 audit[4259]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0002066e8 a2=98 a3=0 items=0 ppid=4241 pid=4259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.220000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663961303930633366386637363866656332616532633463343535 Jan 19 13:05:35.293316 containerd[1646]: time="2026-01-19T13:05:35.292623744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-stgcr,Uid:d6dd96a7-8506-4053-b6ee-72e24f4dfadf,Namespace:kube-system,Attempt:0,} returns sandbox id \"34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0\"" Jan 19 13:05:35.352849 containerd[1646]: time="2026-01-19T13:05:35.352617063Z" level=info msg="CreateContainer within sandbox \"34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 19 13:05:35.434447 containerd[1646]: time="2026-01-19T13:05:35.434295505Z" level=info msg="Container d702285af9b1b1931fdc8a04f254261dce3a2de7a46902fa822c2c80dd012309: CDI devices from CRI Config.CDIDevices: []" Jan 19 13:05:35.455433 containerd[1646]: time="2026-01-19T13:05:35.454980023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-nlspt,Uid:2f32ab2a-e7b2-4a72-8b17-d785aad340e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"8cf9a090c3f8f768fec2ae2c4c455225644fe68c69fde3ea156699fb5f048c6c\"" Jan 19 13:05:35.470303 containerd[1646]: time="2026-01-19T13:05:35.469160086Z" level=info msg="CreateContainer within sandbox \"34f19d5b7e7a962d6b10ce1e46e78523aa79055c0807f8b63d30bb7354ba93f0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d702285af9b1b1931fdc8a04f254261dce3a2de7a46902fa822c2c80dd012309\"" Jan 19 13:05:35.489951 containerd[1646]: time="2026-01-19T13:05:35.489221422Z" level=info msg="StartContainer for \"d702285af9b1b1931fdc8a04f254261dce3a2de7a46902fa822c2c80dd012309\"" Jan 19 13:05:35.502554 containerd[1646]: time="2026-01-19T13:05:35.501896592Z" level=info msg="connecting to shim d702285af9b1b1931fdc8a04f254261dce3a2de7a46902fa822c2c80dd012309" address="unix:///run/containerd/s/293f287fcafb6d3b62b2855752a8baa907e3e8862101444436aa8c1fe3dc292b" 
protocol=ttrpc version=3 Jan 19 13:05:35.513842 containerd[1646]: time="2026-01-19T13:05:35.513341780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 19 13:05:35.658222 systemd-networkd[1557]: cali5dc67e7c6a1: Link UP Jan 19 13:05:35.665596 systemd-networkd[1557]: cali5dc67e7c6a1: Gained carrier Jan 19 13:05:35.694468 systemd[1]: Started cri-containerd-d702285af9b1b1931fdc8a04f254261dce3a2de7a46902fa822c2c80dd012309.scope - libcontainer container d702285af9b1b1931fdc8a04f254261dce3a2de7a46902fa822c2c80dd012309. Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.255 [INFO][4277] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.297 [INFO][4277] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hsmf0.gb1.brightbox.com-k8s-csi--node--driver--tzctk-eth0 csi-node-driver- calico-system 9533a5f4-a04a-442d-b08c-488e8c9d1e7c 728 0 2026-01-19 13:05:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-hsmf0.gb1.brightbox.com csi-node-driver-tzctk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5dc67e7c6a1 [] [] }} ContainerID="8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" Namespace="calico-system" Pod="csi-node-driver-tzctk" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-csi--node--driver--tzctk-" Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.297 [INFO][4277] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" Namespace="calico-system" Pod="csi-node-driver-tzctk" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-csi--node--driver--tzctk-eth0" Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.481 [INFO][4328] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" HandleID="k8s-pod-network.8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" Workload="srv--hsmf0.gb1.brightbox.com-k8s-csi--node--driver--tzctk-eth0" Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.484 [INFO][4328] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" HandleID="k8s-pod-network.8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" Workload="srv--hsmf0.gb1.brightbox.com-k8s-csi--node--driver--tzctk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fe20), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-hsmf0.gb1.brightbox.com", "pod":"csi-node-driver-tzctk", "timestamp":"2026-01-19 13:05:35.481839814 +0000 UTC"}, Hostname:"srv-hsmf0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.486 [INFO][4328] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.486 [INFO][4328] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.487 [INFO][4328] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hsmf0.gb1.brightbox.com' Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.521 [INFO][4328] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.551 [INFO][4328] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.590 [INFO][4328] ipam/ipam.go 511: Trying affinity for 192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.596 [INFO][4328] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.602 [INFO][4328] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.602 [INFO][4328] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.610 [INFO][4328] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.618 [INFO][4328] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.632 [INFO][4328] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.35.195/26] block=192.168.35.192/26 handle="k8s-pod-network.8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.632 [INFO][4328] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.195/26] handle="k8s-pod-network.8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.632 [INFO][4328] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 19 13:05:35.711179 containerd[1646]: 2026-01-19 13:05:35.633 [INFO][4328] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.35.195/26] IPv6=[] ContainerID="8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" HandleID="k8s-pod-network.8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" Workload="srv--hsmf0.gb1.brightbox.com-k8s-csi--node--driver--tzctk-eth0" Jan 19 13:05:35.713783 containerd[1646]: 2026-01-19 13:05:35.642 [INFO][4277] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" Namespace="calico-system" Pod="csi-node-driver-tzctk" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-csi--node--driver--tzctk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hsmf0.gb1.brightbox.com-k8s-csi--node--driver--tzctk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9533a5f4-a04a-442d-b08c-488e8c9d1e7c", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 13, 5, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hsmf0.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-tzctk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5dc67e7c6a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 13:05:35.713783 containerd[1646]: 2026-01-19 13:05:35.643 [INFO][4277] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.195/32] ContainerID="8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" Namespace="calico-system" Pod="csi-node-driver-tzctk" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-csi--node--driver--tzctk-eth0" Jan 19 13:05:35.713783 containerd[1646]: 2026-01-19 13:05:35.643 [INFO][4277] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5dc67e7c6a1 ContainerID="8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" Namespace="calico-system" Pod="csi-node-driver-tzctk" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-csi--node--driver--tzctk-eth0" Jan 19 13:05:35.713783 containerd[1646]: 2026-01-19 13:05:35.671 [INFO][4277] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" Namespace="calico-system" Pod="csi-node-driver-tzctk" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-csi--node--driver--tzctk-eth0" Jan 19 13:05:35.713783 containerd[1646]: 2026-01-19 13:05:35.674 [INFO][4277] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" Namespace="calico-system" Pod="csi-node-driver-tzctk" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-csi--node--driver--tzctk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hsmf0.gb1.brightbox.com-k8s-csi--node--driver--tzctk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9533a5f4-a04a-442d-b08c-488e8c9d1e7c", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 13, 5, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hsmf0.gb1.brightbox.com", ContainerID:"8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf", Pod:"csi-node-driver-tzctk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5dc67e7c6a1", MAC:"d2:f6:5f:75:84:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 13:05:35.713783 containerd[1646]: 2026-01-19 13:05:35.704 [INFO][4277] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" Namespace="calico-system" Pod="csi-node-driver-tzctk" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-csi--node--driver--tzctk-eth0" Jan 19 13:05:35.755000 audit: BPF prog-id=189 op=LOAD Jan 19 13:05:35.757000 audit: BPF prog-id=190 op=LOAD Jan 19 13:05:35.757000 audit[4374]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4216 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303232383561663962316231393331666463386130346632353432 Jan 19 13:05:35.760000 audit: BPF prog-id=190 op=UNLOAD Jan 19 13:05:35.760000 audit[4374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4216 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.760000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303232383561663962316231393331666463386130346632353432 Jan 19 13:05:35.760000 audit: BPF prog-id=191 op=LOAD Jan 19 13:05:35.760000 audit[4374]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4216 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303232383561663962316231393331666463386130346632353432 Jan 19 13:05:35.760000 audit: BPF prog-id=192 op=LOAD Jan 19 13:05:35.760000 audit[4374]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4216 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303232383561663962316231393331666463386130346632353432 Jan 19 13:05:35.760000 audit: BPF prog-id=192 op=UNLOAD Jan 19 13:05:35.760000 audit[4374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4216 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303232383561663962316231393331666463386130346632353432 Jan 19 13:05:35.760000 audit: BPF prog-id=191 op=UNLOAD Jan 19 13:05:35.760000 audit[4374]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4216 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303232383561663962316231393331666463386130346632353432 Jan 19 13:05:35.760000 audit: BPF prog-id=193 op=LOAD Jan 19 13:05:35.760000 audit[4374]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4216 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:35.760000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437303232383561663962316231393331666463386130346632353432 Jan 19 13:05:35.790993 systemd-networkd[1557]: cali2e6c8d0fe42: Link UP Jan 19 13:05:35.803271 systemd-networkd[1557]: cali2e6c8d0fe42: Gained carrier Jan 19 13:05:35.851430 containerd[1646]: time="2026-01-19T13:05:35.851192978Z" level=info msg="connecting to shim 8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf" address="unix:///run/containerd/s/0b7cdc2e78dffea7f37e11701f6c1860ba8fadc20bba6716267b74ea4460643e" namespace=k8s.io protocol=ttrpc version=3 Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.360 [INFO][4293] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.425 [INFO][4293] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hsmf0.gb1.brightbox.com-k8s-whisker--795b45d676--wvh8b-eth0 whisker-795b45d676- calico-system 7526f50b-859a-4390-ae5e-37e152f03638 946 0 2026-01-19 13:05:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:795b45d676 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-hsmf0.gb1.brightbox.com whisker-795b45d676-wvh8b eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2e6c8d0fe42 [] [] }} ContainerID="c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" Namespace="calico-system" Pod="whisker-795b45d676-wvh8b" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-whisker--795b45d676--wvh8b-" Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.425 [INFO][4293] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" Namespace="calico-system" Pod="whisker-795b45d676-wvh8b" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-whisker--795b45d676--wvh8b-eth0" Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.613 [INFO][4363] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" HandleID="k8s-pod-network.c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" Workload="srv--hsmf0.gb1.brightbox.com-k8s-whisker--795b45d676--wvh8b-eth0" Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.613 [INFO][4363] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" HandleID="k8s-pod-network.c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" Workload="srv--hsmf0.gb1.brightbox.com-k8s-whisker--795b45d676--wvh8b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000397990), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-hsmf0.gb1.brightbox.com", "pod":"whisker-795b45d676-wvh8b", "timestamp":"2026-01-19 13:05:35.613265256 +0000 UTC"}, Hostname:"srv-hsmf0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.613 [INFO][4363] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.635 [INFO][4363] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.636 [INFO][4363] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hsmf0.gb1.brightbox.com' Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.662 [INFO][4363] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.678 [INFO][4363] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.722 [INFO][4363] ipam/ipam.go 511: Trying affinity for 192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.728 [INFO][4363] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.734 [INFO][4363] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.735 [INFO][4363] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.741 [INFO][4363] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1 Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.754 [INFO][4363] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.769 [INFO][4363] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.35.196/26] block=192.168.35.192/26 handle="k8s-pod-network.c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.769 [INFO][4363] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.196/26] handle="k8s-pod-network.c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.769 [INFO][4363] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 19 13:05:35.877698 containerd[1646]: 2026-01-19 13:05:35.769 [INFO][4363] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.35.196/26] IPv6=[] ContainerID="c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" HandleID="k8s-pod-network.c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" Workload="srv--hsmf0.gb1.brightbox.com-k8s-whisker--795b45d676--wvh8b-eth0" Jan 19 13:05:35.881677 containerd[1646]: 2026-01-19 13:05:35.774 [INFO][4293] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" Namespace="calico-system" Pod="whisker-795b45d676-wvh8b" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-whisker--795b45d676--wvh8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hsmf0.gb1.brightbox.com-k8s-whisker--795b45d676--wvh8b-eth0", GenerateName:"whisker-795b45d676-", Namespace:"calico-system", SelfLink:"", UID:"7526f50b-859a-4390-ae5e-37e152f03638", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 13, 5, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"795b45d676", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hsmf0.gb1.brightbox.com", ContainerID:"", Pod:"whisker-795b45d676-wvh8b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.35.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2e6c8d0fe42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 13:05:35.881677 containerd[1646]: 2026-01-19 13:05:35.775 [INFO][4293] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.196/32] ContainerID="c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" Namespace="calico-system" Pod="whisker-795b45d676-wvh8b" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-whisker--795b45d676--wvh8b-eth0" Jan 19 13:05:35.881677 containerd[1646]: 2026-01-19 13:05:35.775 [INFO][4293] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e6c8d0fe42 ContainerID="c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" Namespace="calico-system" Pod="whisker-795b45d676-wvh8b" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-whisker--795b45d676--wvh8b-eth0" Jan 19 13:05:35.881677 containerd[1646]: 2026-01-19 13:05:35.829 [INFO][4293] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" Namespace="calico-system" Pod="whisker-795b45d676-wvh8b" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-whisker--795b45d676--wvh8b-eth0" Jan 19 13:05:35.881677 containerd[1646]: 2026-01-19 13:05:35.835 [INFO][4293] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" 
Namespace="calico-system" Pod="whisker-795b45d676-wvh8b" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-whisker--795b45d676--wvh8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hsmf0.gb1.brightbox.com-k8s-whisker--795b45d676--wvh8b-eth0", GenerateName:"whisker-795b45d676-", Namespace:"calico-system", SelfLink:"", UID:"7526f50b-859a-4390-ae5e-37e152f03638", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 13, 5, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"795b45d676", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hsmf0.gb1.brightbox.com", ContainerID:"c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1", Pod:"whisker-795b45d676-wvh8b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.35.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2e6c8d0fe42", MAC:"1a:c5:ea:34:cc:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 13:05:35.881677 containerd[1646]: 2026-01-19 13:05:35.863 [INFO][4293] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" Namespace="calico-system" Pod="whisker-795b45d676-wvh8b" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-whisker--795b45d676--wvh8b-eth0" Jan 19 13:05:35.909553 containerd[1646]: time="2026-01-19T13:05:35.908650882Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:05:35.917760 containerd[1646]: time="2026-01-19T13:05:35.917617557Z" level=info msg="StartContainer for \"d702285af9b1b1931fdc8a04f254261dce3a2de7a46902fa822c2c80dd012309\" returns successfully" Jan 19 13:05:35.920091 containerd[1646]: time="2026-01-19T13:05:35.919473104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 19 13:05:35.923284 containerd[1646]: time="2026-01-19T13:05:35.920305193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 19 13:05:35.926951 kubelet[2939]: E0119 13:05:35.926133 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 13:05:35.926951 kubelet[2939]: E0119 13:05:35.926207 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 13:05:35.966788 kubelet[2939]: E0119 13:05:35.964073 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hh5hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nlspt_calico-system(2f32ab2a-e7b2-4a72-8b17-d785aad340e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 19 13:05:35.990098 kubelet[2939]: E0119 13:05:35.989761 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nlspt" 
podUID="2f32ab2a-e7b2-4a72-8b17-d785aad340e2" Jan 19 13:05:36.003518 containerd[1646]: time="2026-01-19T13:05:36.003179865Z" level=info msg="connecting to shim c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1" address="unix:///run/containerd/s/6fd1050a82d45e13deb90bacf6ae6b5fbba4e8cb6d1c8759fcb5e417b44a6a44" namespace=k8s.io protocol=ttrpc version=3 Jan 19 13:05:36.009641 systemd[1]: Started cri-containerd-8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf.scope - libcontainer container 8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf. Jan 19 13:05:36.084142 systemd[1]: Started cri-containerd-c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1.scope - libcontainer container c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1. Jan 19 13:05:36.095000 audit: BPF prog-id=194 op=LOAD Jan 19 13:05:36.097000 audit: BPF prog-id=195 op=LOAD Jan 19 13:05:36.097000 audit[4493]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4463 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864636563336134326232313664656330636534393031643566616331 Jan 19 13:05:36.097000 audit: BPF prog-id=195 op=UNLOAD Jan 19 13:05:36.097000 audit[4493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4463 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864636563336134326232313664656330636534393031643566616331 Jan 19 13:05:36.098000 audit: BPF prog-id=196 op=LOAD Jan 19 13:05:36.098000 audit[4493]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4463 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864636563336134326232313664656330636534393031643566616331 Jan 19 13:05:36.098000 audit: BPF prog-id=197 op=LOAD Jan 19 13:05:36.098000 audit[4493]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4463 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.098000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864636563336134326232313664656330636534393031643566616331 Jan 19 13:05:36.098000 audit: BPF prog-id=197 op=UNLOAD Jan 19 13:05:36.098000 audit[4493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4463 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.098000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864636563336134326232313664656330636534393031643566616331 Jan 19 13:05:36.099000 audit: BPF prog-id=196 op=UNLOAD Jan 19 13:05:36.099000 audit[4493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4463 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864636563336134326232313664656330636534393031643566616331 Jan 19 13:05:36.099000 audit: BPF prog-id=198 op=LOAD Jan 19 13:05:36.099000 audit[4493]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4463 pid=4493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3864636563336134326232313664656330636534393031643566616331 Jan 19 13:05:36.110541 kubelet[2939]: I0119 13:05:36.110492 2939 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97d7ac57-4a41-4014-aca9-d536c8dde83e" path="/var/lib/kubelet/pods/97d7ac57-4a41-4014-aca9-d536c8dde83e/volumes" Jan 19 13:05:36.170025 containerd[1646]: time="2026-01-19T13:05:36.169971408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tzctk,Uid:9533a5f4-a04a-442d-b08c-488e8c9d1e7c,Namespace:calico-system,Attempt:0,} returns sandbox id \"8dcec3a42b216dec0ce4901d5fac14ca1de0153e9214029e6e878c48e49d0cbf\"" Jan 19 13:05:36.174509 containerd[1646]: time="2026-01-19T13:05:36.174369494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 19 13:05:36.186708 systemd-networkd[1557]: caliee3acab80c1: Gained IPv6LL Jan 19 13:05:36.192000 audit: BPF prog-id=199 op=LOAD Jan 19 13:05:36.194000 audit: BPF prog-id=200 op=LOAD Jan 19 13:05:36.194000 audit[4531]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4520 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.194000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338363734616632653265626434393666333561663839646465343166 Jan 19 13:05:36.194000 audit: BPF prog-id=200 op=UNLOAD Jan 19 13:05:36.194000 audit[4531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4520 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338363734616632653265626434393666333561663839646465343166 Jan 19 13:05:36.194000 audit: BPF prog-id=201 op=LOAD Jan 19 13:05:36.194000 audit[4531]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4520 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338363734616632653265626434393666333561663839646465343166 Jan 19 13:05:36.194000 audit: BPF prog-id=202 op=LOAD Jan 19 13:05:36.194000 audit[4531]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4520 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338363734616632653265626434393666333561663839646465343166 Jan 19 13:05:36.195000 audit: BPF prog-id=202 op=UNLOAD Jan 19 13:05:36.195000 audit[4531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4520 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338363734616632653265626434393666333561663839646465343166 Jan 19 13:05:36.195000 audit: BPF prog-id=201 op=UNLOAD Jan 19 13:05:36.195000 audit[4531]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4520 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.195000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338363734616632653265626434393666333561663839646465343166 Jan 19 13:05:36.195000 audit: BPF prog-id=203 op=LOAD Jan 19 13:05:36.195000 audit[4531]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4520 pid=4531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338363734616632653265626434393666333561663839646465343166 Jan 19 13:05:36.388769 containerd[1646]: time="2026-01-19T13:05:36.384737680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-795b45d676-wvh8b,Uid:7526f50b-859a-4390-ae5e-37e152f03638,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8674af2e2ebd496f35af89dde41f54a6af1634c970c6f030faa94edf490e5a1\"" Jan 19 13:05:36.516720 containerd[1646]: time="2026-01-19T13:05:36.516338916Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:05:36.520124 containerd[1646]: time="2026-01-19T13:05:36.519995109Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 19 13:05:36.520314 containerd[1646]: time="2026-01-19T13:05:36.520056234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 19 13:05:36.527015 kubelet[2939]: E0119 13:05:36.526945 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 13:05:36.527180 kubelet[2939]: E0119 13:05:36.527036 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 13:05:36.530293 containerd[1646]: time="2026-01-19T13:05:36.530204582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 19 13:05:36.532440 kubelet[2939]: E0119 13:05:36.532137 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkt48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tzctk_calico-system(9533a5f4-a04a-442d-b08c-488e8c9d1e7c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 19 13:05:36.679618 kubelet[2939]: E0119 13:05:36.678910 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nlspt" podUID="2f32ab2a-e7b2-4a72-8b17-d785aad340e2" Jan 19 13:05:36.693147 systemd-networkd[1557]: cali5dc67e7c6a1: Gained IPv6LL Jan 19 13:05:36.695500 systemd-networkd[1557]: cali9ab983c47f6: Gained IPv6LL Jan 19 13:05:36.732388 kubelet[2939]: I0119 13:05:36.711932 2939 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-stgcr" podStartSLOduration=52.711905826 podStartE2EDuration="52.711905826s" podCreationTimestamp="2026-01-19 13:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-19 13:05:36.691108573 +0000 UTC m=+58.844003210" watchObservedRunningTime="2026-01-19 13:05:36.711905826 +0000 UTC m=+58.864800450" Jan 19 13:05:36.835000 audit[4597]: NETFILTER_CFG table=filter:119 family=2 entries=20 op=nft_register_rule pid=4597 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:36.835000 audit[4597]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=7480 a0=3 a1=7fffc499c9f0 a2=0 a3=7fffc499c9dc items=0 ppid=3085 pid=4597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.835000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:36.842000 audit[4597]: NETFILTER_CFG table=nat:120 family=2 entries=14 op=nft_register_rule pid=4597 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:36.842000 audit[4597]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffc499c9f0 a2=0 a3=0 items=0 ppid=3085 pid=4597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:36.849841 containerd[1646]: time="2026-01-19T13:05:36.849618441Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:05:36.850711 containerd[1646]: time="2026-01-19T13:05:36.850666806Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 19 13:05:36.851073 containerd[1646]: time="2026-01-19T13:05:36.851041932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 19 13:05:36.851614 kubelet[2939]: E0119 13:05:36.851545 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 13:05:36.851703 kubelet[2939]: E0119 13:05:36.851607 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 13:05:36.852196 containerd[1646]: time="2026-01-19T13:05:36.852145776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 19 13:05:36.852977 kubelet[2939]: E0119 13:05:36.852702 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:818d65b8ce8c49218c255fe0e28b9b06,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nq6h4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-795b45d676-wvh8b_calico-system(7526f50b-859a-4390-ae5e-37e152f03638): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 19 13:05:36.916000 audit[4617]: NETFILTER_CFG table=filter:121 family=2 entries=17 op=nft_register_rule pid=4617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:36.916000 audit[4617]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc3a40f070 a2=0 a3=7ffc3a40f05c items=0 ppid=3085 pid=4617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.916000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:36.920000 audit: BPF prog-id=204 op=LOAD Jan 19 13:05:36.920000 audit[4620]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd62f62160 a2=98 a3=1fffffffffffffff items=0 ppid=4321 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.920000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 19 13:05:36.921000 audit: BPF prog-id=204 op=UNLOAD Jan 19 13:05:36.921000 audit[4620]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd62f62130 a3=0 items=0 ppid=4321 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 19 13:05:36.921000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 19 13:05:36.922000 audit: BPF prog-id=205 op=LOAD Jan 19 13:05:36.922000 audit[4620]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd62f62040 a2=94 a3=3 items=0 ppid=4321 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.922000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 19 13:05:36.922000 audit: BPF prog-id=205 op=UNLOAD Jan 19 13:05:36.922000 audit[4620]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd62f62040 a2=94 a3=3 items=0 ppid=4321 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.922000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 19 13:05:36.922000 audit[4617]: NETFILTER_CFG table=nat:122 family=2 entries=35 op=nft_register_chain pid=4617 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:36.922000 audit[4617]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc3a40f070 a2=0 a3=7ffc3a40f05c items=0 ppid=3085 pid=4617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.922000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:36.922000 audit: BPF prog-id=206 op=LOAD Jan 19 13:05:36.922000 audit[4620]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd62f62080 a2=94 a3=7ffd62f62260 items=0 ppid=4321 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.922000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 19 13:05:36.922000 audit: BPF prog-id=206 op=UNLOAD Jan 19 13:05:36.922000 audit[4620]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd62f62080 a2=94 a3=7ffd62f62260 items=0 ppid=4321 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.922000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 19 13:05:36.927000 audit: BPF prog-id=207 op=LOAD Jan 19 13:05:36.927000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd80020730 a2=98 a3=3 items=0 ppid=4321 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.927000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 13:05:36.928000 audit: BPF prog-id=207 op=UNLOAD Jan 19 13:05:36.928000 audit[4621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd80020700 a3=0 items=0 ppid=4321 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.928000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 13:05:36.932000 audit: BPF prog-id=208 op=LOAD Jan 19 13:05:36.932000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd80020520 a2=94 a3=54428f items=0 ppid=4321 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.932000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 13:05:36.932000 audit: BPF prog-id=208 op=UNLOAD Jan 19 13:05:36.932000 audit[4621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd80020520 a2=94 a3=54428f items=0 ppid=4321 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.932000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 13:05:36.932000 audit: BPF prog-id=209 op=LOAD Jan 19 13:05:36.932000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd80020550 a2=94 a3=2 items=0 ppid=4321 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.932000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 13:05:36.932000 audit: BPF prog-id=209 op=UNLOAD Jan 19 13:05:36.932000 audit[4621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd80020550 a2=0 a3=2 items=0 ppid=4321 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:36.932000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 13:05:37.177350 containerd[1646]: time="2026-01-19T13:05:37.177010712Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:05:37.179084 containerd[1646]: time="2026-01-19T13:05:37.179020175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 19 13:05:37.179505 containerd[1646]: time="2026-01-19T13:05:37.179212700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 19 13:05:37.180182 kubelet[2939]: E0119 13:05:37.180041 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 13:05:37.182544 kubelet[2939]: E0119 13:05:37.180150 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 13:05:37.182544 kubelet[2939]: E0119 13:05:37.181022 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkt48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tzctk_calico-system(9533a5f4-a04a-442d-b08c-488e8c9d1e7c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 19 13:05:37.183064 kubelet[2939]: E0119 13:05:37.183014 2939 pod_workers.go:1301] "Error 
syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tzctk" podUID="9533a5f4-a04a-442d-b08c-488e8c9d1e7c" Jan 19 13:05:37.183989 containerd[1646]: time="2026-01-19T13:05:37.183745903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 19 13:05:37.184000 audit: BPF prog-id=210 op=LOAD Jan 19 13:05:37.184000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd80020410 a2=94 a3=1 items=0 ppid=4321 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.184000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 13:05:37.184000 audit: BPF prog-id=210 op=UNLOAD Jan 19 13:05:37.184000 audit[4621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd80020410 a2=94 a3=1 items=0 ppid=4321 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.184000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 13:05:37.201000 audit: BPF prog-id=211 op=LOAD Jan 19 13:05:37.201000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd80020400 a2=94 a3=4 items=0 ppid=4321 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.201000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 13:05:37.201000 audit: BPF prog-id=211 op=UNLOAD Jan 19 13:05:37.201000 audit[4621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd80020400 a2=0 a3=4 items=0 ppid=4321 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.201000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 13:05:37.202000 audit: BPF prog-id=212 op=LOAD Jan 19 13:05:37.202000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd80020260 a2=94 a3=5 items=0 ppid=4321 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.202000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 13:05:37.202000 audit: BPF prog-id=212 op=UNLOAD Jan 19 13:05:37.202000 audit[4621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd80020260 a2=0 a3=5 items=0 ppid=4321 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.202000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 13:05:37.202000 audit: BPF prog-id=213 op=LOAD Jan 19 13:05:37.202000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd80020480 a2=94 a3=6 items=0 ppid=4321 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.202000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 13:05:37.202000 audit: BPF prog-id=213 op=UNLOAD Jan 19 13:05:37.202000 audit[4621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd80020480 a2=0 a3=6 items=0 ppid=4321 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.202000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 13:05:37.203000 audit: BPF prog-id=214 op=LOAD Jan 19 13:05:37.203000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd8001fc30 a2=94 a3=88 items=0 ppid=4321 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.203000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 13:05:37.203000 audit: BPF prog-id=215 op=LOAD Jan 19 13:05:37.203000 audit[4621]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd8001fab0 a2=94 a3=2 items=0 ppid=4321 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.203000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 13:05:37.203000 audit: BPF prog-id=215 op=UNLOAD Jan 19 13:05:37.203000 audit[4621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd8001fae0 a2=0 a3=7ffd8001fbe0 items=0 ppid=4321 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.203000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 13:05:37.203000 audit: BPF prog-id=214 op=UNLOAD Jan 19 13:05:37.203000 audit[4621]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2c6bd10 a2=0 a3=8c4c4bc3374a20f items=0 ppid=4321 pid=4621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.203000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 19 13:05:37.242000 audit: BPF prog-id=216 op=LOAD Jan 19 13:05:37.242000 audit[4624]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff55ca2190 a2=98 a3=1999999999999999 items=0 ppid=4321 pid=4624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.242000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 19 13:05:37.242000 audit: BPF prog-id=216 op=UNLOAD Jan 19 13:05:37.242000 audit[4624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff55ca2160 a3=0 items=0 ppid=4321 pid=4624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.242000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 19 13:05:37.242000 audit: BPF prog-id=217 op=LOAD Jan 19 13:05:37.242000 audit[4624]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff55ca2070 a2=94 a3=ffff items=0 ppid=4321 pid=4624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.242000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 19 13:05:37.243000 audit: BPF prog-id=217 op=UNLOAD Jan 19 13:05:37.243000 audit[4624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff55ca2070 a2=94 a3=ffff items=0 ppid=4321 pid=4624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.243000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 19 13:05:37.243000 audit: BPF prog-id=218 op=LOAD Jan 19 13:05:37.243000 audit[4624]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff55ca20b0 a2=94 a3=7fff55ca2290 items=0 ppid=4321 pid=4624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.243000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 19 13:05:37.243000 audit: BPF prog-id=218 op=UNLOAD Jan 19 13:05:37.243000 audit[4624]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff55ca20b0 a2=94 a3=7fff55ca2290 items=0 ppid=4321 pid=4624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.243000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 19 13:05:37.356211 systemd-networkd[1557]: vxlan.calico: Link UP Jan 19 13:05:37.356227 systemd-networkd[1557]: vxlan.calico: Gained carrier Jan 19 13:05:37.403000 audit: BPF prog-id=219 op=LOAD Jan 19 13:05:37.403000 audit[4653]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6cfce4d0 a2=98 a3=20 items=0 ppid=4321 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.403000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 13:05:37.404000 audit: BPF prog-id=219 op=UNLOAD Jan 19 13:05:37.404000 audit[4653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc6cfce4a0 a3=0 items=0 ppid=4321 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.404000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 13:05:37.408000 audit: BPF prog-id=220 op=LOAD Jan 19 13:05:37.408000 audit[4653]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6cfce2e0 a2=94 a3=54428f items=0 ppid=4321 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.408000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 13:05:37.408000 audit: BPF prog-id=220 op=UNLOAD Jan 19 13:05:37.408000 audit[4653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc6cfce2e0 a2=94 a3=54428f items=0 ppid=4321 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.408000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 13:05:37.408000 audit: BPF prog-id=221 op=LOAD Jan 19 13:05:37.408000 audit[4653]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6cfce310 a2=94 a3=2 items=0 ppid=4321 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.408000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 13:05:37.408000 audit: BPF prog-id=221 op=UNLOAD Jan 19 13:05:37.408000 audit[4653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc6cfce310 a2=0 a3=2 items=0 ppid=4321 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.408000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 13:05:37.408000 audit: BPF prog-id=222 op=LOAD Jan 19 13:05:37.408000 audit[4653]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc6cfce0c0 a2=94 a3=4 items=0 ppid=4321 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.408000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 13:05:37.409000 audit: BPF prog-id=222 op=UNLOAD Jan 19 13:05:37.409000 audit[4653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc6cfce0c0 a2=94 a3=4 items=0 ppid=4321 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.409000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 13:05:37.410000 audit: BPF prog-id=223 op=LOAD Jan 19 13:05:37.410000 audit[4653]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc6cfce1c0 a2=94 a3=7ffc6cfce340 items=0 ppid=4321 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.410000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 13:05:37.412000 audit: BPF prog-id=223 op=UNLOAD Jan 19 13:05:37.412000 audit[4653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc6cfce1c0 a2=0 a3=7ffc6cfce340 items=0 ppid=4321 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.412000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 13:05:37.416000 audit: BPF prog-id=224 op=LOAD Jan 19 13:05:37.416000 audit[4653]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc6cfcd8f0 a2=94 a3=2 items=0 ppid=4321 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.416000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 13:05:37.416000 audit: BPF prog-id=224 op=UNLOAD Jan 19 13:05:37.416000 audit[4653]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc6cfcd8f0 a2=0 a3=2 items=0 ppid=4321 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.416000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 13:05:37.416000 audit: BPF prog-id=225 op=LOAD Jan 19 13:05:37.416000 audit[4653]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc6cfcd9f0 a2=94 a3=30 items=0 ppid=4321 pid=4653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.416000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 19 13:05:37.430000 audit: BPF prog-id=226 op=LOAD Jan 19 13:05:37.451558 kernel: kauditd_printk_skb: 257 callbacks suppressed Jan 19 13:05:37.451825 kernel: audit: type=1300 audit(1768827937.430:675): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc33749b0 a2=98 a3=0 items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.430000 audit[4662]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffcc33749b0 a2=98 a3=0 items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.480972 kernel: audit: type=1327 audit(1768827937.430:675): proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.430000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.448000 audit: BPF prog-id=226 op=UNLOAD Jan 19 13:05:37.489060 kernel: audit: type=1334 audit(1768827937.448:676): prog-id=226 op=UNLOAD Jan 19 13:05:37.489169 kernel: audit: type=1300 audit(1768827937.448:676): arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcc3374980 a3=0 items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.448000 audit[4662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffcc3374980 a3=0 items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.448000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.498008 kernel: audit: type=1327 audit(1768827937.448:676): proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.448000 audit: BPF prog-id=227 op=LOAD Jan 19 13:05:37.448000 audit[4662]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcc33747a0 a2=94 a3=54428f items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.503685 kernel: audit: type=1334 audit(1768827937.448:677): prog-id=227 op=LOAD Jan 19 13:05:37.503759 kernel: audit: type=1300 audit(1768827937.448:677): arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcc33747a0 a2=94 a3=54428f items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.508211 containerd[1646]: time="2026-01-19T13:05:37.508137097Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:05:37.448000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.512425 containerd[1646]: time="2026-01-19T13:05:37.512329792Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 19 13:05:37.512599 containerd[1646]: time="2026-01-19T13:05:37.512547391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 19 13:05:37.512911 kernel: audit: type=1327 audit(1768827937.448:677): proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.448000 audit: BPF prog-id=227 op=UNLOAD Jan 19 13:05:37.513612 kubelet[2939]: E0119 13:05:37.513464 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 13:05:37.513911 kubelet[2939]: E0119 13:05:37.513791 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 13:05:37.448000 audit[4662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcc33747a0 a2=94 a3=54428f items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.514852 kubelet[2939]: E0119 13:05:37.514434 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq6h4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-795b45d676-wvh8b_calico-system(7526f50b-859a-4390-ae5e-37e152f03638): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 19 13:05:37.516431 kernel: audit: type=1334 audit(1768827937.448:678): prog-id=227 op=UNLOAD Jan 19 13:05:37.516953 kernel: audit: type=1300 audit(1768827937.448:678): arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcc33747a0 a2=94 a3=54428f items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.517028 kubelet[2939]: E0119 13:05:37.516651 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795b45d676-wvh8b" podUID="7526f50b-859a-4390-ae5e-37e152f03638" Jan 19 13:05:37.448000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.448000 audit: BPF prog-id=228 op=LOAD Jan 19 13:05:37.448000 audit[4662]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcc33747d0 a2=94 a3=2 items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.448000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.449000 audit: BPF prog-id=228 op=UNLOAD Jan 19 13:05:37.449000 audit[4662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcc33747d0 a2=0 a3=2 items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.449000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.653940 systemd-networkd[1557]: cali2e6c8d0fe42: Gained IPv6LL Jan 19 13:05:37.693270 kubelet[2939]: E0119 13:05:37.693186 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795b45d676-wvh8b" podUID="7526f50b-859a-4390-ae5e-37e152f03638" Jan 19 13:05:37.698645 kubelet[2939]: E0119 13:05:37.698567 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tzctk" podUID="9533a5f4-a04a-442d-b08c-488e8c9d1e7c" Jan 19 13:05:37.765000 audit[4666]: NETFILTER_CFG table=filter:123 family=2 entries=14 op=nft_register_rule pid=4666 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:37.765000 audit[4666]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe4b83d910 a2=0 a3=7ffe4b83d8fc items=0 ppid=3085 pid=4666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.765000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:37.771000 audit[4666]: NETFILTER_CFG table=nat:124 family=2 entries=20 op=nft_register_rule pid=4666 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:37.771000 audit[4666]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe4b83d910 a2=0 a3=7ffe4b83d8fc items=0 ppid=3085 pid=4666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.771000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:37.819000 audit: BPF prog-id=229 op=LOAD Jan 19 13:05:37.819000 audit[4662]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffcc3374690 a2=94 a3=1 items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.819000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.819000 audit: BPF prog-id=229 op=UNLOAD Jan 19 13:05:37.819000 audit[4662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffcc3374690 a2=94 a3=1 items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.819000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.835000 audit: BPF prog-id=230 op=LOAD Jan 19 13:05:37.835000 audit[4662]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcc3374680 a2=94 a3=4 items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.835000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.835000 audit: BPF prog-id=230 op=UNLOAD Jan 19 13:05:37.835000 audit[4662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcc3374680 a2=0 a3=4 items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.835000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.835000 audit: BPF prog-id=231 op=LOAD Jan 19 13:05:37.835000 audit[4662]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffcc33744e0 a2=94 a3=5 items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.835000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.835000 audit: BPF prog-id=231 op=UNLOAD Jan 19 13:05:37.835000 audit[4662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffcc33744e0 a2=0 a3=5 items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.835000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.836000 audit: BPF prog-id=232 op=LOAD Jan 19 13:05:37.836000 audit[4662]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcc3374700 a2=94 a3=6 items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.836000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.836000 audit: BPF prog-id=232 op=UNLOAD Jan 19 13:05:37.836000 audit[4662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffcc3374700 a2=0 a3=6 items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.836000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.836000 audit: BPF prog-id=233 op=LOAD Jan 19 13:05:37.836000 audit[4662]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffcc3373eb0 a2=94 a3=88 items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.836000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.837000 audit: BPF prog-id=234 op=LOAD Jan 19 13:05:37.837000 audit[4662]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffcc3373d30 a2=94 a3=2 items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.837000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.837000 audit: BPF prog-id=234 op=UNLOAD Jan 19 13:05:37.837000 audit[4662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffcc3373d60 a2=0 a3=7ffcc3373e60 items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.837000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.837000 audit: BPF prog-id=233 op=UNLOAD Jan 19 13:05:37.837000 audit[4662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=c5ccd10 a2=0 a3=bcc1f6b98c9a8a4e items=0 ppid=4321 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.837000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 19 13:05:37.851000 audit: BPF prog-id=225 op=UNLOAD Jan 19 13:05:37.851000 audit[4321]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0007e24c0 a2=0 a3=0 items=0 ppid=4303 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.851000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 19 13:05:37.968000 audit[4694]: NETFILTER_CFG table=nat:125 family=2 entries=15 op=nft_register_chain pid=4694 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 13:05:37.968000 audit[4694]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff2adc8050 a2=0 a3=7fff2adc803c items=0 ppid=4321 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.968000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 13:05:37.970000 audit[4695]: NETFILTER_CFG table=mangle:126 family=2 entries=16 op=nft_register_chain pid=4695 subj=system_u:system_r:kernel_t:s0 
comm="iptables-nft-re" Jan 19 13:05:37.970000 audit[4695]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc5636e8c0 a2=0 a3=7ffc5636e8ac items=0 ppid=4321 pid=4695 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.970000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 13:05:37.979000 audit[4693]: NETFILTER_CFG table=raw:127 family=2 entries=21 op=nft_register_chain pid=4693 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 13:05:37.979000 audit[4693]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd3c0d8c30 a2=0 a3=7ffd3c0d8c1c items=0 ppid=4321 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.979000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 13:05:37.994000 audit[4698]: NETFILTER_CFG table=filter:128 family=2 entries=192 op=nft_register_chain pid=4698 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 13:05:37.994000 audit[4698]: SYSCALL arch=c000003e syscall=46 success=yes exit=111724 a0=3 a1=7ffc420226a0 a2=0 a3=7ffc4202268c items=0 ppid=4321 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:37.994000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 13:05:39.317151 systemd-networkd[1557]: vxlan.calico: Gained IPv6LL Jan 19 13:05:43.094330 containerd[1646]: time="2026-01-19T13:05:43.094188720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb946844-dx6t6,Uid:ed22f491-3777-46a9-8e11-3aad3f6a2fdc,Namespace:calico-apiserver,Attempt:0,}" Jan 19 13:05:43.298521 systemd-networkd[1557]: cali48f4abf816c: Link UP Jan 19 13:05:43.299805 systemd-networkd[1557]: cali48f4abf816c: Gained carrier Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.167 [INFO][4719] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--dx6t6-eth0 calico-apiserver-77bb946844- calico-apiserver ed22f491-3777-46a9-8e11-3aad3f6a2fdc 854 0 2026-01-19 13:04:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77bb946844 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-hsmf0.gb1.brightbox.com calico-apiserver-77bb946844-dx6t6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali48f4abf816c [] [] }} ContainerID="c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" Namespace="calico-apiserver" Pod="calico-apiserver-77bb946844-dx6t6" 
WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--dx6t6-" Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.168 [INFO][4719] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" Namespace="calico-apiserver" Pod="calico-apiserver-77bb946844-dx6t6" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--dx6t6-eth0" Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.226 [INFO][4730] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" HandleID="k8s-pod-network.c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" Workload="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--dx6t6-eth0" Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.226 [INFO][4730] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" HandleID="k8s-pod-network.c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" Workload="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--dx6t6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cefe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-hsmf0.gb1.brightbox.com", "pod":"calico-apiserver-77bb946844-dx6t6", "timestamp":"2026-01-19 13:05:43.226402588 +0000 UTC"}, Hostname:"srv-hsmf0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.226 [INFO][4730] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.226 [INFO][4730] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.227 [INFO][4730] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hsmf0.gb1.brightbox.com' Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.240 [INFO][4730] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.249 [INFO][4730] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.259 [INFO][4730] ipam/ipam.go 511: Trying affinity for 192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.263 [INFO][4730] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.267 [INFO][4730] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.267 [INFO][4730] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.270 [INFO][4730] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.278 [INFO][4730] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.288 [INFO][4730] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.35.197/26] block=192.168.35.192/26 handle="k8s-pod-network.c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.288 [INFO][4730] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.197/26] handle="k8s-pod-network.c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.289 [INFO][4730] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
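The IPAM entries just above show Calico acquiring the host-wide lock, confirming this node's affinity for the block 192.168.35.192/26, and claiming 192.168.35.197 for the new calico-apiserver pod. As a quick sanity check of that block arithmetic, a standard-library Python sketch (illustrative only, not Calico's own code):

#!/usr/bin/env python3
# Verify that the claimed pod address falls inside the node's affine /26 block.
import ipaddress

block = ipaddress.ip_network("192.168.35.192/26")
claimed = ipaddress.ip_address("192.168.35.197")

print(block.num_addresses)   # 64 addresses in the block (.192 through .255)
print(block[0], block[-1])   # 192.168.35.192 192.168.35.255
print(claimed in block)      # True, consistent with "Successfully claimed IPs" above
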
Jan 19 13:05:43.323197 containerd[1646]: 2026-01-19 13:05:43.289 [INFO][4730] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.35.197/26] IPv6=[] ContainerID="c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" HandleID="k8s-pod-network.c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" Workload="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--dx6t6-eth0" Jan 19 13:05:43.326027 containerd[1646]: 2026-01-19 13:05:43.293 [INFO][4719] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" Namespace="calico-apiserver" Pod="calico-apiserver-77bb946844-dx6t6" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--dx6t6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--dx6t6-eth0", GenerateName:"calico-apiserver-77bb946844-", Namespace:"calico-apiserver", SelfLink:"", UID:"ed22f491-3777-46a9-8e11-3aad3f6a2fdc", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 13, 4, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bb946844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hsmf0.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-77bb946844-dx6t6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali48f4abf816c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 13:05:43.326027 containerd[1646]: 2026-01-19 13:05:43.293 [INFO][4719] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.197/32] ContainerID="c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" Namespace="calico-apiserver" Pod="calico-apiserver-77bb946844-dx6t6" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--dx6t6-eth0" Jan 19 13:05:43.326027 containerd[1646]: 2026-01-19 13:05:43.293 [INFO][4719] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48f4abf816c ContainerID="c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" Namespace="calico-apiserver" Pod="calico-apiserver-77bb946844-dx6t6" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--dx6t6-eth0" Jan 19 13:05:43.326027 containerd[1646]: 2026-01-19 13:05:43.299 [INFO][4719] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" Namespace="calico-apiserver" Pod="calico-apiserver-77bb946844-dx6t6" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--dx6t6-eth0" Jan 19 13:05:43.326027 containerd[1646]: 2026-01-19 
13:05:43.301 [INFO][4719] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" Namespace="calico-apiserver" Pod="calico-apiserver-77bb946844-dx6t6" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--dx6t6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--dx6t6-eth0", GenerateName:"calico-apiserver-77bb946844-", Namespace:"calico-apiserver", SelfLink:"", UID:"ed22f491-3777-46a9-8e11-3aad3f6a2fdc", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 13, 4, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bb946844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hsmf0.gb1.brightbox.com", ContainerID:"c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb", Pod:"calico-apiserver-77bb946844-dx6t6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali48f4abf816c", MAC:"02:01:f9:79:58:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 13:05:43.326027 containerd[1646]: 2026-01-19 13:05:43.315 [INFO][4719] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" Namespace="calico-apiserver" Pod="calico-apiserver-77bb946844-dx6t6" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--dx6t6-eth0" Jan 19 13:05:43.376000 audit[4748]: NETFILTER_CFG table=filter:129 family=2 entries=62 op=nft_register_chain pid=4748 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 13:05:43.387768 kernel: kauditd_printk_skb: 64 callbacks suppressed Jan 19 13:05:43.388037 kernel: audit: type=1325 audit(1768827943.376:700): table=filter:129 family=2 entries=62 op=nft_register_chain pid=4748 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 13:05:43.397968 kernel: audit: type=1300 audit(1768827943.376:700): arch=c000003e syscall=46 success=yes exit=31772 a0=3 a1=7fff5edf49b0 a2=0 a3=7fff5edf499c items=0 ppid=4321 pid=4748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:43.376000 audit[4748]: SYSCALL arch=c000003e syscall=46 success=yes exit=31772 a0=3 a1=7fff5edf49b0 a2=0 a3=7fff5edf499c items=0 ppid=4321 pid=4748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
19 13:05:43.376000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 13:05:43.402843 kernel: audit: type=1327 audit(1768827943.376:700): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 13:05:43.409275 containerd[1646]: time="2026-01-19T13:05:43.409159180Z" level=info msg="connecting to shim c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb" address="unix:///run/containerd/s/217cbdbeadbaf0110f343597cc1b885aaa53dea51040b3b090ece8fd3f9d797b" namespace=k8s.io protocol=ttrpc version=3 Jan 19 13:05:43.461214 systemd[1]: Started cri-containerd-c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb.scope - libcontainer container c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb. Jan 19 13:05:43.484000 audit: BPF prog-id=235 op=LOAD Jan 19 13:05:43.486868 kernel: audit: type=1334 audit(1768827943.484:701): prog-id=235 op=LOAD Jan 19 13:05:43.486000 audit: BPF prog-id=236 op=LOAD Jan 19 13:05:43.486000 audit[4769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4757 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:43.490852 kernel: audit: type=1334 audit(1768827943.486:702): prog-id=236 op=LOAD Jan 19 13:05:43.490908 kernel: audit: type=1300 audit(1768827943.486:702): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4757 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:43.486000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333613933376636303230376462303661643564356539313030323739 Jan 19 13:05:43.487000 audit: BPF prog-id=236 op=UNLOAD Jan 19 13:05:43.501663 kernel: audit: type=1327 audit(1768827943.486:702): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333613933376636303230376462303661643564356539313030323739 Jan 19 13:05:43.501762 kernel: audit: type=1334 audit(1768827943.487:703): prog-id=236 op=UNLOAD Jan 19 13:05:43.487000 audit[4769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4757 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:43.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333613933376636303230376462303661643564356539313030323739 Jan 19 13:05:43.509127 kernel: audit: type=1300 audit(1768827943.487:703): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4757 pid=4769 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:43.511559 kernel: audit: type=1327 audit(1768827943.487:703): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333613933376636303230376462303661643564356539313030323739 Jan 19 13:05:43.487000 audit: BPF prog-id=237 op=LOAD Jan 19 13:05:43.487000 audit[4769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4757 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:43.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333613933376636303230376462303661643564356539313030323739 Jan 19 13:05:43.487000 audit: BPF prog-id=238 op=LOAD Jan 19 13:05:43.487000 audit[4769]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4757 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:43.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333613933376636303230376462303661643564356539313030323739 Jan 19 13:05:43.487000 audit: BPF prog-id=238 op=UNLOAD Jan 19 13:05:43.487000 audit[4769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4757 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:43.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333613933376636303230376462303661643564356539313030323739 Jan 19 13:05:43.487000 audit: BPF prog-id=237 op=UNLOAD Jan 19 13:05:43.487000 audit[4769]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4757 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:43.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333613933376636303230376462303661643564356539313030323739 Jan 19 13:05:43.487000 audit: BPF prog-id=239 op=LOAD Jan 19 13:05:43.487000 audit[4769]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4757 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:43.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333613933376636303230376462303661643564356539313030323739 Jan 19 13:05:43.562266 containerd[1646]: time="2026-01-19T13:05:43.562207818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb946844-dx6t6,Uid:ed22f491-3777-46a9-8e11-3aad3f6a2fdc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c3a937f60207db06ad5d5e9100279ab40afcbb8d14ad8157927450ea142ff4bb\"" Jan 19 13:05:43.565040 containerd[1646]: time="2026-01-19T13:05:43.565005540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 13:05:43.881669 containerd[1646]: time="2026-01-19T13:05:43.881563785Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:05:43.883162 containerd[1646]: time="2026-01-19T13:05:43.882942865Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 13:05:43.883162 containerd[1646]: time="2026-01-19T13:05:43.883061116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 13:05:43.883640 kubelet[2939]: E0119 13:05:43.883556 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 13:05:43.884207 kubelet[2939]: E0119 13:05:43.883665 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 13:05:43.884207 kubelet[2939]: E0119 13:05:43.884083 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ktl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bb946844-dx6t6_calico-apiserver(ed22f491-3777-46a9-8e11-3aad3f6a2fdc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 13:05:43.885392 kubelet[2939]: E0119 13:05:43.885348 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb946844-dx6t6" podUID="ed22f491-3777-46a9-8e11-3aad3f6a2fdc" Jan 19 13:05:44.093601 containerd[1646]: time="2026-01-19T13:05:44.093519415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77b44585c9-kqfd8,Uid:2a6c0b6c-6346-4ea5-adab-326e38e7dbe6,Namespace:calico-system,Attempt:0,}" Jan 19 13:05:44.274340 systemd-networkd[1557]: cali1c20ffcc326: Link UP Jan 19 13:05:44.276549 systemd-networkd[1557]: cali1c20ffcc326: Gained carrier Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.162 [INFO][4793] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hsmf0.gb1.brightbox.com-k8s-calico--kube--controllers--77b44585c9--kqfd8-eth0 calico-kube-controllers-77b44585c9- calico-system 2a6c0b6c-6346-4ea5-adab-326e38e7dbe6 855 0 2026-01-19 13:05:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77b44585c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-hsmf0.gb1.brightbox.com calico-kube-controllers-77b44585c9-kqfd8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1c20ffcc326 [] [] }} ContainerID="4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" Namespace="calico-system" Pod="calico-kube-controllers-77b44585c9-kqfd8" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--kube--controllers--77b44585c9--kqfd8-" Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.163 [INFO][4793] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" Namespace="calico-system" Pod="calico-kube-controllers-77b44585c9-kqfd8" 
WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--kube--controllers--77b44585c9--kqfd8-eth0" Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.209 [INFO][4806] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" HandleID="k8s-pod-network.4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" Workload="srv--hsmf0.gb1.brightbox.com-k8s-calico--kube--controllers--77b44585c9--kqfd8-eth0" Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.210 [INFO][4806] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" HandleID="k8s-pod-network.4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" Workload="srv--hsmf0.gb1.brightbox.com-k8s-calico--kube--controllers--77b44585c9--kqfd8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-hsmf0.gb1.brightbox.com", "pod":"calico-kube-controllers-77b44585c9-kqfd8", "timestamp":"2026-01-19 13:05:44.209627699 +0000 UTC"}, Hostname:"srv-hsmf0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.210 [INFO][4806] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.210 [INFO][4806] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.210 [INFO][4806] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hsmf0.gb1.brightbox.com' Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.222 [INFO][4806] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.230 [INFO][4806] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.239 [INFO][4806] ipam/ipam.go 511: Trying affinity for 192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.242 [INFO][4806] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.246 [INFO][4806] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.246 [INFO][4806] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.248 [INFO][4806] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292 Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.254 [INFO][4806] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.35.192/26 
handle="k8s-pod-network.4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.265 [INFO][4806] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.35.198/26] block=192.168.35.192/26 handle="k8s-pod-network.4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.265 [INFO][4806] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.198/26] handle="k8s-pod-network.4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.265 [INFO][4806] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 19 13:05:44.305809 containerd[1646]: 2026-01-19 13:05:44.265 [INFO][4806] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.35.198/26] IPv6=[] ContainerID="4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" HandleID="k8s-pod-network.4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" Workload="srv--hsmf0.gb1.brightbox.com-k8s-calico--kube--controllers--77b44585c9--kqfd8-eth0" Jan 19 13:05:44.309791 containerd[1646]: 2026-01-19 13:05:44.268 [INFO][4793] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" Namespace="calico-system" Pod="calico-kube-controllers-77b44585c9-kqfd8" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--kube--controllers--77b44585c9--kqfd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hsmf0.gb1.brightbox.com-k8s-calico--kube--controllers--77b44585c9--kqfd8-eth0", GenerateName:"calico-kube-controllers-77b44585c9-", Namespace:"calico-system", SelfLink:"", UID:"2a6c0b6c-6346-4ea5-adab-326e38e7dbe6", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 13, 5, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77b44585c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hsmf0.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-77b44585c9-kqfd8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1c20ffcc326", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 13:05:44.309791 containerd[1646]: 2026-01-19 13:05:44.268 [INFO][4793] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.198/32] ContainerID="4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" Namespace="calico-system" Pod="calico-kube-controllers-77b44585c9-kqfd8" 
WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--kube--controllers--77b44585c9--kqfd8-eth0" Jan 19 13:05:44.309791 containerd[1646]: 2026-01-19 13:05:44.268 [INFO][4793] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c20ffcc326 ContainerID="4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" Namespace="calico-system" Pod="calico-kube-controllers-77b44585c9-kqfd8" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--kube--controllers--77b44585c9--kqfd8-eth0" Jan 19 13:05:44.309791 containerd[1646]: 2026-01-19 13:05:44.279 [INFO][4793] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" Namespace="calico-system" Pod="calico-kube-controllers-77b44585c9-kqfd8" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--kube--controllers--77b44585c9--kqfd8-eth0" Jan 19 13:05:44.309791 containerd[1646]: 2026-01-19 13:05:44.282 [INFO][4793] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" Namespace="calico-system" Pod="calico-kube-controllers-77b44585c9-kqfd8" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--kube--controllers--77b44585c9--kqfd8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hsmf0.gb1.brightbox.com-k8s-calico--kube--controllers--77b44585c9--kqfd8-eth0", GenerateName:"calico-kube-controllers-77b44585c9-", Namespace:"calico-system", SelfLink:"", UID:"2a6c0b6c-6346-4ea5-adab-326e38e7dbe6", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 13, 5, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77b44585c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hsmf0.gb1.brightbox.com", ContainerID:"4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292", Pod:"calico-kube-controllers-77b44585c9-kqfd8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1c20ffcc326", MAC:"aa:a8:07:15:69:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 13:05:44.309791 containerd[1646]: 2026-01-19 13:05:44.300 [INFO][4793] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" Namespace="calico-system" Pod="calico-kube-controllers-77b44585c9-kqfd8" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--kube--controllers--77b44585c9--kqfd8-eth0" Jan 19 13:05:44.342000 audit[4819]: NETFILTER_CFG table=filter:130 family=2 entries=52 op=nft_register_chain pid=4819 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 13:05:44.342000 
audit[4819]: SYSCALL arch=c000003e syscall=46 success=yes exit=24328 a0=3 a1=7ffcb0882470 a2=0 a3=7ffcb088245c items=0 ppid=4321 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:44.342000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 13:05:44.366846 containerd[1646]: time="2026-01-19T13:05:44.366770233Z" level=info msg="connecting to shim 4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292" address="unix:///run/containerd/s/704c17ab669a28065a4493ad2df3e29b3a4c1149df642d768920b93fbfa7134c" namespace=k8s.io protocol=ttrpc version=3 Jan 19 13:05:44.418234 systemd[1]: Started cri-containerd-4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292.scope - libcontainer container 4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292. Jan 19 13:05:44.442000 audit: BPF prog-id=240 op=LOAD Jan 19 13:05:44.443000 audit: BPF prog-id=241 op=LOAD Jan 19 13:05:44.443000 audit[4840]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4829 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:44.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465306131393162626366653930653031326536336635316635623963 Jan 19 13:05:44.444000 audit: BPF prog-id=241 op=UNLOAD Jan 19 13:05:44.444000 audit[4840]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4829 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:44.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465306131393162626366653930653031326536336635316635623963 Jan 19 13:05:44.444000 audit: BPF prog-id=242 op=LOAD Jan 19 13:05:44.444000 audit[4840]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4829 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:44.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465306131393162626366653930653031326536336635316635623963 Jan 19 13:05:44.444000 audit: BPF prog-id=243 op=LOAD Jan 19 13:05:44.444000 audit[4840]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4829 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
19 13:05:44.444000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465306131393162626366653930653031326536336635316635623963 Jan 19 13:05:44.445000 audit: BPF prog-id=243 op=UNLOAD Jan 19 13:05:44.445000 audit[4840]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4829 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:44.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465306131393162626366653930653031326536336635316635623963 Jan 19 13:05:44.446000 audit: BPF prog-id=242 op=UNLOAD Jan 19 13:05:44.446000 audit[4840]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4829 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:44.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465306131393162626366653930653031326536336635316635623963 Jan 19 13:05:44.446000 audit: BPF prog-id=244 op=LOAD Jan 19 13:05:44.446000 audit[4840]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4829 pid=4840 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:44.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465306131393162626366653930653031326536336635316635623963 Jan 19 13:05:44.510494 containerd[1646]: time="2026-01-19T13:05:44.510199644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77b44585c9-kqfd8,Uid:2a6c0b6c-6346-4ea5-adab-326e38e7dbe6,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e0a191bbcfe90e012e63f51f5b9ccd80b664f512d0af3b880c7b17a1c4d6292\"" Jan 19 13:05:44.513461 containerd[1646]: time="2026-01-19T13:05:44.512711683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 19 13:05:44.725030 kubelet[2939]: E0119 13:05:44.724795 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb946844-dx6t6" podUID="ed22f491-3777-46a9-8e11-3aad3f6a2fdc" Jan 19 13:05:44.764000 audit[4866]: NETFILTER_CFG table=filter:131 family=2 entries=14 op=nft_register_rule pid=4866 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:44.764000 audit[4866]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffebc828a30 a2=0 a3=7ffebc828a1c items=0 ppid=3085 pid=4866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:44.764000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:44.776000 audit[4866]: NETFILTER_CFG table=nat:132 family=2 entries=20 op=nft_register_rule pid=4866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:44.776000 audit[4866]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffebc828a30 a2=0 a3=7ffebc828a1c items=0 ppid=3085 pid=4866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:44.776000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:44.825601 containerd[1646]: time="2026-01-19T13:05:44.825523747Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:05:44.830298 containerd[1646]: time="2026-01-19T13:05:44.830225996Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 19 13:05:44.830498 containerd[1646]: time="2026-01-19T13:05:44.830353336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 19 13:05:44.830740 kubelet[2939]: E0119 13:05:44.830682 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 13:05:44.830920 kubelet[2939]: E0119 13:05:44.830887 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 13:05:44.831244 kubelet[2939]: E0119 13:05:44.831169 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jfnhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77b44585c9-kqfd8_calico-system(2a6c0b6c-6346-4ea5-adab-326e38e7dbe6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 19 13:05:44.832931 kubelet[2939]: E0119 13:05:44.832880 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77b44585c9-kqfd8" podUID="2a6c0b6c-6346-4ea5-adab-326e38e7dbe6" Jan 19 13:05:45.205022 systemd-networkd[1557]: cali48f4abf816c: Gained IPv6LL Jan 19 13:05:45.727148 kubelet[2939]: E0119 13:05:45.726614 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77b44585c9-kqfd8" podUID="2a6c0b6c-6346-4ea5-adab-326e38e7dbe6" Jan 19 13:05:46.037039 systemd-networkd[1557]: cali1c20ffcc326: Gained IPv6LL Jan 19 13:05:46.093103 containerd[1646]: time="2026-01-19T13:05:46.093020376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d5tkr,Uid:08044186-a9c0-43a1-9659-a816af071539,Namespace:kube-system,Attempt:0,}" Jan 19 13:05:46.290126 systemd-networkd[1557]: calie00b3504407: Link UP Jan 19 13:05:46.292072 systemd-networkd[1557]: calie00b3504407: Gained carrier Jan 19 13:05:46.318745 containerd[1646]: 2026-01-19 13:05:46.168 [INFO][4869] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--d5tkr-eth0 coredns-668d6bf9bc- kube-system 08044186-a9c0-43a1-9659-a816af071539 853 0 2026-01-19 13:04:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-hsmf0.gb1.brightbox.com coredns-668d6bf9bc-d5tkr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie00b3504407 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-d5tkr" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--d5tkr-" Jan 19 13:05:46.318745 containerd[1646]: 2026-01-19 13:05:46.169 [INFO][4869] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-d5tkr" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--d5tkr-eth0" Jan 19 13:05:46.318745 containerd[1646]: 2026-01-19 13:05:46.224 [INFO][4882] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" HandleID="k8s-pod-network.67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" Workload="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--d5tkr-eth0" Jan 19 13:05:46.318745 containerd[1646]: 2026-01-19 13:05:46.224 [INFO][4882] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" HandleID="k8s-pod-network.67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" Workload="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--d5tkr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-hsmf0.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-d5tkr", "timestamp":"2026-01-19 13:05:46.224724003 +0000 UTC"}, Hostname:"srv-hsmf0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 13:05:46.318745 
containerd[1646]: 2026-01-19 13:05:46.225 [INFO][4882] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 13:05:46.318745 containerd[1646]: 2026-01-19 13:05:46.225 [INFO][4882] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 19 13:05:46.318745 containerd[1646]: 2026-01-19 13:05:46.225 [INFO][4882] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hsmf0.gb1.brightbox.com' Jan 19 13:05:46.318745 containerd[1646]: 2026-01-19 13:05:46.237 [INFO][4882] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:46.318745 containerd[1646]: 2026-01-19 13:05:46.248 [INFO][4882] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:46.318745 containerd[1646]: 2026-01-19 13:05:46.255 [INFO][4882] ipam/ipam.go 511: Trying affinity for 192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:46.318745 containerd[1646]: 2026-01-19 13:05:46.259 [INFO][4882] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:46.318745 containerd[1646]: 2026-01-19 13:05:46.262 [INFO][4882] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:46.318745 containerd[1646]: 2026-01-19 13:05:46.262 [INFO][4882] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:46.318745 containerd[1646]: 2026-01-19 13:05:46.265 [INFO][4882] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6 Jan 19 13:05:46.318745 containerd[1646]: 2026-01-19 13:05:46.270 [INFO][4882] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:46.318745 containerd[1646]: 2026-01-19 13:05:46.280 [INFO][4882] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.35.199/26] block=192.168.35.192/26 handle="k8s-pod-network.67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:46.318745 containerd[1646]: 2026-01-19 13:05:46.280 [INFO][4882] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.199/26] handle="k8s-pod-network.67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:46.318745 containerd[1646]: 2026-01-19 13:05:46.280 [INFO][4882] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 19 13:05:46.318745 containerd[1646]: 2026-01-19 13:05:46.280 [INFO][4882] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.35.199/26] IPv6=[] ContainerID="67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" HandleID="k8s-pod-network.67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" Workload="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--d5tkr-eth0" Jan 19 13:05:46.321907 containerd[1646]: 2026-01-19 13:05:46.284 [INFO][4869] cni-plugin/k8s.go 418: Populated endpoint ContainerID="67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-d5tkr" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--d5tkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--d5tkr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"08044186-a9c0-43a1-9659-a816af071539", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 13, 4, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hsmf0.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-d5tkr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie00b3504407", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 13:05:46.321907 containerd[1646]: 2026-01-19 13:05:46.284 [INFO][4869] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.199/32] ContainerID="67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-d5tkr" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--d5tkr-eth0" Jan 19 13:05:46.321907 containerd[1646]: 2026-01-19 13:05:46.284 [INFO][4869] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie00b3504407 ContainerID="67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-d5tkr" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--d5tkr-eth0" Jan 19 13:05:46.321907 containerd[1646]: 2026-01-19 13:05:46.292 [INFO][4869] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-d5tkr" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--d5tkr-eth0" Jan 19 13:05:46.321907 containerd[1646]: 2026-01-19 13:05:46.293 [INFO][4869] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-d5tkr" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--d5tkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--d5tkr-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"08044186-a9c0-43a1-9659-a816af071539", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 13, 4, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hsmf0.gb1.brightbox.com", ContainerID:"67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6", Pod:"coredns-668d6bf9bc-d5tkr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie00b3504407", MAC:"82:65:7c:0b:d5:2e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 13:05:46.321907 containerd[1646]: 2026-01-19 13:05:46.309 [INFO][4869] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-d5tkr" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-coredns--668d6bf9bc--d5tkr-eth0" Jan 19 13:05:46.358000 audit[4899]: NETFILTER_CFG table=filter:133 family=2 entries=58 op=nft_register_chain pid=4899 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 13:05:46.358000 audit[4899]: SYSCALL arch=c000003e syscall=46 success=yes exit=26760 a0=3 a1=7fff82ed0240 a2=0 a3=7fff82ed022c items=0 ppid=4321 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:46.358000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 13:05:46.402543 containerd[1646]: time="2026-01-19T13:05:46.402483318Z" 
level=info msg="connecting to shim 67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6" address="unix:///run/containerd/s/0b7f9f894b14887ba8950fe0447e8b1d89eee600c69e8d1780db7eb067760b64" namespace=k8s.io protocol=ttrpc version=3 Jan 19 13:05:46.443198 systemd[1]: Started cri-containerd-67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6.scope - libcontainer container 67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6. Jan 19 13:05:46.463000 audit: BPF prog-id=245 op=LOAD Jan 19 13:05:46.464000 audit: BPF prog-id=246 op=LOAD Jan 19 13:05:46.464000 audit[4919]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4908 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:46.464000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637623436663531373432653464633034313236316262316638663262 Jan 19 13:05:46.465000 audit: BPF prog-id=246 op=UNLOAD Jan 19 13:05:46.465000 audit[4919]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4908 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:46.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637623436663531373432653464633034313236316262316638663262 Jan 19 13:05:46.465000 audit: BPF prog-id=247 op=LOAD Jan 19 13:05:46.465000 audit[4919]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4908 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:46.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637623436663531373432653464633034313236316262316638663262 Jan 19 13:05:46.465000 audit: BPF prog-id=248 op=LOAD Jan 19 13:05:46.465000 audit[4919]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4908 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:46.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637623436663531373432653464633034313236316262316638663262 Jan 19 13:05:46.465000 audit: BPF prog-id=248 op=UNLOAD Jan 19 13:05:46.465000 audit[4919]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4908 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:46.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637623436663531373432653464633034313236316262316638663262 Jan 19 13:05:46.465000 audit: BPF prog-id=247 op=UNLOAD Jan 19 13:05:46.465000 audit[4919]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4908 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:46.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637623436663531373432653464633034313236316262316638663262 Jan 19 13:05:46.465000 audit: BPF prog-id=249 op=LOAD Jan 19 13:05:46.465000 audit[4919]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4908 pid=4919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:46.465000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637623436663531373432653464633034313236316262316638663262 Jan 19 13:05:46.516967 containerd[1646]: time="2026-01-19T13:05:46.516919043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d5tkr,Uid:08044186-a9c0-43a1-9659-a816af071539,Namespace:kube-system,Attempt:0,} returns sandbox id \"67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6\"" Jan 19 13:05:46.521098 containerd[1646]: time="2026-01-19T13:05:46.521045948Z" level=info msg="CreateContainer within sandbox \"67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 19 13:05:46.534534 containerd[1646]: time="2026-01-19T13:05:46.533891602Z" level=info msg="Container a3eaa26996c9c47c6d86751ddecd1d9dc252f12187cb4f54f597479dffa4e0fe: CDI devices from CRI Config.CDIDevices: []" Jan 19 13:05:46.539802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1378146234.mount: Deactivated successfully. 
Jan 19 13:05:46.547889 containerd[1646]: time="2026-01-19T13:05:46.546973710Z" level=info msg="CreateContainer within sandbox \"67b46f51742e4dc041261bb1f8f2b2081455c8c1421fb6341517fce9137647e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a3eaa26996c9c47c6d86751ddecd1d9dc252f12187cb4f54f597479dffa4e0fe\"" Jan 19 13:05:46.548043 containerd[1646]: time="2026-01-19T13:05:46.547926775Z" level=info msg="StartContainer for \"a3eaa26996c9c47c6d86751ddecd1d9dc252f12187cb4f54f597479dffa4e0fe\"" Jan 19 13:05:46.549745 containerd[1646]: time="2026-01-19T13:05:46.549709330Z" level=info msg="connecting to shim a3eaa26996c9c47c6d86751ddecd1d9dc252f12187cb4f54f597479dffa4e0fe" address="unix:///run/containerd/s/0b7f9f894b14887ba8950fe0447e8b1d89eee600c69e8d1780db7eb067760b64" protocol=ttrpc version=3 Jan 19 13:05:46.582243 systemd[1]: Started cri-containerd-a3eaa26996c9c47c6d86751ddecd1d9dc252f12187cb4f54f597479dffa4e0fe.scope - libcontainer container a3eaa26996c9c47c6d86751ddecd1d9dc252f12187cb4f54f597479dffa4e0fe. Jan 19 13:05:46.604000 audit: BPF prog-id=250 op=LOAD Jan 19 13:05:46.605000 audit: BPF prog-id=251 op=LOAD Jan 19 13:05:46.605000 audit[4945]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4908 pid=4945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:46.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133656161323639393663396334376336643836373531646465636431 Jan 19 13:05:46.605000 audit: BPF prog-id=251 op=UNLOAD Jan 19 13:05:46.605000 audit[4945]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4908 pid=4945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:46.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133656161323639393663396334376336643836373531646465636431 Jan 19 13:05:46.605000 audit: BPF prog-id=252 op=LOAD Jan 19 13:05:46.605000 audit[4945]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4908 pid=4945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:46.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133656161323639393663396334376336643836373531646465636431 Jan 19 13:05:46.605000 audit: BPF prog-id=253 op=LOAD Jan 19 13:05:46.605000 audit[4945]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4908 pid=4945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:46.605000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133656161323639393663396334376336643836373531646465636431 Jan 19 13:05:46.605000 audit: BPF prog-id=253 op=UNLOAD Jan 19 13:05:46.605000 audit[4945]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4908 pid=4945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:46.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133656161323639393663396334376336643836373531646465636431 Jan 19 13:05:46.605000 audit: BPF prog-id=252 op=UNLOAD Jan 19 13:05:46.605000 audit[4945]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4908 pid=4945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:46.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133656161323639393663396334376336643836373531646465636431 Jan 19 13:05:46.605000 audit: BPF prog-id=254 op=LOAD Jan 19 13:05:46.605000 audit[4945]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4908 pid=4945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:46.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133656161323639393663396334376336643836373531646465636431 Jan 19 13:05:46.637382 containerd[1646]: time="2026-01-19T13:05:46.637242629Z" level=info msg="StartContainer for \"a3eaa26996c9c47c6d86751ddecd1d9dc252f12187cb4f54f597479dffa4e0fe\" returns successfully" Jan 19 13:05:46.777000 audit[4979]: NETFILTER_CFG table=filter:134 family=2 entries=14 op=nft_register_rule pid=4979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:46.777000 audit[4979]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd6ceb8e50 a2=0 a3=7ffd6ceb8e3c items=0 ppid=3085 pid=4979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:46.777000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:46.806000 audit[4979]: NETFILTER_CFG table=nat:135 family=2 entries=56 op=nft_register_chain pid=4979 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:46.806000 audit[4979]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd6ceb8e50 a2=0 a3=7ffd6ceb8e3c items=0 ppid=3085 pid=4979 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:46.806000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:47.093274 containerd[1646]: time="2026-01-19T13:05:47.092855551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb946844-22dld,Uid:207bff47-91b8-40f6-a83c-1de3cb3c792c,Namespace:calico-apiserver,Attempt:0,}" Jan 19 13:05:47.277015 systemd-networkd[1557]: cali667bc477211: Link UP Jan 19 13:05:47.277443 systemd-networkd[1557]: cali667bc477211: Gained carrier Jan 19 13:05:47.299321 kubelet[2939]: I0119 13:05:47.298970 2939 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d5tkr" podStartSLOduration=63.298938384 podStartE2EDuration="1m3.298938384s" podCreationTimestamp="2026-01-19 13:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-19 13:05:46.749430926 +0000 UTC m=+68.902325553" watchObservedRunningTime="2026-01-19 13:05:47.298938384 +0000 UTC m=+69.451833007" Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.170 [INFO][4982] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--22dld-eth0 calico-apiserver-77bb946844- calico-apiserver 207bff47-91b8-40f6-a83c-1de3cb3c792c 858 0 2026-01-19 13:04:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77bb946844 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-hsmf0.gb1.brightbox.com calico-apiserver-77bb946844-22dld eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali667bc477211 [] [] }} ContainerID="1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" Namespace="calico-apiserver" Pod="calico-apiserver-77bb946844-22dld" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--22dld-" Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.171 [INFO][4982] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" Namespace="calico-apiserver" Pod="calico-apiserver-77bb946844-22dld" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--22dld-eth0" Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.214 [INFO][4993] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" HandleID="k8s-pod-network.1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" Workload="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--22dld-eth0" Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.214 [INFO][4993] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" HandleID="k8s-pod-network.1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" Workload="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--22dld-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb810), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-hsmf0.gb1.brightbox.com", "pod":"calico-apiserver-77bb946844-22dld", "timestamp":"2026-01-19 13:05:47.214501023 +0000 UTC"}, Hostname:"srv-hsmf0.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.214 [INFO][4993] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.214 [INFO][4993] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.214 [INFO][4993] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-hsmf0.gb1.brightbox.com' Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.225 [INFO][4993] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.232 [INFO][4993] ipam/ipam.go 394: Looking up existing affinities for host host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.239 [INFO][4993] ipam/ipam.go 511: Trying affinity for 192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.242 [INFO][4993] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.245 [INFO][4993] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.192/26 host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.245 [INFO][4993] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.35.192/26 handle="k8s-pod-network.1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.248 [INFO][4993] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2 Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.255 [INFO][4993] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.35.192/26 handle="k8s-pod-network.1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.264 [INFO][4993] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.35.200/26] block=192.168.35.192/26 handle="k8s-pod-network.1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.264 [INFO][4993] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.200/26] handle="k8s-pod-network.1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" host="srv-hsmf0.gb1.brightbox.com" Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.264 [INFO][4993] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 19 13:05:47.301541 containerd[1646]: 2026-01-19 13:05:47.264 [INFO][4993] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.35.200/26] IPv6=[] ContainerID="1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" HandleID="k8s-pod-network.1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" Workload="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--22dld-eth0" Jan 19 13:05:47.303214 containerd[1646]: 2026-01-19 13:05:47.269 [INFO][4982] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" Namespace="calico-apiserver" Pod="calico-apiserver-77bb946844-22dld" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--22dld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--22dld-eth0", GenerateName:"calico-apiserver-77bb946844-", Namespace:"calico-apiserver", SelfLink:"", UID:"207bff47-91b8-40f6-a83c-1de3cb3c792c", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 13, 4, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bb946844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hsmf0.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-77bb946844-22dld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali667bc477211", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 13:05:47.303214 containerd[1646]: 2026-01-19 13:05:47.269 [INFO][4982] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.200/32] ContainerID="1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" Namespace="calico-apiserver" Pod="calico-apiserver-77bb946844-22dld" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--22dld-eth0" Jan 19 13:05:47.303214 containerd[1646]: 2026-01-19 13:05:47.269 [INFO][4982] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali667bc477211 ContainerID="1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" Namespace="calico-apiserver" Pod="calico-apiserver-77bb946844-22dld" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--22dld-eth0" Jan 19 13:05:47.303214 containerd[1646]: 2026-01-19 13:05:47.276 [INFO][4982] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" Namespace="calico-apiserver" Pod="calico-apiserver-77bb946844-22dld" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--22dld-eth0" Jan 19 13:05:47.303214 containerd[1646]: 2026-01-19 
13:05:47.278 [INFO][4982] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" Namespace="calico-apiserver" Pod="calico-apiserver-77bb946844-22dld" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--22dld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--22dld-eth0", GenerateName:"calico-apiserver-77bb946844-", Namespace:"calico-apiserver", SelfLink:"", UID:"207bff47-91b8-40f6-a83c-1de3cb3c792c", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.January, 19, 13, 4, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77bb946844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-hsmf0.gb1.brightbox.com", ContainerID:"1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2", Pod:"calico-apiserver-77bb946844-22dld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali667bc477211", MAC:"52:78:88:91:25:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 19 13:05:47.303214 containerd[1646]: 2026-01-19 13:05:47.296 [INFO][4982] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" Namespace="calico-apiserver" Pod="calico-apiserver-77bb946844-22dld" WorkloadEndpoint="srv--hsmf0.gb1.brightbox.com-k8s-calico--apiserver--77bb946844--22dld-eth0" Jan 19 13:05:47.317032 systemd-networkd[1557]: calie00b3504407: Gained IPv6LL Jan 19 13:05:47.352000 audit[5007]: NETFILTER_CFG table=filter:136 family=2 entries=57 op=nft_register_chain pid=5007 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 19 13:05:47.352000 audit[5007]: SYSCALL arch=c000003e syscall=46 success=yes exit=27812 a0=3 a1=7ffc11c86470 a2=0 a3=7ffc11c8645c items=0 ppid=4321 pid=5007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:47.352000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 19 13:05:47.355545 containerd[1646]: time="2026-01-19T13:05:47.355428384Z" level=info msg="connecting to shim 1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2" address="unix:///run/containerd/s/71e4ba9fa2debc1d4d81aefcae4007139473dc11cc279ee5ade48cf49bcbb630" namespace=k8s.io protocol=ttrpc version=3 Jan 19 13:05:47.399119 systemd[1]: Started 
cri-containerd-1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2.scope - libcontainer container 1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2. Jan 19 13:05:47.417000 audit: BPF prog-id=255 op=LOAD Jan 19 13:05:47.417000 audit: BPF prog-id=256 op=LOAD Jan 19 13:05:47.417000 audit[5027]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5015 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:47.417000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138303661303532663133376366353762336362623761383633366339 Jan 19 13:05:47.418000 audit: BPF prog-id=256 op=UNLOAD Jan 19 13:05:47.418000 audit[5027]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5015 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:47.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138303661303532663133376366353762336362623761383633366339 Jan 19 13:05:47.418000 audit: BPF prog-id=257 op=LOAD Jan 19 13:05:47.418000 audit[5027]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5015 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:47.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138303661303532663133376366353762336362623761383633366339 Jan 19 13:05:47.418000 audit: BPF prog-id=258 op=LOAD Jan 19 13:05:47.418000 audit[5027]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5015 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:47.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138303661303532663133376366353762336362623761383633366339 Jan 19 13:05:47.418000 audit: BPF prog-id=258 op=UNLOAD Jan 19 13:05:47.418000 audit[5027]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5015 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:47.418000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138303661303532663133376366353762336362623761383633366339 Jan 19 13:05:47.418000 audit: BPF prog-id=257 op=UNLOAD Jan 19 13:05:47.418000 audit[5027]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5015 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:47.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138303661303532663133376366353762336362623761383633366339 Jan 19 13:05:47.418000 audit: BPF prog-id=259 op=LOAD Jan 19 13:05:47.418000 audit[5027]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5015 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:47.418000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138303661303532663133376366353762336362623761383633366339 Jan 19 13:05:47.472494 containerd[1646]: time="2026-01-19T13:05:47.472405180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77bb946844-22dld,Uid:207bff47-91b8-40f6-a83c-1de3cb3c792c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1806a052f137cf57b3cbb7a8636c97a37936b64299b7b198af0f31dafca196f2\"" Jan 19 13:05:47.475377 containerd[1646]: time="2026-01-19T13:05:47.475331525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 13:05:47.790222 containerd[1646]: time="2026-01-19T13:05:47.789766303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:05:47.791075 containerd[1646]: time="2026-01-19T13:05:47.791018222Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 13:05:47.791171 containerd[1646]: time="2026-01-19T13:05:47.791142986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 13:05:47.792369 kubelet[2939]: E0119 13:05:47.791284 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 13:05:47.792369 kubelet[2939]: E0119 13:05:47.791337 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 13:05:47.792369 
kubelet[2939]: E0119 13:05:47.791509 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c7rrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bb946844-22dld_calico-apiserver(207bff47-91b8-40f6-a83c-1de3cb3c792c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 13:05:47.792782 kubelet[2939]: E0119 13:05:47.792695 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb946844-22dld" podUID="207bff47-91b8-40f6-a83c-1de3cb3c792c" Jan 19 13:05:48.114062 containerd[1646]: time="2026-01-19T13:05:48.114005666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 19 13:05:48.431531 containerd[1646]: time="2026-01-19T13:05:48.431368726Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:05:48.433900 containerd[1646]: time="2026-01-19T13:05:48.433718991Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 19 13:05:48.433900 containerd[1646]: time="2026-01-19T13:05:48.433772616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 19 13:05:48.434173 kubelet[2939]: E0119 13:05:48.434106 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 13:05:48.434658 kubelet[2939]: E0119 13:05:48.434270 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 13:05:48.434658 kubelet[2939]: E0119 13:05:48.434531 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:818d65b8ce8c49218c255fe0e28b9b06,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nq6h4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-795b45d676-wvh8b_calico-system(7526f50b-859a-4390-ae5e-37e152f03638): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 19 13:05:48.437670 containerd[1646]: time="2026-01-19T13:05:48.437369982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 19 13:05:48.746994 containerd[1646]: time="2026-01-19T13:05:48.746856915Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:05:48.749785 containerd[1646]: time="2026-01-19T13:05:48.748701149Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 19 13:05:48.749785 
containerd[1646]: time="2026-01-19T13:05:48.748792008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 19 13:05:48.750572 kubelet[2939]: E0119 13:05:48.750490 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 13:05:48.750810 kubelet[2939]: E0119 13:05:48.750781 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 13:05:48.751150 kubelet[2939]: E0119 13:05:48.751080 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq6h4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-795b45d676-wvh8b_calico-system(7526f50b-859a-4390-ae5e-37e152f03638): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 19 13:05:48.753077 kubelet[2939]: E0119 13:05:48.752974 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795b45d676-wvh8b" podUID="7526f50b-859a-4390-ae5e-37e152f03638" Jan 19 13:05:48.760950 kubelet[2939]: E0119 13:05:48.760646 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb946844-22dld" podUID="207bff47-91b8-40f6-a83c-1de3cb3c792c" Jan 19 13:05:48.791000 audit[5060]: NETFILTER_CFG table=filter:137 family=2 entries=14 op=nft_register_rule pid=5060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:48.795916 kernel: kauditd_printk_skb: 124 callbacks suppressed Jan 19 13:05:48.796196 kernel: audit: type=1325 audit(1768827948.791:748): table=filter:137 family=2 entries=14 op=nft_register_rule pid=5060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:48.791000 audit[5060]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffffb60d060 a2=0 a3=7ffffb60d04c items=0 ppid=3085 pid=5060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:48.801623 kernel: audit: type=1300 audit(1768827948.791:748): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffffb60d060 a2=0 a3=7ffffb60d04c items=0 ppid=3085 pid=5060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:48.791000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:48.803000 audit[5060]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:48.810787 kernel: audit: type=1327 audit(1768827948.791:748): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:48.811523 kernel: audit: type=1325 audit(1768827948.803:749): table=nat:138 family=2 entries=20 op=nft_register_rule pid=5060 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:05:48.803000 audit[5060]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffffb60d060 a2=0 a3=7ffffb60d04c items=0 ppid=3085 pid=5060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:48.820848 kernel: audit: type=1300 audit(1768827948.803:749): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffffb60d060 a2=0 a3=7ffffb60d04c items=0 ppid=3085 pid=5060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:05:48.803000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:48.826342 kernel: audit: type=1327 audit(1768827948.803:749): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:05:48.981509 systemd-networkd[1557]: cali667bc477211: Gained IPv6LL Jan 19 13:05:49.096037 containerd[1646]: time="2026-01-19T13:05:49.095478247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 19 13:05:49.435019 containerd[1646]: time="2026-01-19T13:05:49.434962097Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:05:49.437615 containerd[1646]: time="2026-01-19T13:05:49.437478095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 19 13:05:49.437615 containerd[1646]: time="2026-01-19T13:05:49.437585232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 19 13:05:49.437876 kubelet[2939]: E0119 13:05:49.437784 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 13:05:49.439377 kubelet[2939]: E0119 13:05:49.437894 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 13:05:49.439377 kubelet[2939]: E0119 13:05:49.438121 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hh5hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nlspt_calico-system(2f32ab2a-e7b2-4a72-8b17-d785aad340e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 19 13:05:49.440188 kubelet[2939]: E0119 13:05:49.440079 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nlspt" podUID="2f32ab2a-e7b2-4a72-8b17-d785aad340e2" Jan 19 13:05:51.095101 containerd[1646]: time="2026-01-19T13:05:51.095049904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 19 
13:05:51.405951 containerd[1646]: time="2026-01-19T13:05:51.405864936Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:05:51.407155 containerd[1646]: time="2026-01-19T13:05:51.407091789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 19 13:05:51.407410 containerd[1646]: time="2026-01-19T13:05:51.407249696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 19 13:05:51.407554 kubelet[2939]: E0119 13:05:51.407471 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 13:05:51.408098 kubelet[2939]: E0119 13:05:51.407573 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 13:05:51.408098 kubelet[2939]: E0119 13:05:51.407847 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkt48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tzctk_calico-system(9533a5f4-a04a-442d-b08c-488e8c9d1e7c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 19 13:05:51.412901 containerd[1646]: time="2026-01-19T13:05:51.412598542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 19 13:05:51.725707 containerd[1646]: time="2026-01-19T13:05:51.725526302Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:05:51.726950 containerd[1646]: time="2026-01-19T13:05:51.726898622Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 19 13:05:51.727030 containerd[1646]: time="2026-01-19T13:05:51.726996989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 19 13:05:51.727277 kubelet[2939]: E0119 13:05:51.727216 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 13:05:51.727367 kubelet[2939]: E0119 13:05:51.727295 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 13:05:51.727518 kubelet[2939]: E0119 13:05:51.727442 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkt48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tzctk_calico-system(9533a5f4-a04a-442d-b08c-488e8c9d1e7c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 19 13:05:51.728980 kubelet[2939]: E0119 13:05:51.728905 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tzctk" podUID="9533a5f4-a04a-442d-b08c-488e8c9d1e7c" Jan 19 13:05:55.471568 systemd[1]: Started sshd@11-10.243.74.46:22-188.166.92.220:52620.service - OpenSSH per-connection server daemon (188.166.92.220:52620). Jan 19 13:05:55.486171 kernel: audit: type=1130 audit(1768827955.471:750): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.243.74.46:22-188.166.92.220:52620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:05:55.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.243.74.46:22-188.166.92.220:52620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:05:55.666461 sshd[5065]: Connection closed by authenticating user root 188.166.92.220 port 52620 [preauth] Jan 19 13:05:55.665000 audit[5065]: USER_ERR pid=5065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=188.166.92.220 addr=188.166.92.220 terminal=ssh res=failed' Jan 19 13:05:55.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.243.74.46:22-188.166.92.220:52620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:05:55.670295 systemd[1]: sshd@11-10.243.74.46:22-188.166.92.220:52620.service: Deactivated successfully. Jan 19 13:05:55.673836 kernel: audit: type=1109 audit(1768827955.665:751): pid=5065 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=188.166.92.220 addr=188.166.92.220 terminal=ssh res=failed' Jan 19 13:05:55.674263 kernel: audit: type=1131 audit(1768827955.669:752): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.243.74.46:22-188.166.92.220:52620 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:05:58.096796 containerd[1646]: time="2026-01-19T13:05:58.095167198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 19 13:05:58.406516 containerd[1646]: time="2026-01-19T13:05:58.406429707Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:05:58.408534 containerd[1646]: time="2026-01-19T13:05:58.408486936Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 19 13:05:58.409108 containerd[1646]: time="2026-01-19T13:05:58.408590104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 19 13:05:58.409778 kubelet[2939]: E0119 13:05:58.409001 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 13:05:58.409778 kubelet[2939]: E0119 13:05:58.409095 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 13:05:58.409778 kubelet[2939]: E0119 13:05:58.409347 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jfnhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77b44585c9-kqfd8_calico-system(2a6c0b6c-6346-4ea5-adab-326e38e7dbe6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 19 13:05:58.412006 kubelet[2939]: E0119 13:05:58.411898 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77b44585c9-kqfd8" podUID="2a6c0b6c-6346-4ea5-adab-326e38e7dbe6" Jan 19 13:05:59.093984 containerd[1646]: time="2026-01-19T13:05:59.093746082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 13:05:59.418388 containerd[1646]: 
time="2026-01-19T13:05:59.418206333Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:05:59.420293 containerd[1646]: time="2026-01-19T13:05:59.420258382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 13:05:59.434459 containerd[1646]: time="2026-01-19T13:05:59.420180979Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 13:05:59.434895 kubelet[2939]: E0119 13:05:59.434783 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 13:05:59.435397 kubelet[2939]: E0119 13:05:59.434915 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 13:05:59.435731 kubelet[2939]: E0119 13:05:59.435602 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ktl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-77bb946844-dx6t6_calico-apiserver(ed22f491-3777-46a9-8e11-3aad3f6a2fdc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 13:05:59.437016 kubelet[2939]: E0119 13:05:59.436935 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb946844-dx6t6" podUID="ed22f491-3777-46a9-8e11-3aad3f6a2fdc" Jan 19 13:06:00.094899 containerd[1646]: time="2026-01-19T13:06:00.093461515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 13:06:00.406228 containerd[1646]: time="2026-01-19T13:06:00.406141701Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:06:00.407912 containerd[1646]: time="2026-01-19T13:06:00.407811132Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 13:06:00.407912 containerd[1646]: time="2026-01-19T13:06:00.407862074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 13:06:00.408284 kubelet[2939]: E0119 13:06:00.408201 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 13:06:00.408284 kubelet[2939]: E0119 13:06:00.408304 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 13:06:00.408922 kubelet[2939]: E0119 13:06:00.408800 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c7rrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bb946844-22dld_calico-apiserver(207bff47-91b8-40f6-a83c-1de3cb3c792c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 13:06:00.410392 kubelet[2939]: E0119 13:06:00.410324 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb946844-22dld" podUID="207bff47-91b8-40f6-a83c-1de3cb3c792c" Jan 19 13:06:01.094491 kubelet[2939]: E0119 13:06:01.094351 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nlspt" podUID="2f32ab2a-e7b2-4a72-8b17-d785aad340e2" Jan 19 13:06:02.098721 kubelet[2939]: E0119 13:06:02.098589 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tzctk" podUID="9533a5f4-a04a-442d-b08c-488e8c9d1e7c" Jan 19 13:06:04.094963 kubelet[2939]: E0119 13:06:04.094802 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795b45d676-wvh8b" podUID="7526f50b-859a-4390-ae5e-37e152f03638" Jan 19 13:06:08.524155 systemd[1]: Started sshd@12-10.243.74.46:22-68.220.241.50:45526.service - OpenSSH per-connection server daemon (68.220.241.50:45526). Jan 19 13:06:08.536934 kernel: audit: type=1130 audit(1768827968.522:753): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.243.74.46:22-68.220.241.50:45526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:08.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.243.74.46:22-68.220.241.50:45526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:06:09.146000 audit[5112]: USER_ACCT pid=5112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:09.149917 sshd[5112]: Accepted publickey for core from 68.220.241.50 port 45526 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:06:09.154888 kernel: audit: type=1101 audit(1768827969.146:754): pid=5112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:09.157111 kernel: audit: type=1103 audit(1768827969.149:755): pid=5112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:09.149000 audit[5112]: CRED_ACQ pid=5112 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:09.154388 sshd-session[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:06:09.169843 kernel: audit: type=1006 audit(1768827969.149:756): pid=5112 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 19 13:06:09.149000 audit[5112]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1fa25990 a2=3 a3=0 items=0 ppid=1 pid=5112 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:09.170951 systemd-logind[1621]: New session 11 of user core. Jan 19 13:06:09.178175 kernel: audit: type=1300 audit(1768827969.149:756): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1fa25990 a2=3 a3=0 items=0 ppid=1 pid=5112 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:09.178383 kernel: audit: type=1327 audit(1768827969.149:756): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:09.149000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:09.181153 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 19 13:06:09.187000 audit[5112]: USER_START pid=5112 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:09.195849 kernel: audit: type=1105 audit(1768827969.187:757): pid=5112 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:09.196000 audit[5116]: CRED_ACQ pid=5116 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:09.204859 kernel: audit: type=1103 audit(1768827969.196:758): pid=5116 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:10.201584 sshd[5116]: Connection closed by 68.220.241.50 port 45526 Jan 19 13:06:10.204776 sshd-session[5112]: pam_unix(sshd:session): session closed for user core Jan 19 13:06:10.216000 audit[5112]: USER_END pid=5112 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:10.230917 kernel: audit: type=1106 audit(1768827970.216:759): pid=5112 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:10.231899 systemd[1]: sshd@12-10.243.74.46:22-68.220.241.50:45526.service: Deactivated successfully. Jan 19 13:06:10.235976 systemd[1]: session-11.scope: Deactivated successfully. Jan 19 13:06:10.238504 systemd-logind[1621]: Session 11 logged out. Waiting for processes to exit. Jan 19 13:06:10.216000 audit[5112]: CRED_DISP pid=5112 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:10.246768 kernel: audit: type=1104 audit(1768827970.216:760): pid=5112 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:10.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.243.74.46:22-68.220.241.50:45526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:10.249373 systemd-logind[1621]: Removed session 11. 
Jan 19 13:06:12.098623 containerd[1646]: time="2026-01-19T13:06:12.098163807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 19 13:06:12.432485 containerd[1646]: time="2026-01-19T13:06:12.432313211Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:06:12.433878 containerd[1646]: time="2026-01-19T13:06:12.433800122Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 19 13:06:12.433982 containerd[1646]: time="2026-01-19T13:06:12.433954117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 19 13:06:12.434332 kubelet[2939]: E0119 13:06:12.434246 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 13:06:12.434974 kubelet[2939]: E0119 13:06:12.434373 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 13:06:12.435424 kubelet[2939]: E0119 13:06:12.435325 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hh5hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nlspt_calico-system(2f32ab2a-e7b2-4a72-8b17-d785aad340e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 19 13:06:12.436636 kubelet[2939]: E0119 13:06:12.436591 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nlspt" podUID="2f32ab2a-e7b2-4a72-8b17-d785aad340e2" Jan 19 13:06:13.095718 kubelet[2939]: E0119 13:06:13.095644 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb946844-dx6t6" podUID="ed22f491-3777-46a9-8e11-3aad3f6a2fdc" Jan 19 13:06:13.098495 kubelet[2939]: E0119 13:06:13.098449 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77b44585c9-kqfd8" podUID="2a6c0b6c-6346-4ea5-adab-326e38e7dbe6" Jan 19 13:06:14.096853 containerd[1646]: time="2026-01-19T13:06:14.095575617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 19 13:06:14.097655 kubelet[2939]: E0119 13:06:14.096766 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb946844-22dld" 
podUID="207bff47-91b8-40f6-a83c-1de3cb3c792c" Jan 19 13:06:14.425237 containerd[1646]: time="2026-01-19T13:06:14.425156424Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:06:14.426450 containerd[1646]: time="2026-01-19T13:06:14.426379321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 19 13:06:14.426589 containerd[1646]: time="2026-01-19T13:06:14.426514351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 19 13:06:14.426915 kubelet[2939]: E0119 13:06:14.426865 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 13:06:14.427107 kubelet[2939]: E0119 13:06:14.427075 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 13:06:14.427579 kubelet[2939]: E0119 13:06:14.427486 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkt48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tzctk_calico-system(9533a5f4-a04a-442d-b08c-488e8c9d1e7c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 19 13:06:14.430935 containerd[1646]: time="2026-01-19T13:06:14.430898639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 19 13:06:14.744448 containerd[1646]: time="2026-01-19T13:06:14.743633658Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:06:14.745652 containerd[1646]: time="2026-01-19T13:06:14.745533798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 19 13:06:14.746623 containerd[1646]: time="2026-01-19T13:06:14.745596847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 19 13:06:14.746738 kubelet[2939]: E0119 13:06:14.746139 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 13:06:14.746738 kubelet[2939]: E0119 13:06:14.746223 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 13:06:14.746738 kubelet[2939]: E0119 13:06:14.746407 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkt48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tzctk_calico-system(9533a5f4-a04a-442d-b08c-488e8c9d1e7c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 19 13:06:14.748383 kubelet[2939]: E0119 13:06:14.748274 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tzctk" podUID="9533a5f4-a04a-442d-b08c-488e8c9d1e7c" Jan 19 13:06:15.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.243.74.46:22-68.220.241.50:57728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:15.315515 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 19 13:06:15.315666 kernel: audit: type=1130 audit(1768827975.307:762): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.243.74.46:22-68.220.241.50:57728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:06:15.308013 systemd[1]: Started sshd@13-10.243.74.46:22-68.220.241.50:57728.service - OpenSSH per-connection server daemon (68.220.241.50:57728). Jan 19 13:06:15.846000 audit[5133]: USER_ACCT pid=5133 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:15.856936 kernel: audit: type=1101 audit(1768827975.846:763): pid=5133 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:15.857031 sshd[5133]: Accepted publickey for core from 68.220.241.50 port 57728 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:06:15.855000 audit[5133]: CRED_ACQ pid=5133 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:15.862735 sshd-session[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:06:15.866129 kernel: audit: type=1103 audit(1768827975.855:764): pid=5133 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:15.875936 kernel: audit: type=1006 audit(1768827975.855:765): pid=5133 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 19 13:06:15.881859 systemd-logind[1621]: New session 12 of user core. Jan 19 13:06:15.895712 kernel: audit: type=1300 audit(1768827975.855:765): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8b20e460 a2=3 a3=0 items=0 ppid=1 pid=5133 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:15.855000 audit[5133]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff8b20e460 a2=3 a3=0 items=0 ppid=1 pid=5133 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:15.855000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:15.899894 kernel: audit: type=1327 audit(1768827975.855:765): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:15.901183 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 19 13:06:15.915943 kernel: audit: type=1105 audit(1768827975.908:766): pid=5133 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:15.908000 audit[5133]: USER_START pid=5133 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:15.920000 audit[5139]: CRED_ACQ pid=5139 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:15.925853 kernel: audit: type=1103 audit(1768827975.920:767): pid=5139 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:16.293351 sshd[5139]: Connection closed by 68.220.241.50 port 57728 Jan 19 13:06:16.294670 sshd-session[5133]: pam_unix(sshd:session): session closed for user core Jan 19 13:06:16.296000 audit[5133]: USER_END pid=5133 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:16.305865 kernel: audit: type=1106 audit(1768827976.296:768): pid=5133 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:16.308643 systemd[1]: sshd@13-10.243.74.46:22-68.220.241.50:57728.service: Deactivated successfully. Jan 19 13:06:16.296000 audit[5133]: CRED_DISP pid=5133 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:16.318169 systemd[1]: session-12.scope: Deactivated successfully. Jan 19 13:06:16.319324 kernel: audit: type=1104 audit(1768827976.296:769): pid=5133 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:16.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.243.74.46:22-68.220.241.50:57728 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:16.322086 systemd-logind[1621]: Session 12 logged out. Waiting for processes to exit. Jan 19 13:06:16.325222 systemd-logind[1621]: Removed session 12. 
Jan 19 13:06:18.108688 containerd[1646]: time="2026-01-19T13:06:18.108610223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 19 13:06:18.431244 containerd[1646]: time="2026-01-19T13:06:18.430716661Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:06:18.432907 containerd[1646]: time="2026-01-19T13:06:18.432789230Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 19 13:06:18.433125 containerd[1646]: time="2026-01-19T13:06:18.432901376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 19 13:06:18.433829 kubelet[2939]: E0119 13:06:18.433464 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 13:06:18.433829 kubelet[2939]: E0119 13:06:18.433541 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 13:06:18.433829 kubelet[2939]: E0119 13:06:18.433718 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:818d65b8ce8c49218c255fe0e28b9b06,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nq6h4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-795b45d676-wvh8b_calico-system(7526f50b-859a-4390-ae5e-37e152f03638): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 19 13:06:18.436307 containerd[1646]: time="2026-01-19T13:06:18.436208771Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 19 13:06:18.747195 containerd[1646]: time="2026-01-19T13:06:18.747015131Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:06:18.749195 containerd[1646]: time="2026-01-19T13:06:18.749157326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 19 13:06:18.749516 containerd[1646]: time="2026-01-19T13:06:18.749133170Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 19 13:06:18.749892 kubelet[2939]: E0119 13:06:18.749777 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 13:06:18.749997 kubelet[2939]: E0119 13:06:18.749915 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 13:06:18.750182 kubelet[2939]: E0119 13:06:18.750105 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq6h4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-795b45d676-wvh8b_calico-system(7526f50b-859a-4390-ae5e-37e152f03638): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 19 13:06:18.751789 kubelet[2939]: E0119 13:06:18.751714 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795b45d676-wvh8b" podUID="7526f50b-859a-4390-ae5e-37e152f03638" Jan 19 13:06:21.412549 systemd[1]: Started sshd@14-10.243.74.46:22-68.220.241.50:57734.service - OpenSSH per-connection server daemon (68.220.241.50:57734). Jan 19 13:06:21.417167 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 19 13:06:21.417407 kernel: audit: type=1130 audit(1768827981.411:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.243.74.46:22-68.220.241.50:57734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:21.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.243.74.46:22-68.220.241.50:57734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:21.962000 audit[5157]: USER_ACCT pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:21.980961 kernel: audit: type=1101 audit(1768827981.962:772): pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:21.981085 kernel: audit: type=1103 audit(1768827981.973:773): pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:21.973000 audit[5157]: CRED_ACQ pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:21.981330 sshd[5157]: Accepted publickey for core from 68.220.241.50 port 57734 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:06:21.977553 sshd-session[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:06:21.974000 audit[5157]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde7a39360 a2=3 a3=0 items=0 ppid=1 pid=5157 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:21.993278 kernel: audit: type=1006 audit(1768827981.974:774): pid=5157 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 19 13:06:21.993393 kernel: audit: type=1300 audit(1768827981.974:774): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde7a39360 a2=3 a3=0 items=0 ppid=1 pid=5157 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:21.996589 systemd-logind[1621]: New session 13 of user core. Jan 19 13:06:21.974000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:21.999852 kernel: audit: type=1327 audit(1768827981.974:774): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:22.001414 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 19 13:06:22.009000 audit[5157]: USER_START pid=5157 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:22.016858 kernel: audit: type=1105 audit(1768827982.009:775): pid=5157 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:22.018000 audit[5161]: CRED_ACQ pid=5161 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:22.024867 kernel: audit: type=1103 audit(1768827982.018:776): pid=5161 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:22.434961 sshd[5161]: Connection closed by 68.220.241.50 port 57734 Jan 19 13:06:22.435998 sshd-session[5157]: pam_unix(sshd:session): session closed for user core Jan 19 13:06:22.440000 audit[5157]: USER_END pid=5157 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:22.453847 kernel: audit: type=1106 audit(1768827982.440:777): pid=5157 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:22.458571 systemd[1]: sshd@14-10.243.74.46:22-68.220.241.50:57734.service: Deactivated successfully. 
Jan 19 13:06:22.458903 systemd-logind[1621]: Session 13 logged out. Waiting for processes to exit. Jan 19 13:06:22.440000 audit[5157]: CRED_DISP pid=5157 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:22.464982 systemd[1]: session-13.scope: Deactivated successfully. Jan 19 13:06:22.465841 kernel: audit: type=1104 audit(1768827982.440:778): pid=5157 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:22.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.243.74.46:22-68.220.241.50:57734 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:22.471784 systemd-logind[1621]: Removed session 13. Jan 19 13:06:25.096452 containerd[1646]: time="2026-01-19T13:06:25.095971274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 13:06:25.445321 containerd[1646]: time="2026-01-19T13:06:25.444955031Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:06:25.447417 containerd[1646]: time="2026-01-19T13:06:25.447295241Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 13:06:25.447591 containerd[1646]: time="2026-01-19T13:06:25.447464427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 13:06:25.448444 kubelet[2939]: E0119 13:06:25.447912 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 13:06:25.448444 kubelet[2939]: E0119 13:06:25.448019 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 13:06:25.450843 kubelet[2939]: E0119 13:06:25.449479 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c7rrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bb946844-22dld_calico-apiserver(207bff47-91b8-40f6-a83c-1de3cb3c792c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 13:06:25.452385 kubelet[2939]: E0119 13:06:25.452313 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb946844-22dld" podUID="207bff47-91b8-40f6-a83c-1de3cb3c792c" Jan 19 13:06:26.097021 kubelet[2939]: E0119 13:06:26.096531 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-tzctk" podUID="9533a5f4-a04a-442d-b08c-488e8c9d1e7c" Jan 19 13:06:27.110042 kubelet[2939]: E0119 13:06:27.109893 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nlspt" podUID="2f32ab2a-e7b2-4a72-8b17-d785aad340e2" Jan 19 13:06:27.112695 containerd[1646]: time="2026-01-19T13:06:27.112143149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 13:06:27.288114 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 19 13:06:27.288318 kernel: audit: type=1130 audit(1768827987.279:780): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.243.74.46:22-188.166.92.220:55516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:27.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.243.74.46:22-188.166.92.220:55516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:27.280376 systemd[1]: Started sshd@15-10.243.74.46:22-188.166.92.220:55516.service - OpenSSH per-connection server daemon (188.166.92.220:55516). Jan 19 13:06:27.436905 sshd[5174]: Connection closed by authenticating user root 188.166.92.220 port 55516 [preauth] Jan 19 13:06:27.439048 containerd[1646]: time="2026-01-19T13:06:27.438490359Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:06:27.439000 audit[5174]: USER_ERR pid=5174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=188.166.92.220 addr=188.166.92.220 terminal=ssh res=failed' Jan 19 13:06:27.445841 kernel: audit: type=1109 audit(1768827987.439:781): pid=5174 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=188.166.92.220 addr=188.166.92.220 terminal=ssh res=failed' Jan 19 13:06:27.446735 containerd[1646]: time="2026-01-19T13:06:27.446594534Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 13:06:27.447163 containerd[1646]: time="2026-01-19T13:06:27.447126822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 13:06:27.450485 systemd[1]: sshd@15-10.243.74.46:22-188.166.92.220:55516.service: Deactivated successfully. 
Jan 19 13:06:27.451968 kubelet[2939]: E0119 13:06:27.451846 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 13:06:27.451968 kubelet[2939]: E0119 13:06:27.451923 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 13:06:27.452272 kubelet[2939]: E0119 13:06:27.452143 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ktl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bb946844-dx6t6_calico-apiserver(ed22f491-3777-46a9-8e11-3aad3f6a2fdc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 13:06:27.453473 kubelet[2939]: E0119 13:06:27.453404 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb946844-dx6t6" podUID="ed22f491-3777-46a9-8e11-3aad3f6a2fdc" Jan 19 13:06:27.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.243.74.46:22-188.166.92.220:55516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:27.459866 kernel: audit: type=1131 audit(1768827987.450:782): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.243.74.46:22-188.166.92.220:55516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:27.549864 kernel: audit: type=1130 audit(1768827987.543:783): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.243.74.46:22-68.220.241.50:40826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:27.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.243.74.46:22-68.220.241.50:40826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:27.544576 systemd[1]: Started sshd@16-10.243.74.46:22-68.220.241.50:40826.service - OpenSSH per-connection server daemon (68.220.241.50:40826). Jan 19 13:06:28.093000 audit[5182]: USER_ACCT pid=5182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:28.105851 kernel: audit: type=1101 audit(1768827988.093:784): pid=5182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:28.107025 sshd[5182]: Accepted publickey for core from 68.220.241.50 port 40826 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:06:28.110264 containerd[1646]: time="2026-01-19T13:06:28.110200732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 19 13:06:28.124505 kernel: audit: type=1103 audit(1768827988.117:785): pid=5182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:28.117000 audit[5182]: CRED_ACQ pid=5182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:28.127760 sshd-session[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:06:28.131867 kernel: audit: type=1006 audit(1768827988.117:786): pid=5182 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 19 13:06:28.138686 kernel: audit: type=1300 audit(1768827988.117:786): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdadd32b30 a2=3 a3=0 items=0 ppid=1 pid=5182 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:28.117000 audit[5182]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdadd32b30 a2=3 a3=0 items=0 ppid=1 pid=5182 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:28.117000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:28.143368 kernel: audit: type=1327 audit(1768827988.117:786): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:28.158983 systemd-logind[1621]: New session 14 of user core. Jan 19 13:06:28.165306 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 19 13:06:28.177000 audit[5182]: USER_START pid=5182 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:28.184956 kernel: audit: type=1105 audit(1768827988.177:787): pid=5182 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:28.184000 audit[5186]: CRED_ACQ pid=5186 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:28.489608 containerd[1646]: time="2026-01-19T13:06:28.489362671Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:06:28.491737 containerd[1646]: time="2026-01-19T13:06:28.491406969Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 19 13:06:28.491912 containerd[1646]: time="2026-01-19T13:06:28.491874897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 19 13:06:28.492510 kubelet[2939]: E0119 13:06:28.492437 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 13:06:28.493446 kubelet[2939]: E0119 13:06:28.493029 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 13:06:28.493685 kubelet[2939]: E0119 13:06:28.493581 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jfnhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77b44585c9-kqfd8_calico-system(2a6c0b6c-6346-4ea5-adab-326e38e7dbe6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 19 13:06:28.494907 kubelet[2939]: E0119 13:06:28.494844 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77b44585c9-kqfd8" podUID="2a6c0b6c-6346-4ea5-adab-326e38e7dbe6" Jan 19 13:06:28.536014 sshd[5186]: Connection closed by 68.220.241.50 port 40826 Jan 19 13:06:28.536596 sshd-session[5182]: pam_unix(sshd:session): session closed for user core Jan 19 13:06:28.541000 audit[5182]: USER_END 
pid=5182 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:28.541000 audit[5182]: CRED_DISP pid=5182 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:28.548360 systemd-logind[1621]: Session 14 logged out. Waiting for processes to exit. Jan 19 13:06:28.550310 systemd[1]: sshd@16-10.243.74.46:22-68.220.241.50:40826.service: Deactivated successfully. Jan 19 13:06:28.551000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.243.74.46:22-68.220.241.50:40826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:28.556901 systemd[1]: session-14.scope: Deactivated successfully. Jan 19 13:06:28.560496 systemd-logind[1621]: Removed session 14. Jan 19 13:06:28.644202 systemd[1]: Started sshd@17-10.243.74.46:22-68.220.241.50:40836.service - OpenSSH per-connection server daemon (68.220.241.50:40836). Jan 19 13:06:28.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.243.74.46:22-68.220.241.50:40836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:29.202000 audit[5199]: USER_ACCT pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:29.203938 sshd[5199]: Accepted publickey for core from 68.220.241.50 port 40836 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:06:29.204000 audit[5199]: CRED_ACQ pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:29.204000 audit[5199]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb3cb3a30 a2=3 a3=0 items=0 ppid=1 pid=5199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:29.204000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:29.207176 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:06:29.218305 systemd-logind[1621]: New session 15 of user core. Jan 19 13:06:29.225203 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 19 13:06:29.231000 audit[5199]: USER_START pid=5199 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:29.235000 audit[5203]: CRED_ACQ pid=5203 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:29.699876 sshd[5203]: Connection closed by 68.220.241.50 port 40836 Jan 19 13:06:29.700647 sshd-session[5199]: pam_unix(sshd:session): session closed for user core Jan 19 13:06:29.703000 audit[5199]: USER_END pid=5199 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:29.703000 audit[5199]: CRED_DISP pid=5199 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:29.710812 systemd[1]: sshd@17-10.243.74.46:22-68.220.241.50:40836.service: Deactivated successfully. Jan 19 13:06:29.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.243.74.46:22-68.220.241.50:40836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:29.718402 systemd[1]: session-15.scope: Deactivated successfully. Jan 19 13:06:29.722809 systemd-logind[1621]: Session 15 logged out. Waiting for processes to exit. Jan 19 13:06:29.725982 systemd-logind[1621]: Removed session 15. Jan 19 13:06:29.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.243.74.46:22-68.220.241.50:40840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:29.805670 systemd[1]: Started sshd@18-10.243.74.46:22-68.220.241.50:40840.service - OpenSSH per-connection server daemon (68.220.241.50:40840). 
Jan 19 13:06:30.336000 audit[5213]: USER_ACCT pid=5213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:30.337444 sshd[5213]: Accepted publickey for core from 68.220.241.50 port 40840 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:06:30.339000 audit[5213]: CRED_ACQ pid=5213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:30.339000 audit[5213]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffde720fc50 a2=3 a3=0 items=0 ppid=1 pid=5213 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:30.339000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:30.343756 sshd-session[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:06:30.361033 systemd-logind[1621]: New session 16 of user core. Jan 19 13:06:30.365350 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 19 13:06:30.374000 audit[5213]: USER_START pid=5213 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:30.380000 audit[5217]: CRED_ACQ pid=5217 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:30.747035 sshd[5217]: Connection closed by 68.220.241.50 port 40840 Jan 19 13:06:30.748247 sshd-session[5213]: pam_unix(sshd:session): session closed for user core Jan 19 13:06:30.750000 audit[5213]: USER_END pid=5213 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:30.750000 audit[5213]: CRED_DISP pid=5213 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:30.756302 systemd-logind[1621]: Session 16 logged out. Waiting for processes to exit. Jan 19 13:06:30.757704 systemd[1]: sshd@18-10.243.74.46:22-68.220.241.50:40840.service: Deactivated successfully. Jan 19 13:06:30.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.243.74.46:22-68.220.241.50:40840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:30.764658 systemd[1]: session-16.scope: Deactivated successfully. 
Jan 19 13:06:30.768904 systemd-logind[1621]: Removed session 16. Jan 19 13:06:32.112692 kubelet[2939]: E0119 13:06:32.112051 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795b45d676-wvh8b" podUID="7526f50b-859a-4390-ae5e-37e152f03638" Jan 19 13:06:35.873674 kernel: kauditd_printk_skb: 26 callbacks suppressed Jan 19 13:06:35.873880 kernel: audit: type=1130 audit(1768827995.859:810): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.243.74.46:22-68.220.241.50:60072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:35.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.243.74.46:22-68.220.241.50:60072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:35.860124 systemd[1]: Started sshd@19-10.243.74.46:22-68.220.241.50:60072.service - OpenSSH per-connection server daemon (68.220.241.50:60072). 
Jan 19 13:06:36.419000 audit[5255]: USER_ACCT pid=5255 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:36.431331 kernel: audit: type=1101 audit(1768827996.419:811): pid=5255 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:36.431421 sshd[5255]: Accepted publickey for core from 68.220.241.50 port 60072 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:06:36.439220 sshd-session[5255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:06:36.435000 audit[5255]: CRED_ACQ pid=5255 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:36.449924 kernel: audit: type=1103 audit(1768827996.435:812): pid=5255 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:36.460153 kernel: audit: type=1006 audit(1768827996.435:813): pid=5255 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 19 13:06:36.462442 systemd-logind[1621]: New session 17 of user core. Jan 19 13:06:36.435000 audit[5255]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9aace8d0 a2=3 a3=0 items=0 ppid=1 pid=5255 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:36.473134 kernel: audit: type=1300 audit(1768827996.435:813): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe9aace8d0 a2=3 a3=0 items=0 ppid=1 pid=5255 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:36.435000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:36.478933 kernel: audit: type=1327 audit(1768827996.435:813): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:36.479440 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 19 13:06:36.487000 audit[5255]: USER_START pid=5255 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:36.495972 kernel: audit: type=1105 audit(1768827996.487:814): pid=5255 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:36.495000 audit[5264]: CRED_ACQ pid=5264 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:36.504027 kernel: audit: type=1103 audit(1768827996.495:815): pid=5264 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:36.866535 sshd[5264]: Connection closed by 68.220.241.50 port 60072 Jan 19 13:06:36.867043 sshd-session[5255]: pam_unix(sshd:session): session closed for user core Jan 19 13:06:36.870000 audit[5255]: USER_END pid=5255 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:36.884883 kernel: audit: type=1106 audit(1768827996.870:816): pid=5255 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:36.870000 audit[5255]: CRED_DISP pid=5255 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:36.891848 kernel: audit: type=1104 audit(1768827996.870:817): pid=5255 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:36.892556 systemd[1]: sshd@19-10.243.74.46:22-68.220.241.50:60072.service: Deactivated successfully. Jan 19 13:06:36.897681 systemd[1]: session-17.scope: Deactivated successfully. Jan 19 13:06:36.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.243.74.46:22-68.220.241.50:60072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:36.904846 systemd-logind[1621]: Session 17 logged out. Waiting for processes to exit. Jan 19 13:06:36.910417 systemd-logind[1621]: Removed session 17. 
Jan 19 13:06:38.097356 kubelet[2939]: E0119 13:06:38.097276 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb946844-22dld" podUID="207bff47-91b8-40f6-a83c-1de3cb3c792c" Jan 19 13:06:39.096057 kubelet[2939]: E0119 13:06:39.095586 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb946844-dx6t6" podUID="ed22f491-3777-46a9-8e11-3aad3f6a2fdc" Jan 19 13:06:40.096370 kubelet[2939]: E0119 13:06:40.096262 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tzctk" podUID="9533a5f4-a04a-442d-b08c-488e8c9d1e7c" Jan 19 13:06:41.094466 kubelet[2939]: E0119 13:06:41.094383 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nlspt" podUID="2f32ab2a-e7b2-4a72-8b17-d785aad340e2" Jan 19 13:06:41.983323 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 19 13:06:41.983540 kernel: audit: type=1130 audit(1768828001.978:819): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.243.74.46:22-68.220.241.50:60086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:41.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.243.74.46:22-68.220.241.50:60086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:41.978935 systemd[1]: Started sshd@20-10.243.74.46:22-68.220.241.50:60086.service - OpenSSH per-connection server daemon (68.220.241.50:60086). 
Jan 19 13:06:42.097793 kubelet[2939]: E0119 13:06:42.097672 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77b44585c9-kqfd8" podUID="2a6c0b6c-6346-4ea5-adab-326e38e7dbe6" Jan 19 13:06:42.516000 audit[5278]: USER_ACCT pid=5278 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:42.520621 sshd[5278]: Accepted publickey for core from 68.220.241.50 port 60086 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:06:42.525972 kernel: audit: type=1101 audit(1768828002.516:820): pid=5278 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:42.529915 sshd-session[5278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:06:42.524000 audit[5278]: CRED_ACQ pid=5278 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:42.539039 kernel: audit: type=1103 audit(1768828002.524:821): pid=5278 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:42.525000 audit[5278]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7c67e340 a2=3 a3=0 items=0 ppid=1 pid=5278 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:42.544640 kernel: audit: type=1006 audit(1768828002.525:822): pid=5278 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 19 13:06:42.544754 kernel: audit: type=1300 audit(1768828002.525:822): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd7c67e340 a2=3 a3=0 items=0 ppid=1 pid=5278 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:42.525000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:42.553878 kernel: audit: type=1327 audit(1768828002.525:822): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:42.558245 systemd-logind[1621]: New session 18 of user core. Jan 19 13:06:42.564167 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 19 13:06:42.572000 audit[5278]: USER_START pid=5278 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:42.581021 kernel: audit: type=1105 audit(1768828002.572:823): pid=5278 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:42.581000 audit[5282]: CRED_ACQ pid=5282 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:42.592917 kernel: audit: type=1103 audit(1768828002.581:824): pid=5282 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:42.928050 sshd[5282]: Connection closed by 68.220.241.50 port 60086 Jan 19 13:06:42.928520 sshd-session[5278]: pam_unix(sshd:session): session closed for user core Jan 19 13:06:42.943911 kernel: audit: type=1106 audit(1768828002.930:825): pid=5278 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:42.930000 audit[5278]: USER_END pid=5278 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:42.950515 systemd[1]: sshd@20-10.243.74.46:22-68.220.241.50:60086.service: Deactivated successfully. Jan 19 13:06:42.956786 systemd[1]: session-18.scope: Deactivated successfully. Jan 19 13:06:42.966041 kernel: audit: type=1104 audit(1768828002.942:826): pid=5278 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:42.942000 audit[5278]: CRED_DISP pid=5278 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:42.960143 systemd-logind[1621]: Session 18 logged out. Waiting for processes to exit. Jan 19 13:06:42.970706 systemd-logind[1621]: Removed session 18. Jan 19 13:06:42.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.243.74.46:22-68.220.241.50:60086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:06:47.098092 kubelet[2939]: E0119 13:06:47.097977 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795b45d676-wvh8b" podUID="7526f50b-859a-4390-ae5e-37e152f03638" Jan 19 13:06:48.047529 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 19 13:06:48.047992 kernel: audit: type=1130 audit(1768828008.033:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.243.74.46:22-68.220.241.50:41666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:48.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.243.74.46:22-68.220.241.50:41666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:48.034248 systemd[1]: Started sshd@21-10.243.74.46:22-68.220.241.50:41666.service - OpenSSH per-connection server daemon (68.220.241.50:41666). Jan 19 13:06:48.619000 audit[5298]: USER_ACCT pid=5298 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:48.630243 sshd[5298]: Accepted publickey for core from 68.220.241.50 port 41666 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:06:48.631742 kernel: audit: type=1101 audit(1768828008.619:829): pid=5298 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:48.634607 sshd-session[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:06:48.631000 audit[5298]: CRED_ACQ pid=5298 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:48.646067 kernel: audit: type=1103 audit(1768828008.631:830): pid=5298 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:48.651774 kernel: audit: type=1006 audit(1768828008.631:831): pid=5298 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 19 13:06:48.651902 kernel: audit: type=1300 
audit(1768828008.631:831): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec8249c00 a2=3 a3=0 items=0 ppid=1 pid=5298 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:48.631000 audit[5298]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffec8249c00 a2=3 a3=0 items=0 ppid=1 pid=5298 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:48.658842 kernel: audit: type=1327 audit(1768828008.631:831): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:48.631000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:48.654493 systemd-logind[1621]: New session 19 of user core. Jan 19 13:06:48.663161 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 19 13:06:48.670000 audit[5298]: USER_START pid=5298 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:48.679861 kernel: audit: type=1105 audit(1768828008.670:832): pid=5298 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:48.680023 kernel: audit: type=1103 audit(1768828008.677:833): pid=5303 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:48.677000 audit[5303]: CRED_ACQ pid=5303 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:49.074963 sshd[5303]: Connection closed by 68.220.241.50 port 41666 Jan 19 13:06:49.080923 sshd-session[5298]: pam_unix(sshd:session): session closed for user core Jan 19 13:06:49.085000 audit[5298]: USER_END pid=5298 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:49.099274 kernel: audit: type=1106 audit(1768828009.085:834): pid=5298 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:49.103414 systemd[1]: sshd@21-10.243.74.46:22-68.220.241.50:41666.service: Deactivated successfully. 
Jan 19 13:06:49.085000 audit[5298]: CRED_DISP pid=5298 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:49.110734 systemd[1]: session-19.scope: Deactivated successfully. Jan 19 13:06:49.113894 kernel: audit: type=1104 audit(1768828009.085:835): pid=5298 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:49.117326 systemd-logind[1621]: Session 19 logged out. Waiting for processes to exit. Jan 19 13:06:49.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.243.74.46:22-68.220.241.50:41666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:49.119897 systemd-logind[1621]: Removed session 19. Jan 19 13:06:52.099605 kubelet[2939]: E0119 13:06:52.099516 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb946844-dx6t6" podUID="ed22f491-3777-46a9-8e11-3aad3f6a2fdc" Jan 19 13:06:52.103164 kubelet[2939]: E0119 13:06:52.103116 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb946844-22dld" podUID="207bff47-91b8-40f6-a83c-1de3cb3c792c" Jan 19 13:06:53.095785 kubelet[2939]: E0119 13:06:53.095616 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77b44585c9-kqfd8" podUID="2a6c0b6c-6346-4ea5-adab-326e38e7dbe6" Jan 19 13:06:53.097131 kubelet[2939]: E0119 13:06:53.095644 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tzctk" podUID="9533a5f4-a04a-442d-b08c-488e8c9d1e7c" Jan 19 13:06:54.180174 systemd[1]: Started sshd@22-10.243.74.46:22-68.220.241.50:40604.service - OpenSSH per-connection server daemon (68.220.241.50:40604). Jan 19 13:06:54.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.243.74.46:22-68.220.241.50:40604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:54.188064 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 19 13:06:54.188158 kernel: audit: type=1130 audit(1768828014.179:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.243.74.46:22-68.220.241.50:40604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:54.759000 audit[5315]: USER_ACCT pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:54.772382 kernel: audit: type=1101 audit(1768828014.759:838): pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:54.772475 kernel: audit: type=1103 audit(1768828014.766:839): pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:54.766000 audit[5315]: CRED_ACQ pid=5315 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:54.771763 sshd-session[5315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:06:54.773809 sshd[5315]: Accepted publickey for core from 68.220.241.50 port 40604 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:06:54.780844 kernel: audit: type=1006 audit(1768828014.766:840): pid=5315 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 19 13:06:54.766000 audit[5315]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd2c43da50 a2=3 a3=0 items=0 ppid=1 pid=5315 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:54.789195 kernel: audit: type=1300 audit(1768828014.766:840): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd2c43da50 a2=3 a3=0 items=0 ppid=1 pid=5315 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:54.766000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:54.794833 kernel: audit: type=1327 audit(1768828014.766:840): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:54.801133 systemd-logind[1621]: New session 20 of user core. Jan 19 13:06:54.809234 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 19 13:06:54.816000 audit[5315]: USER_START pid=5315 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:54.823864 kernel: audit: type=1105 audit(1768828014.816:841): pid=5315 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:54.825000 audit[5319]: CRED_ACQ pid=5319 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:54.832888 kernel: audit: type=1103 audit(1768828014.825:842): pid=5319 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:55.192691 sshd[5319]: Connection closed by 68.220.241.50 port 40604 Jan 19 13:06:55.193212 sshd-session[5315]: pam_unix(sshd:session): session closed for user core Jan 19 13:06:55.196000 audit[5315]: USER_END pid=5315 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:55.211410 kernel: audit: type=1106 audit(1768828015.196:843): pid=5315 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:55.209960 systemd[1]: sshd@22-10.243.74.46:22-68.220.241.50:40604.service: Deactivated successfully. Jan 19 13:06:55.204000 audit[5315]: CRED_DISP pid=5315 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:55.220702 kernel: audit: type=1104 audit(1768828015.204:844): pid=5315 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:55.220455 systemd[1]: session-20.scope: Deactivated successfully. Jan 19 13:06:55.224102 systemd-logind[1621]: Session 20 logged out. 
Waiting for processes to exit. Jan 19 13:06:55.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.243.74.46:22-68.220.241.50:40604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:55.228042 systemd-logind[1621]: Removed session 20. Jan 19 13:06:55.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.243.74.46:22-68.220.241.50:40608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:55.302214 systemd[1]: Started sshd@23-10.243.74.46:22-68.220.241.50:40608.service - OpenSSH per-connection server daemon (68.220.241.50:40608). Jan 19 13:06:55.839000 audit[5331]: USER_ACCT pid=5331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:55.840165 sshd[5331]: Accepted publickey for core from 68.220.241.50 port 40608 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:06:55.843000 audit[5331]: CRED_ACQ pid=5331 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:55.844000 audit[5331]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd9d4a3ad0 a2=3 a3=0 items=0 ppid=1 pid=5331 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:55.844000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:55.847956 sshd-session[5331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:06:55.863079 systemd-logind[1621]: New session 21 of user core. Jan 19 13:06:55.871047 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 19 13:06:55.876000 audit[5331]: USER_START pid=5331 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:55.881000 audit[5335]: CRED_ACQ pid=5335 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:56.102094 containerd[1646]: time="2026-01-19T13:06:56.101766860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 19 13:06:56.436567 containerd[1646]: time="2026-01-19T13:06:56.436365152Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:06:56.442607 containerd[1646]: time="2026-01-19T13:06:56.442541566Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 19 13:06:56.444078 containerd[1646]: time="2026-01-19T13:06:56.442557942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 19 13:06:56.444208 kubelet[2939]: E0119 13:06:56.444134 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 13:06:56.444798 kubelet[2939]: E0119 13:06:56.444245 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 19 13:06:56.444798 kubelet[2939]: E0119 13:06:56.444618 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hh5hv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-nlspt_calico-system(2f32ab2a-e7b2-4a72-8b17-d785aad340e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 19 13:06:56.447093 kubelet[2939]: E0119 13:06:56.447047 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nlspt" podUID="2f32ab2a-e7b2-4a72-8b17-d785aad340e2" Jan 19 13:06:56.695029 sshd[5335]: Connection closed by 68.220.241.50 port 40608 Jan 19 13:06:56.697506 sshd-session[5331]: pam_unix(sshd:session): 
session closed for user core Jan 19 13:06:56.704000 audit[5331]: USER_END pid=5331 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:56.704000 audit[5331]: CRED_DISP pid=5331 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:56.710306 systemd[1]: sshd@23-10.243.74.46:22-68.220.241.50:40608.service: Deactivated successfully. Jan 19 13:06:56.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.243.74.46:22-68.220.241.50:40608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:56.716734 systemd[1]: session-21.scope: Deactivated successfully. Jan 19 13:06:56.722694 systemd-logind[1621]: Session 21 logged out. Waiting for processes to exit. Jan 19 13:06:56.725002 systemd-logind[1621]: Removed session 21. Jan 19 13:06:56.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.243.74.46:22-68.220.241.50:40622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:56.798393 systemd[1]: Started sshd@24-10.243.74.46:22-68.220.241.50:40622.service - OpenSSH per-connection server daemon (68.220.241.50:40622). Jan 19 13:06:57.391000 audit[5346]: USER_ACCT pid=5346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:57.393406 sshd[5346]: Accepted publickey for core from 68.220.241.50 port 40622 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:06:57.393000 audit[5346]: CRED_ACQ pid=5346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:57.394000 audit[5346]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda33c8770 a2=3 a3=0 items=0 ppid=1 pid=5346 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:57.394000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:57.397023 sshd-session[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:06:57.414924 systemd-logind[1621]: New session 22 of user core. Jan 19 13:06:57.420114 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 19 13:06:57.426000 audit[5346]: USER_START pid=5346 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:57.432000 audit[5350]: CRED_ACQ pid=5350 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:58.824000 audit[5366]: NETFILTER_CFG table=filter:139 family=2 entries=26 op=nft_register_rule pid=5366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:06:58.824000 audit[5366]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffcfc596f70 a2=0 a3=7ffcfc596f5c items=0 ppid=3085 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:58.824000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:06:58.832000 audit[5366]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=5366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:06:58.832000 audit[5366]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffcfc596f70 a2=0 a3=0 items=0 ppid=3085 pid=5366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:58.832000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:06:58.861799 sshd[5350]: Connection closed by 68.220.241.50 port 40622 Jan 19 13:06:58.862987 sshd-session[5346]: pam_unix(sshd:session): session closed for user core Jan 19 13:06:58.867000 audit[5346]: USER_END pid=5346 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:58.871000 audit[5346]: CRED_DISP pid=5346 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:58.884377 systemd[1]: sshd@24-10.243.74.46:22-68.220.241.50:40622.service: Deactivated successfully. Jan 19 13:06:58.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.243.74.46:22-68.220.241.50:40622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:06:58.890000 audit[5368]: NETFILTER_CFG table=filter:141 family=2 entries=38 op=nft_register_rule pid=5368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:06:58.890000 audit[5368]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe3cbe23b0 a2=0 a3=7ffe3cbe239c items=0 ppid=3085 pid=5368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:58.890000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:06:58.894705 systemd[1]: session-22.scope: Deactivated successfully. Jan 19 13:06:58.900997 systemd-logind[1621]: Session 22 logged out. Waiting for processes to exit. Jan 19 13:06:58.903000 audit[5368]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=5368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:06:58.903000 audit[5368]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe3cbe23b0 a2=0 a3=0 items=0 ppid=3085 pid=5368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:58.903000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:06:58.904869 systemd-logind[1621]: Removed session 22. Jan 19 13:06:58.969283 systemd[1]: Started sshd@25-10.243.74.46:22-68.220.241.50:40624.service - OpenSSH per-connection server daemon (68.220.241.50:40624). Jan 19 13:06:58.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.243.74.46:22-68.220.241.50:40624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:59.095618 containerd[1646]: time="2026-01-19T13:06:59.094797747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 19 13:06:59.272800 systemd[1]: Started sshd@26-10.243.74.46:22-188.166.92.220:57818.service - OpenSSH per-connection server daemon (188.166.92.220:57818). Jan 19 13:06:59.281342 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 19 13:06:59.281447 kernel: audit: type=1130 audit(1768828019.272:869): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.243.74.46:22-188.166.92.220:57818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:59.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.243.74.46:22-188.166.92.220:57818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:06:59.419692 containerd[1646]: time="2026-01-19T13:06:59.419598551Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:06:59.420993 containerd[1646]: time="2026-01-19T13:06:59.420944246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 19 13:06:59.421088 containerd[1646]: time="2026-01-19T13:06:59.421048712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 19 13:06:59.421356 kubelet[2939]: E0119 13:06:59.421287 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 13:06:59.421957 kubelet[2939]: E0119 13:06:59.421378 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 19 13:06:59.421957 kubelet[2939]: E0119 13:06:59.421562 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:818d65b8ce8c49218c255fe0e28b9b06,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nq6h4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-795b45d676-wvh8b_calico-system(7526f50b-859a-4390-ae5e-37e152f03638): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 19 13:06:59.428215 containerd[1646]: time="2026-01-19T13:06:59.427576153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 19 13:06:59.534000 audit[5373]: USER_ACCT pid=5373 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:59.541859 kernel: audit: type=1101 audit(1768828019.534:870): pid=5373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:59.541955 sshd[5373]: Accepted publickey for core from 68.220.241.50 port 40624 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:06:59.546399 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:06:59.542000 audit[5373]: CRED_ACQ pid=5373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:59.552862 kernel: audit: type=1103 audit(1768828019.542:871): pid=5373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:59.563050 kernel: audit: type=1006 audit(1768828019.542:872): pid=5373 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 19 13:06:59.565122 systemd-logind[1621]: New session 23 of user core. Jan 19 13:06:59.542000 audit[5373]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda4c7b9e0 a2=3 a3=0 items=0 ppid=1 pid=5373 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:59.572856 kernel: audit: type=1300 audit(1768828019.542:872): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffda4c7b9e0 a2=3 a3=0 items=0 ppid=1 pid=5373 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:06:59.578206 kernel: audit: type=1327 audit(1768828019.542:872): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:59.542000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:06:59.577438 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 19 13:06:59.583000 audit[5373]: USER_START pid=5373 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:59.594235 kernel: audit: type=1105 audit(1768828019.583:873): pid=5373 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:59.594332 kernel: audit: type=1103 audit(1768828019.590:874): pid=5386 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:59.590000 audit[5386]: CRED_ACQ pid=5386 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:06:59.688417 sshd[5382]: Connection closed by authenticating user root 188.166.92.220 port 57818 [preauth] Jan 19 13:06:59.690000 audit[5382]: USER_ERR pid=5382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=188.166.92.220 addr=188.166.92.220 terminal=ssh res=failed' Jan 19 13:06:59.696987 kernel: audit: type=1109 audit(1768828019.690:875): pid=5382 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=188.166.92.220 addr=188.166.92.220 terminal=ssh res=failed' Jan 19 13:06:59.700143 systemd[1]: sshd@26-10.243.74.46:22-188.166.92.220:57818.service: Deactivated successfully. Jan 19 13:06:59.709854 kernel: audit: type=1131 audit(1768828019.699:876): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.243.74.46:22-188.166.92.220:57818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:06:59.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.243.74.46:22-188.166.92.220:57818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:06:59.766340 containerd[1646]: time="2026-01-19T13:06:59.766104240Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:06:59.773136 containerd[1646]: time="2026-01-19T13:06:59.772657626Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 19 13:06:59.773437 containerd[1646]: time="2026-01-19T13:06:59.773118104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 19 13:06:59.775680 kubelet[2939]: E0119 13:06:59.774860 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 13:06:59.775680 kubelet[2939]: E0119 13:06:59.774940 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 19 13:06:59.775680 kubelet[2939]: E0119 13:06:59.775102 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nq6h4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-795b45d676-wvh8b_calico-system(7526f50b-859a-4390-ae5e-37e152f03638): ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 19 13:06:59.776940 kubelet[2939]: E0119 13:06:59.776801 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795b45d676-wvh8b" podUID="7526f50b-859a-4390-ae5e-37e152f03638" Jan 19 13:07:00.241731 sshd[5386]: Connection closed by 68.220.241.50 port 40624 Jan 19 13:07:00.243105 sshd-session[5373]: pam_unix(sshd:session): session closed for user core Jan 19 13:07:00.244000 audit[5373]: USER_END pid=5373 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:00.244000 audit[5373]: CRED_DISP pid=5373 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:00.249592 systemd-logind[1621]: Session 23 logged out. Waiting for processes to exit. Jan 19 13:07:00.252562 systemd[1]: sshd@25-10.243.74.46:22-68.220.241.50:40624.service: Deactivated successfully. Jan 19 13:07:00.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.243.74.46:22-68.220.241.50:40624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:07:00.258138 systemd[1]: session-23.scope: Deactivated successfully. Jan 19 13:07:00.262626 systemd-logind[1621]: Removed session 23. Jan 19 13:07:00.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.243.74.46:22-68.220.241.50:40638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:07:00.348212 systemd[1]: Started sshd@27-10.243.74.46:22-68.220.241.50:40638.service - OpenSSH per-connection server daemon (68.220.241.50:40638). 
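The 404 responses from ghcr.io reported above mean the requested Calico tags are simply absent from that registry, so kubelet keeps failing with ErrImagePull. A hedged sketch of how this can be confirmed out-of-band, assuming the repository issues anonymous pull tokens through the standard registry v2 token endpoint (the image name and tag are taken from the log; the requests library is an assumption of this example):

    # Sketch: ask ghcr.io whether a manifest exists for the tag kubelet is trying to pull.
    import requests

    repo, tag = "flatcar/calico/whisker", "v3.30.4"

    # Anonymous pull token for the repository (registry v2 token auth).
    token = requests.get(
        "https://ghcr.io/token", params={"scope": f"repository:{repo}:pull"}
    ).json()["token"]

    # HEAD the manifest; a missing tag comes back as 404, matching the pull errors above.
    resp = requests.head(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json",
        },
    )
    print(resp.status_code)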
Jan 19 13:07:00.877000 audit[5398]: USER_ACCT pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:00.878431 sshd[5398]: Accepted publickey for core from 68.220.241.50 port 40638 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:07:00.879000 audit[5398]: CRED_ACQ pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:00.879000 audit[5398]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc96390360 a2=3 a3=0 items=0 ppid=1 pid=5398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:07:00.879000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:07:00.881487 sshd-session[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:07:00.889580 systemd-logind[1621]: New session 24 of user core. Jan 19 13:07:00.898427 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 19 13:07:00.903000 audit[5398]: USER_START pid=5398 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:00.907000 audit[5402]: CRED_ACQ pid=5402 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:01.269023 sshd[5402]: Connection closed by 68.220.241.50 port 40638 Jan 19 13:07:01.270526 sshd-session[5398]: pam_unix(sshd:session): session closed for user core Jan 19 13:07:01.274000 audit[5398]: USER_END pid=5398 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:01.274000 audit[5398]: CRED_DISP pid=5398 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:01.279869 systemd-logind[1621]: Session 24 logged out. Waiting for processes to exit. Jan 19 13:07:01.280622 systemd[1]: sshd@27-10.243.74.46:22-68.220.241.50:40638.service: Deactivated successfully. Jan 19 13:07:01.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.243.74.46:22-68.220.241.50:40638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:07:01.283991 systemd[1]: session-24.scope: Deactivated successfully. 
Jan 19 13:07:01.287140 systemd-logind[1621]: Removed session 24. Jan 19 13:07:04.096498 kubelet[2939]: E0119 13:07:04.094936 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb946844-dx6t6" podUID="ed22f491-3777-46a9-8e11-3aad3f6a2fdc" Jan 19 13:07:05.095563 kubelet[2939]: E0119 13:07:05.095494 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb946844-22dld" podUID="207bff47-91b8-40f6-a83c-1de3cb3c792c" Jan 19 13:07:06.378474 systemd[1]: Started sshd@28-10.243.74.46:22-68.220.241.50:35996.service - OpenSSH per-connection server daemon (68.220.241.50:35996). Jan 19 13:07:06.382253 kernel: kauditd_printk_skb: 14 callbacks suppressed Jan 19 13:07:06.382433 kernel: audit: type=1130 audit(1768828026.376:889): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.243.74.46:22-68.220.241.50:35996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:07:06.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.243.74.46:22-68.220.241.50:35996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:07:06.928648 sshd[5440]: Accepted publickey for core from 68.220.241.50 port 35996 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:07:06.941137 kernel: audit: type=1101 audit(1768828026.926:890): pid=5440 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:06.926000 audit[5440]: USER_ACCT pid=5440 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:06.945642 sshd-session[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:07:06.940000 audit[5440]: CRED_ACQ pid=5440 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:06.950840 kernel: audit: type=1103 audit(1768828026.940:891): pid=5440 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:06.954841 kernel: audit: type=1006 audit(1768828026.940:892): pid=5440 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 19 13:07:06.940000 audit[5440]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3aa36960 a2=3 a3=0 items=0 ppid=1 pid=5440 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:07:06.940000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:07:06.963852 kernel: audit: type=1300 audit(1768828026.940:892): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc3aa36960 a2=3 a3=0 items=0 ppid=1 pid=5440 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:07:06.963941 kernel: audit: type=1327 audit(1768828026.940:892): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:07:06.971174 systemd-logind[1621]: New session 25 of user core. Jan 19 13:07:06.979799 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 19 13:07:06.985000 audit[5440]: USER_START pid=5440 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:06.994843 kernel: audit: type=1105 audit(1768828026.985:893): pid=5440 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:06.993000 audit[5444]: CRED_ACQ pid=5444 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:07.000840 kernel: audit: type=1103 audit(1768828026.993:894): pid=5444 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:07.097491 containerd[1646]: time="2026-01-19T13:07:07.097099019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 19 13:07:07.101408 kubelet[2939]: E0119 13:07:07.101064 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77b44585c9-kqfd8" podUID="2a6c0b6c-6346-4ea5-adab-326e38e7dbe6" Jan 19 13:07:07.413705 containerd[1646]: time="2026-01-19T13:07:07.413361601Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:07:07.416834 containerd[1646]: time="2026-01-19T13:07:07.414911918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 19 13:07:07.416834 containerd[1646]: time="2026-01-19T13:07:07.414971245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 19 13:07:07.417260 kubelet[2939]: E0119 13:07:07.417172 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 13:07:07.417665 kubelet[2939]: E0119 13:07:07.417636 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 19 13:07:07.418604 kubelet[2939]: E0119 13:07:07.418524 2939 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkt48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tzctk_calico-system(9533a5f4-a04a-442d-b08c-488e8c9d1e7c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 19 13:07:07.421320 containerd[1646]: time="2026-01-19T13:07:07.421168764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 19 13:07:07.434206 sshd[5444]: Connection closed by 68.220.241.50 port 35996 Jan 19 13:07:07.435484 sshd-session[5440]: pam_unix(sshd:session): session closed for user core Jan 19 13:07:07.437000 audit[5440]: USER_END pid=5440 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:07.454880 kernel: audit: type=1106 audit(1768828027.437:895): pid=5440 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:07.455127 kernel: audit: type=1104 audit(1768828027.437:896): pid=5440 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:07.437000 audit[5440]: CRED_DISP pid=5440 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:07.459748 systemd[1]: sshd@28-10.243.74.46:22-68.220.241.50:35996.service: Deactivated successfully. Jan 19 13:07:07.469078 systemd[1]: session-25.scope: Deactivated successfully. Jan 19 13:07:07.472860 systemd-logind[1621]: Session 25 logged out. Waiting for processes to exit. Jan 19 13:07:07.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.243.74.46:22-68.220.241.50:35996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:07:07.477940 systemd-logind[1621]: Removed session 25. Jan 19 13:07:07.758938 containerd[1646]: time="2026-01-19T13:07:07.757413598Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:07:07.759941 containerd[1646]: time="2026-01-19T13:07:07.759590951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 19 13:07:07.759941 containerd[1646]: time="2026-01-19T13:07:07.759607258Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 19 13:07:07.760485 kubelet[2939]: E0119 13:07:07.760398 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 13:07:07.760804 kubelet[2939]: E0119 13:07:07.760688 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 19 13:07:07.761565 kubelet[2939]: E0119 13:07:07.761479 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fkt48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-tzctk_calico-system(9533a5f4-a04a-442d-b08c-488e8c9d1e7c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 19 13:07:07.763173 kubelet[2939]: E0119 13:07:07.763068 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-tzctk" podUID="9533a5f4-a04a-442d-b08c-488e8c9d1e7c" Jan 19 13:07:08.687000 audit[5462]: NETFILTER_CFG table=filter:143 family=2 entries=26 op=nft_register_rule pid=5462 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:07:08.687000 audit[5462]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe0e791c60 a2=0 a3=7ffe0e791c4c items=0 ppid=3085 pid=5462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:07:08.687000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:07:08.695000 audit[5462]: 
NETFILTER_CFG table=nat:144 family=2 entries=104 op=nft_register_chain pid=5462 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 19 13:07:08.695000 audit[5462]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe0e791c60 a2=0 a3=7ffe0e791c4c items=0 ppid=3085 pid=5462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:07:08.695000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 19 13:07:11.094691 kubelet[2939]: E0119 13:07:11.094632 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-nlspt" podUID="2f32ab2a-e7b2-4a72-8b17-d785aad340e2" Jan 19 13:07:12.558180 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 19 13:07:12.558430 kernel: audit: type=1130 audit(1768828032.539:900): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.243.74.46:22-68.220.241.50:33932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:07:12.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.243.74.46:22-68.220.241.50:33932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:07:12.541264 systemd[1]: Started sshd@29-10.243.74.46:22-68.220.241.50:33932.service - OpenSSH per-connection server daemon (68.220.241.50:33932). 
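For the SYSCALL records paired with the NETFILTER_CFG entries above, arch=c000003e identifies AUDIT_ARCH_X86_64 and syscall=46 is sendmsg, i.e. the netlink message with which xtables-nft-multi (iptables-restore in nft mode) pushes the rule set into the kernel; the sshd-session SYSCALL records earlier in the log use syscall=1 (write). A tiny sketch covering only the numbers that appear in this log:

    # Audit SYSCALL records log raw numbers; resolve the ones seen here for x86_64.
    X86_64_SYSCALLS = {1: "write", 46: "sendmsg"}

    for arch, nr in (("c000003e", 46), ("c000003e", 1)):
        print(arch, nr, X86_64_SYSCALLS[nr])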
Jan 19 13:07:13.098739 kubelet[2939]: E0119 13:07:13.098635 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-795b45d676-wvh8b" podUID="7526f50b-859a-4390-ae5e-37e152f03638" Jan 19 13:07:13.102931 sshd[5479]: Accepted publickey for core from 68.220.241.50 port 33932 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:07:13.115152 kernel: audit: type=1101 audit(1768828033.100:901): pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:13.100000 audit[5479]: USER_ACCT pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:13.119111 sshd-session[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:07:13.132256 kernel: audit: type=1103 audit(1768828033.112:902): pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:13.112000 audit[5479]: CRED_ACQ pid=5479 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:13.138959 kernel: audit: type=1006 audit(1768828033.113:903): pid=5479 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 19 13:07:13.113000 audit[5479]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfda0dc00 a2=3 a3=0 items=0 ppid=1 pid=5479 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:07:13.146255 kernel: audit: type=1300 audit(1768828033.113:903): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcfda0dc00 a2=3 a3=0 items=0 ppid=1 pid=5479 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:07:13.146376 kernel: audit: type=1327 audit(1768828033.113:903): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:07:13.113000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:07:13.154263 systemd-logind[1621]: New session 26 of user core. Jan 19 13:07:13.161211 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 19 13:07:13.168000 audit[5479]: USER_START pid=5479 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:13.177843 kernel: audit: type=1105 audit(1768828033.168:904): pid=5479 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:13.177000 audit[5483]: CRED_ACQ pid=5483 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:13.184891 kernel: audit: type=1103 audit(1768828033.177:905): pid=5483 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:13.567351 sshd[5483]: Connection closed by 68.220.241.50 port 33932 Jan 19 13:07:13.568180 sshd-session[5479]: pam_unix(sshd:session): session closed for user core Jan 19 13:07:13.571000 audit[5479]: USER_END pid=5479 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:13.583088 systemd[1]: sshd@29-10.243.74.46:22-68.220.241.50:33932.service: Deactivated successfully. Jan 19 13:07:13.585861 kernel: audit: type=1106 audit(1768828033.571:906): pid=5479 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:13.589471 systemd[1]: session-26.scope: Deactivated successfully. Jan 19 13:07:13.593036 systemd-logind[1621]: Session 26 logged out. Waiting for processes to exit. Jan 19 13:07:13.571000 audit[5479]: CRED_DISP pid=5479 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:13.600062 kernel: audit: type=1104 audit(1768828033.571:907): pid=5479 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:13.599800 systemd-logind[1621]: Removed session 26. 
Jan 19 13:07:13.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.243.74.46:22-68.220.241.50:33932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:07:16.095991 containerd[1646]: time="2026-01-19T13:07:16.095214126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 13:07:16.432389 containerd[1646]: time="2026-01-19T13:07:16.432308002Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:07:16.434884 containerd[1646]: time="2026-01-19T13:07:16.433792025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 13:07:16.435036 containerd[1646]: time="2026-01-19T13:07:16.433795023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 13:07:16.443763 kubelet[2939]: E0119 13:07:16.436507 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 13:07:16.443763 kubelet[2939]: E0119 13:07:16.443555 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 13:07:16.445107 kubelet[2939]: E0119 13:07:16.444992 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8ktl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bb946844-dx6t6_calico-apiserver(ed22f491-3777-46a9-8e11-3aad3f6a2fdc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 13:07:16.446635 kubelet[2939]: E0119 13:07:16.446545 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb946844-dx6t6" podUID="ed22f491-3777-46a9-8e11-3aad3f6a2fdc" Jan 19 13:07:18.671025 systemd[1]: Started sshd@30-10.243.74.46:22-68.220.241.50:33942.service - OpenSSH per-connection server daemon (68.220.241.50:33942). Jan 19 13:07:18.681676 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 19 13:07:18.681859 kernel: audit: type=1130 audit(1768828038.669:909): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.243.74.46:22-68.220.241.50:33942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:07:18.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.243.74.46:22-68.220.241.50:33942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 19 13:07:19.096995 containerd[1646]: time="2026-01-19T13:07:19.096194029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 19 13:07:19.223000 audit[5497]: USER_ACCT pid=5497 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:19.233573 sshd[5497]: Accepted publickey for core from 68.220.241.50 port 33942 ssh2: RSA SHA256:XH08H34FjEjRY26kmXoz5PK9rGc+M72aUjiCPhjaV7Q Jan 19 13:07:19.244469 kernel: audit: type=1101 audit(1768828039.223:910): pid=5497 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:19.244628 kernel: audit: type=1103 audit(1768828039.235:911): pid=5497 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:19.235000 audit[5497]: CRED_ACQ pid=5497 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:19.240913 sshd-session[5497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 19 13:07:19.250840 kernel: audit: type=1006 audit(1768828039.235:912): pid=5497 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 19 13:07:19.235000 audit[5497]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca7058520 a2=3 a3=0 items=0 ppid=1 pid=5497 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:07:19.256844 kernel: audit: type=1300 audit(1768828039.235:912): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca7058520 a2=3 a3=0 items=0 ppid=1 pid=5497 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 19 13:07:19.235000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:07:19.259840 kernel: audit: type=1327 audit(1768828039.235:912): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 19 13:07:19.263278 systemd-logind[1621]: New session 27 of user core. Jan 19 13:07:19.268383 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 19 13:07:19.276000 audit[5497]: USER_START pid=5497 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:19.285428 kernel: audit: type=1105 audit(1768828039.276:913): pid=5497 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:19.284000 audit[5501]: CRED_ACQ pid=5501 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:19.291858 kernel: audit: type=1103 audit(1768828039.284:914): pid=5501 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:19.476694 containerd[1646]: time="2026-01-19T13:07:19.476239566Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:07:19.481847 containerd[1646]: time="2026-01-19T13:07:19.480005998Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 19 13:07:19.482542 containerd[1646]: time="2026-01-19T13:07:19.482122647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 19 13:07:19.482917 kubelet[2939]: E0119 13:07:19.482807 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 13:07:19.484696 kubelet[2939]: E0119 13:07:19.483525 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 19 13:07:19.485890 kubelet[2939]: E0119 13:07:19.485722 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jfnhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-77b44585c9-kqfd8_calico-system(2a6c0b6c-6346-4ea5-adab-326e38e7dbe6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 19 13:07:19.487116 kubelet[2939]: E0119 13:07:19.486983 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-77b44585c9-kqfd8" podUID="2a6c0b6c-6346-4ea5-adab-326e38e7dbe6" Jan 19 13:07:19.744943 sshd[5501]: Connection closed by 68.220.241.50 port 33942 Jan 19 13:07:19.746129 sshd-session[5497]: pam_unix(sshd:session): session closed for user core Jan 19 13:07:19.749000 audit[5497]: USER_END 
pid=5497 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:19.761859 kernel: audit: type=1106 audit(1768828039.749:915): pid=5497 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:19.749000 audit[5497]: CRED_DISP pid=5497 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:19.769433 systemd[1]: sshd@30-10.243.74.46:22-68.220.241.50:33942.service: Deactivated successfully. Jan 19 13:07:19.772746 kernel: audit: type=1104 audit(1768828039.749:916): pid=5497 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 19 13:07:19.773450 systemd[1]: session-27.scope: Deactivated successfully. Jan 19 13:07:19.775440 systemd-logind[1621]: Session 27 logged out. Waiting for processes to exit. Jan 19 13:07:19.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.243.74.46:22-68.220.241.50:33942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 19 13:07:19.779776 systemd-logind[1621]: Removed session 27. 
Jan 19 13:07:20.098497 containerd[1646]: time="2026-01-19T13:07:20.097134455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 19 13:07:20.417646 containerd[1646]: time="2026-01-19T13:07:20.417264376Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 19 13:07:20.419046 containerd[1646]: time="2026-01-19T13:07:20.418989667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 19 13:07:20.419632 containerd[1646]: time="2026-01-19T13:07:20.419144773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 19 13:07:20.419935 kubelet[2939]: E0119 13:07:20.419852 2939 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 13:07:20.420494 kubelet[2939]: E0119 13:07:20.420377 2939 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 19 13:07:20.421006 kubelet[2939]: E0119 13:07:20.420872 2939 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c7rrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-77bb946844-22dld_calico-apiserver(207bff47-91b8-40f6-a83c-1de3cb3c792c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 19 13:07:20.422544 kubelet[2939]: E0119 13:07:20.422319 2939 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-77bb946844-22dld" podUID="207bff47-91b8-40f6-a83c-1de3cb3c792c"