Jan 14 13:35:07.253926 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 14 11:12:50 -00 2026 Jan 14 13:35:07.253967 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=166c426371167f765dd2026937f2932948c99d0fb4a3868a9b09e1eb4ef3a9c9 Jan 14 13:35:07.253981 kernel: BIOS-provided physical RAM map: Jan 14 13:35:07.253991 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 14 13:35:07.254005 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 14 13:35:07.254016 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 14 13:35:07.254028 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jan 14 13:35:07.254046 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jan 14 13:35:07.254057 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 14 13:35:07.254068 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 14 13:35:07.254079 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 14 13:35:07.254089 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 14 13:35:07.254100 kernel: NX (Execute Disable) protection: active Jan 14 13:35:07.254116 kernel: APIC: Static calls initialized Jan 14 13:35:07.254128 kernel: SMBIOS 2.8 present. Jan 14 13:35:07.254140 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jan 14 13:35:07.254152 kernel: DMI: Memory slots populated: 1/1 Jan 14 13:35:07.254179 kernel: Hypervisor detected: KVM Jan 14 13:35:07.254190 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 14 13:35:07.254201 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 14 13:35:07.254213 kernel: kvm-clock: using sched offset of 4994946115 cycles Jan 14 13:35:07.254225 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 14 13:35:07.254249 kernel: tsc: Detected 2799.998 MHz processor Jan 14 13:35:07.254261 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 14 13:35:07.254273 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 14 13:35:07.254288 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 14 13:35:07.254300 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 14 13:35:07.254312 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 14 13:35:07.254324 kernel: Using GB pages for direct mapping Jan 14 13:35:07.254348 kernel: ACPI: Early table checksum verification disabled Jan 14 13:35:07.254359 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jan 14 13:35:07.254371 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 13:35:07.254382 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 13:35:07.254410 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 13:35:07.254422 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jan 14 13:35:07.254434 kernel: ACPI: APIC 
0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 13:35:07.254445 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 13:35:07.254457 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 13:35:07.254469 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 13:35:07.254481 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jan 14 13:35:07.254501 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jan 14 13:35:07.254513 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jan 14 13:35:07.254525 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jan 14 13:35:07.254538 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jan 14 13:35:07.254554 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jan 14 13:35:07.254566 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jan 14 13:35:07.254589 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 14 13:35:07.254601 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 14 13:35:07.254612 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jan 14 13:35:07.254624 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Jan 14 13:35:07.254635 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Jan 14 13:35:07.254650 kernel: Zone ranges: Jan 14 13:35:07.254662 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 14 13:35:07.254673 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jan 14 13:35:07.254685 kernel: Normal empty Jan 14 13:35:07.254696 kernel: Device empty Jan 14 13:35:07.254708 kernel: Movable zone start for each node Jan 14 13:35:07.254719 kernel: Early memory node ranges Jan 14 13:35:07.254730 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 14 13:35:07.254745 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jan 14 13:35:07.261505 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jan 14 13:35:07.261525 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 14 13:35:07.261538 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 14 13:35:07.261551 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jan 14 13:35:07.261563 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 14 13:35:07.261582 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 14 13:35:07.261603 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 14 13:35:07.261615 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 14 13:35:07.261628 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 14 13:35:07.261640 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 14 13:35:07.261652 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 14 13:35:07.261665 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 14 13:35:07.261677 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 14 13:35:07.261693 kernel: TSC deadline timer available Jan 14 13:35:07.261705 kernel: CPU topo: Max. logical packages: 16 Jan 14 13:35:07.261718 kernel: CPU topo: Max. logical dies: 16 Jan 14 13:35:07.261730 kernel: CPU topo: Max. dies per package: 1 Jan 14 13:35:07.261742 kernel: CPU topo: Max. 
threads per core: 1 Jan 14 13:35:07.261777 kernel: CPU topo: Num. cores per package: 1 Jan 14 13:35:07.261791 kernel: CPU topo: Num. threads per package: 1 Jan 14 13:35:07.261803 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Jan 14 13:35:07.261821 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 14 13:35:07.261833 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 14 13:35:07.261845 kernel: Booting paravirtualized kernel on KVM Jan 14 13:35:07.261857 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 14 13:35:07.261870 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 14 13:35:07.261882 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Jan 14 13:35:07.261894 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Jan 14 13:35:07.261911 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 14 13:35:07.261923 kernel: kvm-guest: PV spinlocks enabled Jan 14 13:35:07.261935 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 14 13:35:07.261949 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=166c426371167f765dd2026937f2932948c99d0fb4a3868a9b09e1eb4ef3a9c9 Jan 14 13:35:07.261961 kernel: random: crng init done Jan 14 13:35:07.261974 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 14 13:35:07.261986 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 14 13:35:07.262002 kernel: Fallback order for Node 0: 0 Jan 14 13:35:07.262014 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Jan 14 13:35:07.262027 kernel: Policy zone: DMA32 Jan 14 13:35:07.262039 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 13:35:07.262051 kernel: software IO TLB: area num 16. Jan 14 13:35:07.262064 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 14 13:35:07.262076 kernel: Kernel/User page tables isolation: enabled Jan 14 13:35:07.262092 kernel: ftrace: allocating 40128 entries in 157 pages Jan 14 13:35:07.262104 kernel: ftrace: allocated 157 pages with 5 groups Jan 14 13:35:07.262117 kernel: Dynamic Preempt: voluntary Jan 14 13:35:07.262129 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 13:35:07.262142 kernel: rcu: RCU event tracing is enabled. Jan 14 13:35:07.262154 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 14 13:35:07.262166 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 13:35:07.262183 kernel: Rude variant of Tasks RCU enabled. Jan 14 13:35:07.262195 kernel: Tracing variant of Tasks RCU enabled. Jan 14 13:35:07.262208 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 13:35:07.262220 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 14 13:35:07.262232 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 14 13:35:07.262244 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Jan 14 13:35:07.262256 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 14 13:35:07.262269 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jan 14 13:35:07.262285 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 14 13:35:07.262308 kernel: Console: colour VGA+ 80x25 Jan 14 13:35:07.262325 kernel: printk: legacy console [tty0] enabled Jan 14 13:35:07.262338 kernel: printk: legacy console [ttyS0] enabled Jan 14 13:35:07.262356 kernel: ACPI: Core revision 20240827 Jan 14 13:35:07.262370 kernel: APIC: Switch to symmetric I/O mode setup Jan 14 13:35:07.262382 kernel: x2apic enabled Jan 14 13:35:07.262395 kernel: APIC: Switched APIC routing to: physical x2apic Jan 14 13:35:07.262409 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 14 13:35:07.262426 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Jan 14 13:35:07.262439 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 14 13:35:07.262452 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 14 13:35:07.262465 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 14 13:35:07.262481 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 14 13:35:07.262494 kernel: Spectre V2 : Mitigation: Retpolines Jan 14 13:35:07.262506 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 14 13:35:07.262519 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 14 13:35:07.262532 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 14 13:35:07.262544 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 14 13:35:07.262556 kernel: MDS: Mitigation: Clear CPU buffers Jan 14 13:35:07.262569 kernel: MMIO Stale Data: Unknown: No mitigations Jan 14 13:35:07.262581 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 14 13:35:07.262593 kernel: active return thunk: its_return_thunk Jan 14 13:35:07.262606 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 14 13:35:07.262622 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 14 13:35:07.262635 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 14 13:35:07.262648 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 14 13:35:07.262660 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 14 13:35:07.262672 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 14 13:35:07.262685 kernel: Freeing SMP alternatives memory: 32K Jan 14 13:35:07.262697 kernel: pid_max: default: 32768 minimum: 301 Jan 14 13:35:07.262710 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 14 13:35:07.262722 kernel: landlock: Up and running. Jan 14 13:35:07.262734 kernel: SELinux: Initializing. Jan 14 13:35:07.262764 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 14 13:35:07.262787 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 14 13:35:07.262799 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jan 14 13:35:07.262812 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. 
Jan 14 13:35:07.262825 kernel: signal: max sigframe size: 1776 Jan 14 13:35:07.262838 kernel: rcu: Hierarchical SRCU implementation. Jan 14 13:35:07.262851 kernel: rcu: Max phase no-delay instances is 400. Jan 14 13:35:07.262864 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Jan 14 13:35:07.262883 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 14 13:35:07.262896 kernel: smp: Bringing up secondary CPUs ... Jan 14 13:35:07.262908 kernel: smpboot: x86: Booting SMP configuration: Jan 14 13:35:07.262921 kernel: .... node #0, CPUs: #1 Jan 14 13:35:07.262934 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 13:35:07.262946 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Jan 14 13:35:07.262960 kernel: Memory: 1912060K/2096616K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 178540K reserved, 0K cma-reserved) Jan 14 13:35:07.262977 kernel: devtmpfs: initialized Jan 14 13:35:07.262990 kernel: x86/mm: Memory block size: 128MB Jan 14 13:35:07.263003 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 13:35:07.263016 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 14 13:35:07.263029 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 13:35:07.263042 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 13:35:07.263054 kernel: audit: initializing netlink subsys (disabled) Jan 14 13:35:07.263071 kernel: audit: type=2000 audit(1768397703.715:1): state=initialized audit_enabled=0 res=1 Jan 14 13:35:07.263084 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 13:35:07.263097 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 14 13:35:07.263110 kernel: cpuidle: using governor menu Jan 14 13:35:07.263123 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 13:35:07.263135 kernel: dca service started, version 1.12.1 Jan 14 13:35:07.263154 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 14 13:35:07.263172 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 14 13:35:07.263185 kernel: PCI: Using configuration type 1 for base access Jan 14 13:35:07.263198 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 14 13:35:07.263210 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 13:35:07.263223 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 13:35:07.263236 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 13:35:07.263249 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 13:35:07.263265 kernel: ACPI: Added _OSI(Module Device) Jan 14 13:35:07.263279 kernel: ACPI: Added _OSI(Processor Device) Jan 14 13:35:07.263291 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 13:35:07.263304 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 13:35:07.263329 kernel: ACPI: Interpreter enabled Jan 14 13:35:07.263341 kernel: ACPI: PM: (supports S0 S5) Jan 14 13:35:07.263353 kernel: ACPI: Using IOAPIC for interrupt routing Jan 14 13:35:07.263366 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 14 13:35:07.263382 kernel: PCI: Using E820 reservations for host bridge windows Jan 14 13:35:07.263407 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 14 13:35:07.263420 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 14 13:35:07.265820 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 14 13:35:07.266071 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 14 13:35:07.266294 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 14 13:35:07.266322 kernel: PCI host bridge to bus 0000:00 Jan 14 13:35:07.266556 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 14 13:35:07.266783 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 14 13:35:07.266987 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 14 13:35:07.267183 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 14 13:35:07.267386 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 14 13:35:07.267581 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jan 14 13:35:07.270721 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 14 13:35:07.271090 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 14 13:35:07.271337 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Jan 14 13:35:07.271556 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Jan 14 13:35:07.271819 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Jan 14 13:35:07.272068 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Jan 14 13:35:07.272301 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 14 13:35:07.272579 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 13:35:07.277898 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Jan 14 13:35:07.278304 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 14 13:35:07.279856 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 14 13:35:07.280087 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 14 13:35:07.280368 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 13:35:07.280596 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Jan 14 13:35:07.281582 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 14 
13:35:07.286733 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 14 13:35:07.287042 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 14 13:35:07.287302 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 13:35:07.287523 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Jan 14 13:35:07.287740 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 14 13:35:07.288010 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 14 13:35:07.288247 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 14 13:35:07.288502 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 13:35:07.288722 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Jan 14 13:35:07.289066 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 14 13:35:07.289279 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 14 13:35:07.289516 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 14 13:35:07.289763 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 13:35:07.290031 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Jan 14 13:35:07.290254 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 14 13:35:07.290492 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 14 13:35:07.290728 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 14 13:35:07.292013 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 13:35:07.292262 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Jan 14 13:35:07.292475 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 14 13:35:07.292697 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 14 13:35:07.295069 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 14 13:35:07.295334 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 13:35:07.295560 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Jan 14 13:35:07.297559 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 14 13:35:07.298279 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 14 13:35:07.298526 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 14 13:35:07.298762 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 13:35:07.299029 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Jan 14 13:35:07.299269 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 14 13:35:07.299483 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 14 13:35:07.299700 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 14 13:35:07.301134 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 14 13:35:07.301364 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Jan 14 13:35:07.301580 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Jan 14 13:35:07.301852 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Jan 14 13:35:07.302071 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Jan 14 13:35:07.302352 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 14 13:35:07.302576 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Jan 14 13:35:07.302891 kernel: pci 0000:00:04.0: BAR 1 
[mem 0xfea5a000-0xfea5afff] Jan 14 13:35:07.303114 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Jan 14 13:35:07.303359 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 14 13:35:07.303598 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 14 13:35:07.303873 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 14 13:35:07.304093 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Jan 14 13:35:07.304306 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Jan 14 13:35:07.304539 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 14 13:35:07.304807 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 14 13:35:07.305059 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jan 14 13:35:07.305277 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Jan 14 13:35:07.305492 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 14 13:35:07.305708 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 14 13:35:07.305962 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 14 13:35:07.306196 kernel: pci_bus 0000:02: extended config space not accessible Jan 14 13:35:07.306439 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Jan 14 13:35:07.306664 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Jan 14 13:35:07.306936 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 14 13:35:07.307183 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Jan 14 13:35:07.307439 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Jan 14 13:35:07.307683 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 14 13:35:07.308050 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 14 13:35:07.308284 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Jan 14 13:35:07.308503 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 14 13:35:07.308748 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 14 13:35:07.309005 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 14 13:35:07.309237 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 14 13:35:07.309455 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 14 13:35:07.309677 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 14 13:35:07.309705 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 14 13:35:07.309720 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 14 13:35:07.309733 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 14 13:35:07.309780 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 14 13:35:07.309798 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 14 13:35:07.309821 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 14 13:35:07.309836 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 14 13:35:07.309856 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 14 13:35:07.309870 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 14 13:35:07.309883 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 14 13:35:07.309896 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 14 13:35:07.309909 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 14 13:35:07.309922 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 
14 13:35:07.309936 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 14 13:35:07.309953 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 14 13:35:07.309967 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 14 13:35:07.309980 kernel: iommu: Default domain type: Translated Jan 14 13:35:07.309993 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 14 13:35:07.310006 kernel: PCI: Using ACPI for IRQ routing Jan 14 13:35:07.310019 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 14 13:35:07.310033 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 14 13:35:07.310046 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jan 14 13:35:07.310278 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 14 13:35:07.310507 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 14 13:35:07.310720 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 14 13:35:07.310750 kernel: vgaarb: loaded Jan 14 13:35:07.310763 kernel: clocksource: Switched to clocksource kvm-clock Jan 14 13:35:07.310811 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 13:35:07.310832 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 13:35:07.310845 kernel: pnp: PnP ACPI init Jan 14 13:35:07.311118 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 14 13:35:07.311141 kernel: pnp: PnP ACPI: found 5 devices Jan 14 13:35:07.311155 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 14 13:35:07.311181 kernel: NET: Registered PF_INET protocol family Jan 14 13:35:07.311194 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 14 13:35:07.311214 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 14 13:35:07.311239 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 13:35:07.311252 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 13:35:07.311264 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 14 13:35:07.311277 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 14 13:35:07.311302 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 14 13:35:07.311314 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 14 13:35:07.311339 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 13:35:07.311351 kernel: NET: Registered PF_XDP protocol family Jan 14 13:35:07.311573 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jan 14 13:35:07.311825 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 14 13:35:07.312040 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 14 13:35:07.312306 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 14 13:35:07.312552 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 14 13:35:07.312801 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 14 13:35:07.313017 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 14 13:35:07.313266 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 14 13:35:07.313488 kernel: pci 0000:00:02.0: bridge window [io 
0x1000-0x1fff]: assigned Jan 14 13:35:07.313703 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Jan 14 13:35:07.313989 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Jan 14 13:35:07.314210 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Jan 14 13:35:07.314451 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Jan 14 13:35:07.314664 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Jan 14 13:35:07.314905 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Jan 14 13:35:07.315117 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Jan 14 13:35:07.315349 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 14 13:35:07.315610 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 14 13:35:07.315858 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 14 13:35:07.316083 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 14 13:35:07.316297 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 14 13:35:07.316520 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 14 13:35:07.316731 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 14 13:35:07.316974 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 14 13:35:07.317199 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 14 13:35:07.317448 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 14 13:35:07.317654 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 14 13:35:07.317934 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 14 13:35:07.318161 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 14 13:35:07.318412 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 14 13:35:07.318651 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 14 13:35:07.318905 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 14 13:35:07.319119 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 14 13:35:07.319367 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 14 13:35:07.319615 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 14 13:35:07.319893 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 14 13:35:07.320130 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 14 13:35:07.320357 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 14 13:35:07.320589 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 14 13:35:07.320827 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 14 13:35:07.321060 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 14 13:35:07.321275 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 14 13:35:07.321498 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 14 13:35:07.321718 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 14 13:35:07.321984 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 14 13:35:07.322198 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 14 13:35:07.322410 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 14 13:35:07.322647 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 14 13:35:07.322898 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 14 13:35:07.323111 kernel: pci 
0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 14 13:35:07.323320 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 14 13:35:07.323534 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 14 13:35:07.323738 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 14 13:35:07.323983 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 14 13:35:07.324187 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 14 13:35:07.324392 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jan 14 13:35:07.324606 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 14 13:35:07.324865 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jan 14 13:35:07.325079 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 14 13:35:07.325307 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 14 13:35:07.325560 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jan 14 13:35:07.325740 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 14 13:35:07.326003 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 14 13:35:07.326237 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jan 14 13:35:07.326440 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 14 13:35:07.326672 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 14 13:35:07.326928 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 14 13:35:07.327162 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 14 13:35:07.327416 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 14 13:35:07.327655 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jan 14 13:35:07.327913 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 14 13:35:07.328147 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 14 13:35:07.328373 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jan 14 13:35:07.328596 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 14 13:35:07.328849 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 14 13:35:07.329083 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 14 13:35:07.329311 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 14 13:35:07.329548 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 14 13:35:07.329824 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 14 13:35:07.330025 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 14 13:35:07.330246 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 14 13:35:07.330278 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 14 13:35:07.330291 kernel: PCI: CLS 0 bytes, default 64 Jan 14 13:35:07.330311 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 14 13:35:07.330337 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 14 13:35:07.330350 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 14 13:35:07.330363 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 14 13:35:07.330382 kernel: Initialise system trusted keyrings Jan 14 13:35:07.330444 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 14 13:35:07.330458 
kernel: Key type asymmetric registered Jan 14 13:35:07.330491 kernel: Asymmetric key parser 'x509' registered Jan 14 13:35:07.330505 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 14 13:35:07.330519 kernel: io scheduler mq-deadline registered Jan 14 13:35:07.330533 kernel: io scheduler kyber registered Jan 14 13:35:07.330546 kernel: io scheduler bfq registered Jan 14 13:35:07.330828 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 14 13:35:07.331047 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 14 13:35:07.331316 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 13:35:07.331529 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 14 13:35:07.331802 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 14 13:35:07.332021 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 13:35:07.332262 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 14 13:35:07.332500 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 14 13:35:07.332711 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 13:35:07.332976 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 14 13:35:07.333191 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 14 13:35:07.333404 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 13:35:07.333624 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 14 13:35:07.333881 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 14 13:35:07.334096 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 13:35:07.334308 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 14 13:35:07.334520 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 14 13:35:07.334742 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 13:35:07.334984 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 14 13:35:07.335196 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 14 13:35:07.335409 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 13:35:07.335622 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 14 13:35:07.335887 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 14 13:35:07.336113 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 13:35:07.336133 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 14 13:35:07.336148 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 14 13:35:07.336162 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 14 13:35:07.336176 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 13:35:07.336209 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 13:35:07.336223 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 14 
13:35:07.336237 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 14 13:35:07.336251 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 14 13:35:07.336265 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 14 13:35:07.336480 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 14 13:35:07.336699 kernel: rtc_cmos 00:03: registered as rtc0 Jan 14 13:35:07.336945 kernel: rtc_cmos 00:03: setting system clock to 2026-01-14T13:35:05 UTC (1768397705) Jan 14 13:35:07.337150 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 14 13:35:07.337177 kernel: intel_pstate: CPU model not supported Jan 14 13:35:07.337192 kernel: NET: Registered PF_INET6 protocol family Jan 14 13:35:07.337206 kernel: Segment Routing with IPv6 Jan 14 13:35:07.337220 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 13:35:07.337238 kernel: NET: Registered PF_PACKET protocol family Jan 14 13:35:07.337252 kernel: Key type dns_resolver registered Jan 14 13:35:07.337265 kernel: IPI shorthand broadcast: enabled Jan 14 13:35:07.337279 kernel: sched_clock: Marking stable (2186039523, 221480775)->(2527988994, -120468696) Jan 14 13:35:07.337293 kernel: registered taskstats version 1 Jan 14 13:35:07.337307 kernel: Loading compiled-in X.509 certificates Jan 14 13:35:07.337320 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: e8d0aa6f955c6f54d5fb15cad90d0ea8c698688e' Jan 14 13:35:07.337334 kernel: Demotion targets for Node 0: null Jan 14 13:35:07.337353 kernel: Key type .fscrypt registered Jan 14 13:35:07.337366 kernel: Key type fscrypt-provisioning registered Jan 14 13:35:07.337380 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 13:35:07.337394 kernel: ima: Allocated hash algorithm: sha1 Jan 14 13:35:07.337408 kernel: ima: No architecture policies found Jan 14 13:35:07.337421 kernel: clk: Disabling unused clocks Jan 14 13:35:07.337435 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 14 13:35:07.337453 kernel: Write protecting the kernel read-only data: 47104k Jan 14 13:35:07.337467 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 14 13:35:07.337481 kernel: Run /init as init process Jan 14 13:35:07.337494 kernel: with arguments: Jan 14 13:35:07.337508 kernel: /init Jan 14 13:35:07.337522 kernel: with environment: Jan 14 13:35:07.337535 kernel: HOME=/ Jan 14 13:35:07.337553 kernel: TERM=linux Jan 14 13:35:07.337567 kernel: ACPI: bus type USB registered Jan 14 13:35:07.337581 kernel: usbcore: registered new interface driver usbfs Jan 14 13:35:07.337595 kernel: usbcore: registered new interface driver hub Jan 14 13:35:07.337608 kernel: usbcore: registered new device driver usb Jan 14 13:35:07.337877 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 14 13:35:07.338105 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 14 13:35:07.338348 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 14 13:35:07.338582 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 14 13:35:07.338828 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 14 13:35:07.339059 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 14 13:35:07.339327 kernel: hub 1-0:1.0: USB hub found Jan 14 13:35:07.339568 kernel: hub 1-0:1.0: 4 ports detected Jan 14 13:35:07.339873 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 14 13:35:07.340147 kernel: hub 2-0:1.0: USB hub found Jan 14 13:35:07.340408 kernel: hub 2-0:1.0: 4 ports detected Jan 14 13:35:07.340442 kernel: SCSI subsystem initialized Jan 14 13:35:07.340456 kernel: libata version 3.00 loaded. Jan 14 13:35:07.340688 kernel: ahci 0000:00:1f.2: version 3.0 Jan 14 13:35:07.340710 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 14 13:35:07.340947 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 14 13:35:07.341160 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 14 13:35:07.341393 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 14 13:35:07.341651 kernel: scsi host0: ahci Jan 14 13:35:07.341936 kernel: scsi host1: ahci Jan 14 13:35:07.342185 kernel: scsi host2: ahci Jan 14 13:35:07.342432 kernel: scsi host3: ahci Jan 14 13:35:07.342686 kernel: scsi host4: ahci Jan 14 13:35:07.342977 kernel: scsi host5: ahci Jan 14 13:35:07.343007 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 lpm-pol 1 Jan 14 13:35:07.343022 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 lpm-pol 1 Jan 14 13:35:07.343048 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 lpm-pol 1 Jan 14 13:35:07.343061 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 lpm-pol 1 Jan 14 13:35:07.343078 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 lpm-pol 1 Jan 14 13:35:07.343092 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 lpm-pol 1 Jan 14 13:35:07.343329 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 14 13:35:07.343356 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 13:35:07.343368 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 14 13:35:07.343393 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 14 13:35:07.343405 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 14 13:35:07.343417 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 14 13:35:07.343428 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 14 13:35:07.343440 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 14 13:35:07.343668 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 14 13:35:07.343688 kernel: usbcore: registered new interface driver usbhid Jan 14 13:35:07.343949 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 14 13:35:07.343970 kernel: usbhid: USB HID core driver Jan 14 13:35:07.343984 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 14 13:35:07.343998 kernel: GPT:25804799 != 125829119 Jan 14 13:35:07.344018 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 14 13:35:07.344043 kernel: GPT:25804799 != 125829119 Jan 14 13:35:07.344055 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 14 13:35:07.344067 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 14 13:35:07.344080 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 14 13:35:07.344353 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 14 13:35:07.344378 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 14 13:35:07.344390 kernel: device-mapper: uevent: version 1.0.3 Jan 14 13:35:07.344402 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 13:35:07.344414 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 14 13:35:07.344426 kernel: raid6: sse2x4 gen() 13612 MB/s Jan 14 13:35:07.344438 kernel: raid6: sse2x2 gen() 9266 MB/s Jan 14 13:35:07.344451 kernel: raid6: sse2x1 gen() 9672 MB/s Jan 14 13:35:07.344466 kernel: raid6: using algorithm sse2x4 gen() 13612 MB/s Jan 14 13:35:07.344479 kernel: raid6: .... xor() 8108 MB/s, rmw enabled Jan 14 13:35:07.344491 kernel: raid6: using ssse3x2 recovery algorithm Jan 14 13:35:07.344515 kernel: xor: automatically using best checksumming function avx Jan 14 13:35:07.344527 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 13:35:07.344540 kernel: BTRFS: device fsid a2d7d9b8-1cc4-4aa6-91f7-011fd4658df9 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (194) Jan 14 13:35:07.344552 kernel: BTRFS info (device dm-0): first mount of filesystem a2d7d9b8-1cc4-4aa6-91f7-011fd4658df9 Jan 14 13:35:07.344581 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:35:07.344594 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 13:35:07.344607 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 13:35:07.344632 kernel: loop: module loaded Jan 14 13:35:07.344645 kernel: loop0: detected capacity change from 0 to 100536 Jan 14 13:35:07.344657 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 13:35:07.344672 systemd[1]: Successfully made /usr/ read-only. Jan 14 13:35:07.344707 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 13:35:07.344722 systemd[1]: Detected virtualization kvm. Jan 14 13:35:07.344735 systemd[1]: Detected architecture x86-64. Jan 14 13:35:07.344762 systemd[1]: Running in initrd. Jan 14 13:35:07.344803 systemd[1]: No hostname configured, using default hostname. Jan 14 13:35:07.344825 systemd[1]: Hostname set to . Jan 14 13:35:07.344840 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 13:35:07.344854 systemd[1]: Queued start job for default target initrd.target. Jan 14 13:35:07.344869 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 13:35:07.344883 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:35:07.344898 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:35:07.344913 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 13:35:07.344932 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 13:35:07.344948 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 13:35:07.344963 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 13:35:07.344977 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 14 13:35:07.344992 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:35:07.345006 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 13:35:07.345025 systemd[1]: Reached target paths.target - Path Units. Jan 14 13:35:07.345040 systemd[1]: Reached target slices.target - Slice Units. Jan 14 13:35:07.345073 systemd[1]: Reached target swap.target - Swaps. Jan 14 13:35:07.345087 systemd[1]: Reached target timers.target - Timer Units. Jan 14 13:35:07.345101 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:35:07.345115 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 13:35:07.345129 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 13:35:07.345148 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 13:35:07.345175 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 14 13:35:07.345188 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:35:07.345207 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 13:35:07.345220 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:35:07.345233 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 13:35:07.345250 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 13:35:07.345263 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 13:35:07.345276 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 13:35:07.345289 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 13:35:07.345321 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 13:35:07.345335 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 13:35:07.345348 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 13:35:07.345378 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 13:35:07.345392 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:35:07.345405 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 13:35:07.345418 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:35:07.345435 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 13:35:07.345502 systemd-journald[329]: Collecting audit messages is enabled. Jan 14 13:35:07.345538 kernel: audit: type=1130 audit(1768397707.254:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.345557 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 13:35:07.345571 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 13:35:07.345584 kernel: Bridge firewalling registered Jan 14 13:35:07.345597 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 14 13:35:07.345611 systemd-journald[329]: Journal started Jan 14 13:35:07.345652 systemd-journald[329]: Runtime Journal (/run/log/journal/bdf5aa844bce4af785f5e8f86417499a) is 4.7M, max 37.7M, 33M free. Jan 14 13:35:07.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.299831 systemd-modules-load[332]: Inserted module 'br_netfilter' Jan 14 13:35:07.376847 kernel: audit: type=1130 audit(1768397707.374:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.376894 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 13:35:07.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.386813 kernel: audit: type=1130 audit(1768397707.381:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.386956 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:35:07.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.388645 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:35:07.400422 kernel: audit: type=1130 audit(1768397707.386:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.400454 kernel: audit: type=1130 audit(1768397707.393:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.400944 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 13:35:07.402996 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:35:07.407981 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 13:35:07.411946 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 13:35:07.431495 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:35:07.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:35:07.440481 kernel: audit: type=1130 audit(1768397707.432:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.441990 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 13:35:07.439000 audit: BPF prog-id=6 op=LOAD Jan 14 13:35:07.446796 kernel: audit: type=1334 audit(1768397707.439:8): prog-id=6 op=LOAD Jan 14 13:35:07.447827 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:35:07.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.454796 systemd-tmpfiles[351]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 13:35:07.457805 kernel: audit: type=1130 audit(1768397707.451:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.462024 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 13:35:07.473619 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:35:07.482909 kernel: audit: type=1130 audit(1768397707.476:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.481923 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:35:07.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.487799 dracut-cmdline[367]: dracut-109 Jan 14 13:35:07.492789 dracut-cmdline[367]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=166c426371167f765dd2026937f2932948c99d0fb4a3868a9b09e1eb4ef3a9c9 Jan 14 13:35:07.541037 systemd-resolved[365]: Positive Trust Anchors: Jan 14 13:35:07.541061 systemd-resolved[365]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 13:35:07.541068 systemd-resolved[365]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 13:35:07.541111 systemd-resolved[365]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 13:35:07.577501 systemd-resolved[365]: Defaulting to hostname 'linux'. Jan 14 13:35:07.579952 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 13:35:07.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.582045 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:35:07.644813 kernel: Loading iSCSI transport class v2.0-870. Jan 14 13:35:07.662787 kernel: iscsi: registered transport (tcp) Jan 14 13:35:07.690802 kernel: iscsi: registered transport (qla4xxx) Jan 14 13:35:07.690896 kernel: QLogic iSCSI HBA Driver Jan 14 13:35:07.727102 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 13:35:07.758632 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 13:35:07.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.761715 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 13:35:07.826468 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 13:35:07.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.830291 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 13:35:07.832924 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 13:35:07.875463 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:35:07.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.879000 audit: BPF prog-id=7 op=LOAD Jan 14 13:35:07.879000 audit: BPF prog-id=8 op=LOAD Jan 14 13:35:07.881630 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:35:07.916436 systemd-udevd[600]: Using default interface naming scheme 'v257'. Jan 14 13:35:07.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.933500 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 14 13:35:07.940124 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 13:35:07.970836 dracut-pre-trigger[667]: rd.md=0: removing MD RAID activation Jan 14 13:35:07.987671 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:35:07.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:07.989000 audit: BPF prog-id=9 op=LOAD Jan 14 13:35:07.992950 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 13:35:08.017224 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:35:08.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:08.021408 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:35:08.054822 systemd-networkd[725]: lo: Link UP Jan 14 13:35:08.054834 systemd-networkd[725]: lo: Gained carrier Jan 14 13:35:08.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:08.057560 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:35:08.058365 systemd[1]: Reached target network.target - Network. Jan 14 13:35:08.177103 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:35:08.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:08.180957 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 13:35:08.290820 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 14 13:35:08.308283 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 14 13:35:08.325399 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 13:35:08.340584 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 14 13:35:08.369118 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 13:35:08.389066 disk-uuid[770]: Primary Header is updated. Jan 14 13:35:08.389066 disk-uuid[770]: Secondary Entries is updated. Jan 14 13:35:08.389066 disk-uuid[770]: Secondary Header is updated. Jan 14 13:35:08.449922 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 13:35:08.486141 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:35:08.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:08.487957 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 14 13:35:08.496103 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 14 13:35:08.496139 kernel: audit: type=1131 audit(1768397708.487:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:08.494851 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:35:08.512076 kernel: AES CTR mode by8 optimization enabled Jan 14 13:35:08.512205 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 14 13:35:08.508416 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:35:08.601691 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 13:35:08.601707 systemd-networkd[725]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:35:08.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:08.605770 systemd-networkd[725]: eth0: Link UP Jan 14 13:35:08.652360 kernel: audit: type=1130 audit(1768397708.646:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:08.609127 systemd-networkd[725]: eth0: Gained carrier Jan 14 13:35:08.609144 systemd-networkd[725]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 13:35:08.624837 systemd-networkd[725]: eth0: DHCPv4 address 10.230.49.6/30, gateway 10.230.49.5 acquired from 10.230.49.5 Jan 14 13:35:08.645584 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:35:08.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:08.706605 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 13:35:08.713152 kernel: audit: type=1130 audit(1768397708.706:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:08.708468 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:35:08.713839 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:35:08.715513 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 13:35:08.718458 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 13:35:08.749114 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:35:08.755509 kernel: audit: type=1130 audit(1768397708.749:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:08.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:35:09.439900 disk-uuid[771]: Warning: The kernel is still using the old partition table. Jan 14 13:35:09.439900 disk-uuid[771]: The new table will be used at the next reboot or after you Jan 14 13:35:09.439900 disk-uuid[771]: run partprobe(8) or kpartx(8) Jan 14 13:35:09.439900 disk-uuid[771]: The operation has completed successfully. Jan 14 13:35:09.447079 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 13:35:09.447302 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 13:35:09.458443 kernel: audit: type=1130 audit(1768397709.447:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:09.458492 kernel: audit: type=1131 audit(1768397709.448:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:09.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:09.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:09.450955 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 13:35:09.493781 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (854) Jan 14 13:35:09.504646 kernel: BTRFS info (device vda6): first mount of filesystem bc594bac-1fbf-41b0-97ef-4b225e86c0fe Jan 14 13:35:09.504885 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:35:09.510347 kernel: BTRFS info (device vda6): turning on async discard Jan 14 13:35:09.510395 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 13:35:09.518799 kernel: BTRFS info (device vda6): last unmount of filesystem bc594bac-1fbf-41b0-97ef-4b225e86c0fe Jan 14 13:35:09.519774 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 13:35:09.526104 kernel: audit: type=1130 audit(1768397709.519:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:09.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:09.523934 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 13:35:09.740742 ignition[873]: Ignition 2.24.0 Jan 14 13:35:09.740785 ignition[873]: Stage: fetch-offline Jan 14 13:35:09.743316 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:35:09.750260 kernel: audit: type=1130 audit(1768397709.743:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:09.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:35:09.740865 ignition[873]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:35:09.747927 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 14 13:35:09.740887 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 13:35:09.741055 ignition[873]: parsed url from cmdline: "" Jan 14 13:35:09.741062 ignition[873]: no config URL provided Jan 14 13:35:09.741203 ignition[873]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:35:09.741223 ignition[873]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:35:09.741241 ignition[873]: failed to fetch config: resource requires networking Jan 14 13:35:09.741495 ignition[873]: Ignition finished successfully Jan 14 13:35:09.779673 ignition[881]: Ignition 2.24.0 Jan 14 13:35:09.779692 ignition[881]: Stage: fetch Jan 14 13:35:09.780981 ignition[881]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:35:09.781010 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 13:35:09.781172 ignition[881]: parsed url from cmdline: "" Jan 14 13:35:09.781178 ignition[881]: no config URL provided Jan 14 13:35:09.781194 ignition[881]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 13:35:09.781207 ignition[881]: no config at "/usr/lib/ignition/user.ign" Jan 14 13:35:09.781987 ignition[881]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 14 13:35:09.782016 ignition[881]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 14 13:35:09.782239 ignition[881]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 14 13:35:09.797792 ignition[881]: GET result: OK Jan 14 13:35:09.798252 ignition[881]: parsing config with SHA512: 3d337aaf8c5f6a9db9bf6da6ab30602d8c54437d01ea051aa99a7b5b5011c919b40cba054678e54a248d6147fabdc531fb2e029a59a0afce6a9f7fdd3716d831 Jan 14 13:35:09.808133 unknown[881]: fetched base config from "system" Jan 14 13:35:09.808152 unknown[881]: fetched base config from "system" Jan 14 13:35:09.808544 ignition[881]: fetch: fetch complete Jan 14 13:35:09.808161 unknown[881]: fetched user config from "openstack" Jan 14 13:35:09.808552 ignition[881]: fetch: fetch passed Jan 14 13:35:09.818094 kernel: audit: type=1130 audit(1768397709.812:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:09.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:09.811815 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 13:35:09.808618 ignition[881]: Ignition finished successfully Jan 14 13:35:09.815924 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 13:35:09.849312 ignition[887]: Ignition 2.24.0 Jan 14 13:35:09.849332 ignition[887]: Stage: kargs Jan 14 13:35:09.849589 ignition[887]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:35:09.849606 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 13:35:09.852938 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 13:35:09.859901 kernel: audit: type=1130 audit(1768397709.853:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:35:09.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:09.850946 ignition[887]: kargs: kargs passed Jan 14 13:35:09.857923 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 13:35:09.851017 ignition[887]: Ignition finished successfully Jan 14 13:35:09.887913 ignition[893]: Ignition 2.24.0 Jan 14 13:35:09.887937 ignition[893]: Stage: disks Jan 14 13:35:09.888167 ignition[893]: no configs at "/usr/lib/ignition/base.d" Jan 14 13:35:09.888184 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 13:35:09.889406 ignition[893]: disks: disks passed Jan 14 13:35:09.891187 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 13:35:09.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:09.889473 ignition[893]: Ignition finished successfully Jan 14 13:35:09.893057 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 13:35:09.893803 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 13:35:09.895120 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 13:35:09.896553 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:35:09.898082 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:35:09.900933 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 13:35:09.944107 systemd-fsck[901]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 14 13:35:09.948277 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 13:35:09.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:09.951153 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 13:35:10.089793 kernel: EXT4-fs (vda9): mounted filesystem 00eaf6ed-0a89-4fef-afb6-3b81d372e1c1 r/w with ordered data mode. Quota mode: none. Jan 14 13:35:10.091588 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 13:35:10.093596 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 13:35:10.097162 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:35:10.100856 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 13:35:10.102865 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 14 13:35:10.111950 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 14 13:35:10.114321 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 13:35:10.114386 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:35:10.119088 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 14 13:35:10.123787 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (909) Jan 14 13:35:10.124030 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 13:35:10.125941 kernel: BTRFS info (device vda6): first mount of filesystem bc594bac-1fbf-41b0-97ef-4b225e86c0fe Jan 14 13:35:10.125969 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:35:10.148710 kernel: BTRFS info (device vda6): turning on async discard Jan 14 13:35:10.148827 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 13:35:10.154247 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 13:35:10.214780 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 13:35:10.361715 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 13:35:10.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:10.365358 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 13:35:10.368004 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 13:35:10.389785 kernel: BTRFS info (device vda6): last unmount of filesystem bc594bac-1fbf-41b0-97ef-4b225e86c0fe Jan 14 13:35:10.412443 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 13:35:10.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:10.420622 ignition[1013]: INFO : Ignition 2.24.0 Jan 14 13:35:10.420622 ignition[1013]: INFO : Stage: mount Jan 14 13:35:10.422368 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:35:10.422368 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 13:35:10.422368 ignition[1013]: INFO : mount: mount passed Jan 14 13:35:10.422368 ignition[1013]: INFO : Ignition finished successfully Jan 14 13:35:10.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:10.423715 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 13:35:10.480638 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 13:35:10.528182 systemd-networkd[725]: eth0: Gained IPv6LL Jan 14 13:35:11.249806 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 13:35:12.034680 systemd-networkd[725]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8c41:24:19ff:fee6:3106/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8c41:24:19ff:fee6:3106/64 assigned by NDisc. Jan 14 13:35:12.034694 systemd-networkd[725]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Jan 14 13:35:13.261790 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 13:35:17.276863 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 13:35:17.284697 coreos-metadata[911]: Jan 14 13:35:17.284 WARN failed to locate config-drive, using the metadata service API instead Jan 14 13:35:17.309654 coreos-metadata[911]: Jan 14 13:35:17.309 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 14 13:35:17.324195 coreos-metadata[911]: Jan 14 13:35:17.324 INFO Fetch successful Jan 14 13:35:17.325129 coreos-metadata[911]: Jan 14 13:35:17.324 INFO wrote hostname srv-414dr.gb1.brightbox.com to /sysroot/etc/hostname Jan 14 13:35:17.327626 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 14 13:35:17.341700 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 14 13:35:17.341742 kernel: audit: type=1130 audit(1768397717.329:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:17.341782 kernel: audit: type=1131 audit(1768397717.329:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:17.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:17.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:17.327837 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 14 13:35:17.332871 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 13:35:17.363961 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 13:35:17.389802 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1028) Jan 14 13:35:17.395504 kernel: BTRFS info (device vda6): first mount of filesystem bc594bac-1fbf-41b0-97ef-4b225e86c0fe Jan 14 13:35:17.395555 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 13:35:17.413683 kernel: BTRFS info (device vda6): turning on async discard Jan 14 13:35:17.413772 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 13:35:17.417416 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 14 13:35:17.452879 ignition[1046]: INFO : Ignition 2.24.0 Jan 14 13:35:17.452879 ignition[1046]: INFO : Stage: files Jan 14 13:35:17.454594 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:35:17.454594 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 13:35:17.454594 ignition[1046]: DEBUG : files: compiled without relabeling support, skipping Jan 14 13:35:17.457503 ignition[1046]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 13:35:17.457503 ignition[1046]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 13:35:17.460498 ignition[1046]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 13:35:17.461469 ignition[1046]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 13:35:17.461469 ignition[1046]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 13:35:17.461209 unknown[1046]: wrote ssh authorized keys file for user: core Jan 14 13:35:17.468559 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 14 13:35:17.468559 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 14 13:35:17.617338 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 13:35:17.877177 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 14 13:35:17.877177 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 13:35:17.879908 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 13:35:17.879908 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 13:35:17.879908 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 13:35:17.879908 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 13:35:17.879908 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 13:35:17.879908 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 13:35:17.879908 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 13:35:17.879908 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:35:17.879908 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 13:35:17.879908 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 14 13:35:17.890967 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 14 13:35:17.890967 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 14 13:35:17.890967 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 14 13:35:18.291353 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 13:35:19.435357 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 14 13:35:19.435357 ignition[1046]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 13:35:19.450887 ignition[1046]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 13:35:19.450887 ignition[1046]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 13:35:19.450887 ignition[1046]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 13:35:19.450887 ignition[1046]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 14 13:35:19.450887 ignition[1046]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 13:35:19.450887 ignition[1046]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:35:19.450887 ignition[1046]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 13:35:19.450887 ignition[1046]: INFO : files: files passed Jan 14 13:35:19.450887 ignition[1046]: INFO : Ignition finished successfully Jan 14 13:35:19.469689 kernel: audit: type=1130 audit(1768397719.451:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.450019 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 13:35:19.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.476764 kernel: audit: type=1130 audit(1768397719.470:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.454589 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 13:35:19.463831 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 13:35:19.469695 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 13:35:19.469917 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 14 13:35:19.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.486067 kernel: audit: type=1131 audit(1768397719.470:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.496187 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:35:19.496187 initrd-setup-root-after-ignition[1077]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:35:19.499743 initrd-setup-root-after-ignition[1081]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 13:35:19.500905 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:35:19.508068 kernel: audit: type=1130 audit(1768397719.501:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.502310 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 13:35:19.510051 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 13:35:19.564592 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 13:35:19.564788 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 13:35:19.576401 kernel: audit: type=1130 audit(1768397719.565:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.576449 kernel: audit: type=1131 audit(1768397719.565:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.566533 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 13:35:19.577087 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 13:35:19.578836 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 13:35:19.580235 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 13:35:19.624281 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:35:19.636242 kernel: audit: type=1130 audit(1768397719.630:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 14 13:35:19.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.634024 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 13:35:19.661288 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 13:35:19.661520 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:35:19.663361 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:35:19.665176 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 13:35:19.666768 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 13:35:19.673774 kernel: audit: type=1131 audit(1768397719.667:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.666948 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 13:35:19.673641 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 13:35:19.674471 systemd[1]: Stopped target basic.target - Basic System. Jan 14 13:35:19.675958 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 13:35:19.677427 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 13:35:19.678810 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 13:35:19.680340 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 13:35:19.681862 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 13:35:19.683325 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 13:35:19.684889 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 13:35:19.686367 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 13:35:19.687957 systemd[1]: Stopped target swap.target - Swaps. Jan 14 13:35:19.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.689285 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 13:35:19.689548 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 13:35:19.690992 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:35:19.691880 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:35:19.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.693268 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 13:35:19.693472 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 14 13:35:19.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.694732 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 13:35:19.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.694933 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 13:35:19.696806 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 13:35:19.697065 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 13:35:19.698819 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 13:35:19.698988 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 13:35:19.702838 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 13:35:19.706035 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 13:35:19.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.706704 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 13:35:19.707851 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:35:19.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.710004 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 13:35:19.710891 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:35:19.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.712011 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 13:35:19.712884 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 13:35:19.724801 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 13:35:19.724948 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 13:35:19.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.741768 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Jan 14 13:35:19.752844 ignition[1101]: INFO : Ignition 2.24.0 Jan 14 13:35:19.752844 ignition[1101]: INFO : Stage: umount Jan 14 13:35:19.754627 ignition[1101]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 13:35:19.754627 ignition[1101]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 13:35:19.757094 ignition[1101]: INFO : umount: umount passed Jan 14 13:35:19.757094 ignition[1101]: INFO : Ignition finished successfully Jan 14 13:35:19.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.757343 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 13:35:19.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.757578 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 13:35:19.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.758981 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 13:35:19.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.759079 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 13:35:19.760072 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 13:35:19.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.760135 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 13:35:19.761400 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 13:35:19.761515 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 13:35:19.762813 systemd[1]: Stopped target network.target - Network. Jan 14 13:35:19.764216 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 13:35:19.764297 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 13:35:19.765612 systemd[1]: Stopped target paths.target - Path Units. Jan 14 13:35:19.766965 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 13:35:19.773225 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:35:19.774920 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 13:35:19.775585 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 13:35:19.776956 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 13:35:19.777036 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 13:35:19.778273 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 13:35:19.778326 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 13:35:19.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:35:19.779558 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 13:35:19.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.779603 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 14 13:35:19.781106 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 13:35:19.781217 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 13:35:19.782361 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 13:35:19.782498 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 13:35:19.783890 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 13:35:19.785997 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 13:35:19.795843 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 13:35:19.796050 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 13:35:19.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.800049 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 13:35:19.800248 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 13:35:19.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.802000 audit: BPF prog-id=9 op=UNLOAD Jan 14 13:35:19.804211 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 13:35:19.804000 audit: BPF prog-id=6 op=UNLOAD Jan 14 13:35:19.805608 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 13:35:19.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.805692 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:35:19.815344 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 13:35:19.816160 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 13:35:19.816277 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 13:35:19.817082 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 13:35:19.817165 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:35:19.817878 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 13:35:19.817960 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jan 14 13:35:19.819892 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:35:19.833794 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 13:35:19.835156 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:35:19.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.838573 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 13:35:19.838708 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 13:35:19.840611 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 13:35:19.840667 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:35:19.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.841995 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 13:35:19.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.842071 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 13:35:19.844996 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 13:35:19.845075 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 13:35:19.846286 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 13:35:19.846361 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 13:35:19.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.856975 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 13:35:19.859078 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 13:35:19.859169 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 13:35:19.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.859961 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 13:35:19.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.860040 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:35:19.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:35:19.862352 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 13:35:19.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.862426 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:35:19.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.864568 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 13:35:19.864634 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:35:19.866152 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 13:35:19.866235 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:35:19.868330 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 13:35:19.868511 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 13:35:19.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.878984 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 13:35:19.879156 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 13:35:19.883420 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 13:35:19.885842 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 13:35:19.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.887527 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 13:35:19.888524 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 13:35:19.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:19.888610 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 13:35:19.891071 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 13:35:19.916182 systemd[1]: Switching root. Jan 14 13:35:19.944013 systemd-journald[329]: Journal stopped Jan 14 13:35:21.521705 systemd-journald[329]: Received SIGTERM from PID 1 (systemd). 
Jan 14 13:35:21.521844 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 13:35:21.521892 kernel: SELinux: policy capability open_perms=1 Jan 14 13:35:21.521936 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 13:35:21.521955 kernel: SELinux: policy capability always_check_network=0 Jan 14 13:35:21.521994 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 13:35:21.522013 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 13:35:21.522037 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 13:35:21.522063 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 13:35:21.522094 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 13:35:21.522120 systemd[1]: Successfully loaded SELinux policy in 79.599ms. Jan 14 13:35:21.522168 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.374ms. Jan 14 13:35:21.522202 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 13:35:21.522225 systemd[1]: Detected virtualization kvm. Jan 14 13:35:21.522246 systemd[1]: Detected architecture x86-64. Jan 14 13:35:21.522274 systemd[1]: Detected first boot. Jan 14 13:35:21.522307 systemd[1]: Hostname set to . Jan 14 13:35:21.522327 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 13:35:21.522365 zram_generator::config[1144]: No configuration found. Jan 14 13:35:21.522406 kernel: Guest personality initialized and is inactive Jan 14 13:35:21.522453 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 14 13:35:21.522479 kernel: Initialized host personality Jan 14 13:35:21.522500 kernel: NET: Registered PF_VSOCK protocol family Jan 14 13:35:21.522520 systemd[1]: Populated /etc with preset unit settings. Jan 14 13:35:21.522541 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 13:35:21.522573 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 13:35:21.522595 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 13:35:21.522621 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 13:35:21.522644 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 13:35:21.522664 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 13:35:21.522685 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 13:35:21.522706 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 13:35:21.522738 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 13:35:21.523808 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 13:35:21.523846 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 13:35:21.523866 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 13:35:21.523899 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 13:35:21.523920 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jan 14 13:35:21.523960 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 13:35:21.523991 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 13:35:21.524034 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 13:35:21.524053 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 14 13:35:21.524085 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 13:35:21.524104 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 13:35:21.524133 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 13:35:21.524153 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 13:35:21.524174 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 13:35:21.524193 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 13:35:21.524212 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 13:35:21.524231 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 13:35:21.524250 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 13:35:21.524281 systemd[1]: Reached target slices.target - Slice Units. Jan 14 13:35:21.524302 systemd[1]: Reached target swap.target - Swaps. Jan 14 13:35:21.524321 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 13:35:21.524348 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 13:35:21.524367 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 13:35:21.524400 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 13:35:21.524435 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 13:35:21.524469 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 13:35:21.524490 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 13:35:21.524510 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 13:35:21.524531 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 13:35:21.524550 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 13:35:21.524569 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 13:35:21.524590 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 13:35:21.524624 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 13:35:21.524646 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 13:35:21.524667 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:35:21.524699 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 13:35:21.524718 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 13:35:21.524757 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 13:35:21.524806 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). 
Jan 14 13:35:21.524837 systemd[1]: Reached target machines.target - Containers. Jan 14 13:35:21.524858 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 13:35:21.524877 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:35:21.524897 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 13:35:21.524928 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 13:35:21.524947 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:35:21.524986 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 13:35:21.525016 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:35:21.525057 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 13:35:21.525077 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 13:35:21.525097 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 13:35:21.525117 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 13:35:21.525138 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 13:35:21.525168 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 13:35:21.525191 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 13:35:21.525220 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 13:35:21.525247 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 13:35:21.525280 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 13:35:21.525312 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 13:35:21.525345 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 13:35:21.525375 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 13:35:21.525408 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 13:35:21.525450 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:35:21.525471 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 13:35:21.525504 kernel: fuse: init (API version 7.41) Jan 14 13:35:21.525526 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 13:35:21.525546 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 13:35:21.525577 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 13:35:21.525611 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 13:35:21.525633 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 13:35:21.525660 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 13:35:21.525681 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 14 13:35:21.525714 kernel: ACPI: bus type drm_connector registered Jan 14 13:35:21.525743 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 13:35:21.527931 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 13:35:21.527972 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:35:21.527995 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 13:35:21.528019 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 13:35:21.528039 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 13:35:21.528060 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:35:21.528081 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 13:35:21.528116 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 13:35:21.528148 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:35:21.528168 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:35:21.528200 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 13:35:21.528219 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 13:35:21.528261 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 13:35:21.528282 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 13:35:21.528313 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 13:35:21.528351 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 13:35:21.528385 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 13:35:21.528423 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 13:35:21.528466 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:35:21.528489 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 13:35:21.528551 systemd-journald[1232]: Collecting audit messages is enabled. Jan 14 13:35:21.528601 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 13:35:21.528625 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 13:35:21.528647 systemd-journald[1232]: Journal started Jan 14 13:35:21.528683 systemd-journald[1232]: Runtime Journal (/run/log/journal/bdf5aa844bce4af785f5e8f86417499a) is 4.7M, max 37.7M, 33M free. Jan 14 13:35:21.530801 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 13:35:21.145000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 14 13:35:21.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:35:21.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.314000 audit: BPF prog-id=14 op=UNLOAD Jan 14 13:35:21.314000 audit: BPF prog-id=13 op=UNLOAD Jan 14 13:35:21.315000 audit: BPF prog-id=15 op=LOAD Jan 14 13:35:21.317000 audit: BPF prog-id=16 op=LOAD Jan 14 13:35:21.317000 audit: BPF prog-id=17 op=LOAD Jan 14 13:35:21.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:35:21.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.513000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 13:35:21.513000 audit[1232]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffeef058600 a2=4000 a3=0 items=0 ppid=1 pid=1232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:21.513000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 13:35:21.035175 systemd[1]: Queued start job for default target multi-user.target. Jan 14 13:35:21.046032 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 14 13:35:21.046917 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 13:35:21.537899 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 13:35:21.541792 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 13:35:21.548910 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 13:35:21.557846 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 13:35:21.565782 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 13:35:21.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.567323 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 13:35:21.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.568704 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 13:35:21.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.589862 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 14 13:35:21.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.592459 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 13:35:21.593703 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 13:35:21.605118 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 13:35:21.621435 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 13:35:21.621776 kernel: loop1: detected capacity change from 0 to 111560 Jan 14 13:35:21.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.645899 systemd-journald[1232]: Time spent on flushing to /var/log/journal/bdf5aa844bce4af785f5e8f86417499a is 66.016ms for 1299 entries. Jan 14 13:35:21.645899 systemd-journald[1232]: System Journal (/var/log/journal/bdf5aa844bce4af785f5e8f86417499a) is 8M, max 588.1M, 580.1M free. Jan 14 13:35:21.728919 systemd-journald[1232]: Received client request to flush runtime journal. Jan 14 13:35:21.728974 kernel: loop2: detected capacity change from 0 to 50784 Jan 14 13:35:21.729001 kernel: loop3: detected capacity change from 0 to 8 Jan 14 13:35:21.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.659849 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 14 13:35:21.672606 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Jan 14 13:35:21.672625 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Jan 14 13:35:21.687433 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 13:35:21.703795 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 13:35:21.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.733818 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 13:35:21.748182 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 13:35:21.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.753874 kernel: loop4: detected capacity change from 0 to 224512 Jan 14 13:35:21.772697 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jan 14 13:35:21.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.774000 audit: BPF prog-id=18 op=LOAD Jan 14 13:35:21.774000 audit: BPF prog-id=19 op=LOAD Jan 14 13:35:21.774000 audit: BPF prog-id=20 op=LOAD Jan 14 13:35:21.777044 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 13:35:21.779000 audit: BPF prog-id=21 op=LOAD Jan 14 13:35:21.781772 kernel: loop5: detected capacity change from 0 to 111560 Jan 14 13:35:21.781950 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 13:35:21.785345 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 13:35:21.795000 audit: BPF prog-id=22 op=LOAD Jan 14 13:35:21.795000 audit: BPF prog-id=23 op=LOAD Jan 14 13:35:21.795000 audit: BPF prog-id=24 op=LOAD Jan 14 13:35:21.800431 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 13:35:21.801000 audit: BPF prog-id=25 op=LOAD Jan 14 13:35:21.801000 audit: BPF prog-id=26 op=LOAD Jan 14 13:35:21.801000 audit: BPF prog-id=27 op=LOAD Jan 14 13:35:21.807631 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 13:35:21.814776 kernel: loop6: detected capacity change from 0 to 50784 Jan 14 13:35:21.834777 kernel: loop7: detected capacity change from 0 to 8 Jan 14 13:35:21.839776 kernel: loop1: detected capacity change from 0 to 224512 Jan 14 13:35:21.852248 (sd-merge)[1306]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-openstack.raw'. Jan 14 13:35:21.854801 systemd-tmpfiles[1308]: ACLs are not supported, ignoring. Jan 14 13:35:21.855815 systemd-tmpfiles[1308]: ACLs are not supported, ignoring. Jan 14 13:35:21.863315 (sd-merge)[1306]: Merged extensions into '/usr'. Jan 14 13:35:21.867838 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 13:35:21.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:21.875985 systemd[1]: Reload requested from client PID 1262 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 13:35:21.876024 systemd[1]: Reloading... Jan 14 13:35:21.922659 systemd-nsresourced[1309]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 13:35:21.969826 zram_generator::config[1348]: No configuration found. Jan 14 13:35:22.073658 systemd-resolved[1307]: Positive Trust Anchors: Jan 14 13:35:22.073676 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 13:35:22.073683 systemd-resolved[1307]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 13:35:22.073725 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 13:35:22.099160 systemd-resolved[1307]: Using system hostname 'srv-414dr.gb1.brightbox.com'. Jan 14 13:35:22.138150 systemd-oomd[1305]: No swap; memory pressure usage will be degraded Jan 14 13:35:22.379334 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 13:35:22.380457 systemd[1]: Reloading finished in 503 ms. Jan 14 13:35:22.404312 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 13:35:22.412440 kernel: kauditd_printk_skb: 101 callbacks suppressed Jan 14 13:35:22.412490 kernel: audit: type=1130 audit(1768397722.404:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:22.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:22.410608 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 13:35:22.411793 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 13:35:22.413040 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 13:35:22.414371 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 13:35:22.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:22.422766 kernel: audit: type=1130 audit(1768397722.410:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:22.424052 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 13:35:22.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:22.428820 kernel: audit: type=1130 audit(1768397722.412:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:22.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:35:22.436737 kernel: audit: type=1130 audit(1768397722.413:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:22.434833 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 13:35:22.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:22.442316 kernel: audit: type=1130 audit(1768397722.414:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:22.441086 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 13:35:22.454970 systemd[1]: Starting ensure-sysext.service... Jan 14 13:35:22.459034 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 13:35:22.465000 audit: BPF prog-id=28 op=LOAD Jan 14 13:35:22.469837 kernel: audit: type=1334 audit(1768397722.465:153): prog-id=28 op=LOAD Jan 14 13:35:22.467000 audit: BPF prog-id=18 op=UNLOAD Jan 14 13:35:22.467000 audit: BPF prog-id=29 op=LOAD Jan 14 13:35:22.472956 kernel: audit: type=1334 audit(1768397722.467:154): prog-id=18 op=UNLOAD Jan 14 13:35:22.472996 kernel: audit: type=1334 audit(1768397722.467:155): prog-id=29 op=LOAD Jan 14 13:35:22.474268 kernel: audit: type=1334 audit(1768397722.467:156): prog-id=30 op=LOAD Jan 14 13:35:22.467000 audit: BPF prog-id=30 op=LOAD Jan 14 13:35:22.467000 audit: BPF prog-id=19 op=UNLOAD Jan 14 13:35:22.480769 kernel: audit: type=1334 audit(1768397722.467:157): prog-id=19 op=UNLOAD Jan 14 13:35:22.467000 audit: BPF prog-id=20 op=UNLOAD Jan 14 13:35:22.475000 audit: BPF prog-id=31 op=LOAD Jan 14 13:35:22.475000 audit: BPF prog-id=25 op=UNLOAD Jan 14 13:35:22.475000 audit: BPF prog-id=32 op=LOAD Jan 14 13:35:22.475000 audit: BPF prog-id=33 op=LOAD Jan 14 13:35:22.475000 audit: BPF prog-id=26 op=UNLOAD Jan 14 13:35:22.475000 audit: BPF prog-id=27 op=UNLOAD Jan 14 13:35:22.477000 audit: BPF prog-id=34 op=LOAD Jan 14 13:35:22.477000 audit: BPF prog-id=21 op=UNLOAD Jan 14 13:35:22.482000 audit: BPF prog-id=35 op=LOAD Jan 14 13:35:22.482000 audit: BPF prog-id=15 op=UNLOAD Jan 14 13:35:22.483000 audit: BPF prog-id=36 op=LOAD Jan 14 13:35:22.483000 audit: BPF prog-id=37 op=LOAD Jan 14 13:35:22.483000 audit: BPF prog-id=16 op=UNLOAD Jan 14 13:35:22.483000 audit: BPF prog-id=17 op=UNLOAD Jan 14 13:35:22.484000 audit: BPF prog-id=38 op=LOAD Jan 14 13:35:22.484000 audit: BPF prog-id=22 op=UNLOAD Jan 14 13:35:22.484000 audit: BPF prog-id=39 op=LOAD Jan 14 13:35:22.484000 audit: BPF prog-id=40 op=LOAD Jan 14 13:35:22.484000 audit: BPF prog-id=23 op=UNLOAD Jan 14 13:35:22.484000 audit: BPF prog-id=24 op=UNLOAD Jan 14 13:35:22.490521 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 13:35:22.491473 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 13:35:22.504015 systemd[1]: Reload requested from client PID 1411 ('systemctl') (unit ensure-sysext.service)... Jan 14 13:35:22.504051 systemd[1]: Reloading... 
Jan 14 13:35:22.541507 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 13:35:22.541560 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 13:35:22.541994 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 13:35:22.546694 systemd-tmpfiles[1412]: ACLs are not supported, ignoring. Jan 14 13:35:22.546821 systemd-tmpfiles[1412]: ACLs are not supported, ignoring. Jan 14 13:35:22.582395 systemd-tmpfiles[1412]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 13:35:22.582415 systemd-tmpfiles[1412]: Skipping /boot Jan 14 13:35:22.601419 systemd-tmpfiles[1412]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 13:35:22.601438 systemd-tmpfiles[1412]: Skipping /boot Jan 14 13:35:22.639798 zram_generator::config[1452]: No configuration found. Jan 14 13:35:22.906578 systemd[1]: Reloading finished in 401 ms. Jan 14 13:35:22.921316 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 13:35:22.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:22.923000 audit: BPF prog-id=41 op=LOAD Jan 14 13:35:22.923000 audit: BPF prog-id=35 op=UNLOAD Jan 14 13:35:22.924000 audit: BPF prog-id=42 op=LOAD Jan 14 13:35:22.924000 audit: BPF prog-id=43 op=LOAD Jan 14 13:35:22.924000 audit: BPF prog-id=36 op=UNLOAD Jan 14 13:35:22.924000 audit: BPF prog-id=37 op=UNLOAD Jan 14 13:35:22.925000 audit: BPF prog-id=44 op=LOAD Jan 14 13:35:22.925000 audit: BPF prog-id=31 op=UNLOAD Jan 14 13:35:22.925000 audit: BPF prog-id=45 op=LOAD Jan 14 13:35:22.925000 audit: BPF prog-id=46 op=LOAD Jan 14 13:35:22.925000 audit: BPF prog-id=32 op=UNLOAD Jan 14 13:35:22.925000 audit: BPF prog-id=33 op=UNLOAD Jan 14 13:35:22.926000 audit: BPF prog-id=47 op=LOAD Jan 14 13:35:22.926000 audit: BPF prog-id=38 op=UNLOAD Jan 14 13:35:22.926000 audit: BPF prog-id=48 op=LOAD Jan 14 13:35:22.927000 audit: BPF prog-id=49 op=LOAD Jan 14 13:35:22.927000 audit: BPF prog-id=39 op=UNLOAD Jan 14 13:35:22.927000 audit: BPF prog-id=40 op=UNLOAD Jan 14 13:35:22.928000 audit: BPF prog-id=50 op=LOAD Jan 14 13:35:22.928000 audit: BPF prog-id=28 op=UNLOAD Jan 14 13:35:22.928000 audit: BPF prog-id=51 op=LOAD Jan 14 13:35:22.928000 audit: BPF prog-id=52 op=LOAD Jan 14 13:35:22.928000 audit: BPF prog-id=29 op=UNLOAD Jan 14 13:35:22.928000 audit: BPF prog-id=30 op=UNLOAD Jan 14 13:35:22.930000 audit: BPF prog-id=53 op=LOAD Jan 14 13:35:22.936000 audit: BPF prog-id=34 op=UNLOAD Jan 14 13:35:22.940970 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 13:35:22.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:22.953190 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 13:35:22.956028 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 13:35:22.964531 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 14 13:35:22.969129 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 13:35:22.970000 audit: BPF prog-id=8 op=UNLOAD Jan 14 13:35:22.970000 audit: BPF prog-id=7 op=UNLOAD Jan 14 13:35:22.971000 audit: BPF prog-id=54 op=LOAD Jan 14 13:35:22.971000 audit: BPF prog-id=55 op=LOAD Jan 14 13:35:22.975194 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 13:35:22.979268 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 13:35:22.985719 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:35:22.986003 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:35:22.991910 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 13:35:23.012182 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 13:35:23.018355 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 13:35:23.019277 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:35:23.019597 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 13:35:23.019772 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 13:35:23.019952 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:35:23.026516 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:35:23.026846 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 13:35:23.027106 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:35:23.027354 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 13:35:23.027511 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 13:35:23.027634 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:35:23.029000 audit[1510]: SYSTEM_BOOT pid=1510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 13:35:23.037450 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:35:23.037853 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 14 13:35:23.042271 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 13:35:23.043693 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 13:35:23.044058 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 13:35:23.044276 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 13:35:23.044546 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 13:35:23.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:23.054909 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 13:35:23.064024 systemd[1]: Finished ensure-sysext.service. Jan 14 13:35:23.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:23.068000 audit: BPF prog-id=56 op=LOAD Jan 14 13:35:23.077384 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 14 13:35:23.111874 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 13:35:23.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:23.120684 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 13:35:23.125179 systemd-udevd[1508]: Using default interface naming scheme 'v257'. Jan 14 13:35:23.127963 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 13:35:23.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:23.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:23.132184 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 13:35:23.132550 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 13:35:23.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:23.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:35:23.134616 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 13:35:23.145177 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 13:35:23.146081 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 13:35:23.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:23.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:23.148694 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 13:35:23.149003 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 13:35:23.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:23.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:23.151481 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 13:35:23.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:23.172269 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 13:35:23.175273 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 13:35:23.189000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 13:35:23.189000 audit[1545]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffccf4e11f0 a2=420 a3=0 items=0 ppid=1504 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:23.189000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 13:35:23.191790 augenrules[1545]: No rules Jan 14 13:35:23.192851 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 13:35:23.193267 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 13:35:23.194613 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 13:35:23.202814 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 13:35:23.295941 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 14 13:35:23.298086 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 14 13:35:23.379160 systemd-networkd[1555]: lo: Link UP Jan 14 13:35:23.380222 systemd-networkd[1555]: lo: Gained carrier Jan 14 13:35:23.382822 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 13:35:23.383990 systemd[1]: Reached target network.target - Network. Jan 14 13:35:23.388249 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 13:35:23.393033 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 13:35:23.468668 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 13:35:23.486173 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 14 13:35:23.650780 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 13:35:23.653489 systemd-networkd[1555]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 13:35:23.653503 systemd-networkd[1555]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 13:35:23.657562 systemd-networkd[1555]: eth0: Link UP Jan 14 13:35:23.658513 systemd-networkd[1555]: eth0: Gained carrier Jan 14 13:35:23.658543 systemd-networkd[1555]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 13:35:23.663782 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 14 13:35:23.670857 kernel: ACPI: button: Power Button [PWRF] Jan 14 13:35:23.672828 systemd-networkd[1555]: eth0: DHCPv4 address 10.230.49.6/30, gateway 10.230.49.5 acquired from 10.230.49.5 Jan 14 13:35:23.675684 systemd-timesyncd[1524]: Network configuration changed, trying to establish connection. Jan 14 13:35:23.716613 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 13:35:23.723051 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 13:35:23.762853 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 13:35:23.784786 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 14 13:35:23.789012 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 14 13:35:23.791368 ldconfig[1506]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 13:35:23.797541 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 13:35:23.803060 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 13:35:23.837909 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 13:35:23.839512 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 13:35:23.841960 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 13:35:23.842904 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 13:35:23.844856 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 13:35:23.846066 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 13:35:23.848035 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jan 14 13:35:23.849874 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 13:35:23.851874 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 13:35:23.852612 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 13:35:23.853486 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 13:35:23.853615 systemd[1]: Reached target paths.target - Path Units. Jan 14 13:35:23.854834 systemd[1]: Reached target timers.target - Timer Units. Jan 14 13:35:23.858468 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 13:35:23.874313 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 13:35:23.881102 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 13:35:23.883206 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 13:35:23.884782 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 13:35:23.894818 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 13:35:23.896229 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 13:35:23.899595 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 13:35:23.902418 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 13:35:23.904396 systemd[1]: Reached target basic.target - Basic System. Jan 14 13:35:23.905188 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 13:35:23.905343 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 13:35:23.908392 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 13:35:23.914071 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 13:35:23.920030 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 13:35:23.924100 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 13:35:23.934919 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 13:35:23.940158 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 13:35:23.948897 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 13:35:23.955075 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 14 13:35:23.961085 jq[1605]: false Jan 14 13:35:23.962062 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 13:35:23.968955 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 13:35:23.982227 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 13:35:23.980942 oslogin_cache_refresh[1607]: Refreshing passwd entry cache Jan 14 13:35:23.982733 google_oslogin_nss_cache[1607]: oslogin_cache_refresh[1607]: Refreshing passwd entry cache Jan 14 13:35:23.976252 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 13:35:23.984084 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 14 13:35:23.994033 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 13:35:23.995845 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 13:35:23.996582 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 13:35:24.007782 google_oslogin_nss_cache[1607]: oslogin_cache_refresh[1607]: Failure getting users, quitting Jan 14 13:35:24.007782 google_oslogin_nss_cache[1607]: oslogin_cache_refresh[1607]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 13:35:24.007782 google_oslogin_nss_cache[1607]: oslogin_cache_refresh[1607]: Refreshing group entry cache Jan 14 13:35:24.007782 google_oslogin_nss_cache[1607]: oslogin_cache_refresh[1607]: Failure getting groups, quitting Jan 14 13:35:24.007782 google_oslogin_nss_cache[1607]: oslogin_cache_refresh[1607]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 13:35:24.005270 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 13:35:24.003999 oslogin_cache_refresh[1607]: Failure getting users, quitting Jan 14 13:35:24.004040 oslogin_cache_refresh[1607]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 13:35:24.004131 oslogin_cache_refresh[1607]: Refreshing group entry cache Jan 14 13:35:24.006858 oslogin_cache_refresh[1607]: Failure getting groups, quitting Jan 14 13:35:24.006893 oslogin_cache_refresh[1607]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 13:35:24.017910 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 13:35:24.029810 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 13:35:24.032083 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 13:35:24.032470 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 13:35:24.033112 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 14 13:35:24.033810 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 14 13:35:24.057860 update_engine[1616]: I20260114 13:35:24.055820 1616 main.cc:92] Flatcar Update Engine starting Jan 14 13:35:24.076789 jq[1617]: true Jan 14 13:35:24.082289 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 13:35:24.082646 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 13:35:24.090015 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 13:35:24.137667 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 13:35:24.153490 extend-filesystems[1606]: Found /dev/vda6 Jan 14 13:35:24.181997 extend-filesystems[1606]: Found /dev/vda9 Jan 14 13:35:24.190177 tar[1620]: linux-amd64/LICENSE Jan 14 13:35:24.190177 tar[1620]: linux-amd64/helm Jan 14 13:35:24.193735 jq[1639]: true Jan 14 13:35:24.194872 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 13:35:24.195315 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 13:35:24.201470 extend-filesystems[1606]: Checking size of /dev/vda9 Jan 14 13:35:24.225486 dbus-daemon[1603]: [system] SELinux support is enabled Jan 14 13:35:24.225884 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 14 13:35:24.231298 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 13:35:24.231368 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 13:35:24.233529 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 13:35:24.233557 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 13:35:24.272170 extend-filesystems[1606]: Resized partition /dev/vda9 Jan 14 13:35:24.277718 extend-filesystems[1664]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 13:35:24.284409 dbus-daemon[1603]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.4' (uid=244 pid=1555 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 14 13:35:24.287955 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 14138363 blocks Jan 14 13:35:24.292319 update_engine[1616]: I20260114 13:35:24.292198 1616 update_check_scheduler.cc:74] Next update check in 7m22s Jan 14 13:35:24.294663 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 14 13:35:24.295503 systemd[1]: Started update-engine.service - Update Engine. Jan 14 13:35:24.345462 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 13:35:24.395041 systemd-logind[1615]: Watching system buttons on /dev/input/event3 (Power Button) Jan 14 13:35:24.395094 systemd-logind[1615]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 14 13:35:24.432793 bash[1680]: Updated "/home/core/.ssh/authorized_keys" Jan 14 13:35:24.443036 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 13:35:24.469816 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Jan 14 13:35:24.494283 extend-filesystems[1664]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 14 13:35:24.494283 extend-filesystems[1664]: old_desc_blocks = 1, new_desc_blocks = 7 Jan 14 13:35:24.494283 extend-filesystems[1664]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Jan 14 13:35:24.722873 extend-filesystems[1606]: Resized filesystem in /dev/vda9 Jan 14 13:35:24.711084 systemd-logind[1615]: New seat seat0. Jan 14 13:35:24.513509 dbus-daemon[1603]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 14 13:35:24.714769 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 14 13:35:24.514201 dbus-daemon[1603]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1665 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 14 13:35:24.721548 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 13:35:24.723840 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
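The extend-filesystems step above grew the ext4 root on /dev/vda9 from 1,617,920 to 14,138,363 4 KiB blocks (roughly 6 GiB to 54 GiB) with an online resize while / stayed mounted. A rough manual equivalent, assuming the same ext4 root and that the underlying partition has already been enlarged:

  lsblk /dev/vda        # check that vda9 now spans the extra space
  resize2fs /dev/vda9   # online-grow the mounted ext4 filesystem to fill the partition
  df -h /               # verify the new size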
Jan 14 13:35:24.824037 containerd[1649]: time="2026-01-14T13:35:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 13:35:24.835717 containerd[1649]: time="2026-01-14T13:35:24.830126076Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 13:35:24.881147 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 13:35:24.885922 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 13:35:24.898926 containerd[1649]: time="2026-01-14T13:35:24.898866998Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="26.819µs" Jan 14 13:35:24.898926 containerd[1649]: time="2026-01-14T13:35:24.898921865Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 13:35:24.899061 containerd[1649]: time="2026-01-14T13:35:24.898988873Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 13:35:24.899061 containerd[1649]: time="2026-01-14T13:35:24.899011195Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 13:35:24.899376 containerd[1649]: time="2026-01-14T13:35:24.899322724Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 13:35:24.899376 containerd[1649]: time="2026-01-14T13:35:24.899376837Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 13:35:24.899536 containerd[1649]: time="2026-01-14T13:35:24.899497362Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 13:35:24.899536 containerd[1649]: time="2026-01-14T13:35:24.899528421Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 13:35:24.906005 containerd[1649]: time="2026-01-14T13:35:24.905943771Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 13:35:24.906072 containerd[1649]: time="2026-01-14T13:35:24.906009524Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 13:35:24.906072 containerd[1649]: time="2026-01-14T13:35:24.906062324Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 13:35:24.906165 containerd[1649]: time="2026-01-14T13:35:24.906080087Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 13:35:24.909853 containerd[1649]: time="2026-01-14T13:35:24.909789566Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 13:35:24.909853 containerd[1649]: time="2026-01-14T13:35:24.909822168Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 
Jan 14 13:35:24.911912 locksmithd[1666]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 13:35:24.912927 containerd[1649]: time="2026-01-14T13:35:24.912893929Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 13:35:24.914778 containerd[1649]: time="2026-01-14T13:35:24.914712498Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 13:35:24.926999 containerd[1649]: time="2026-01-14T13:35:24.926940453Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 13:35:24.927375 containerd[1649]: time="2026-01-14T13:35:24.927314143Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 13:35:24.927455 containerd[1649]: time="2026-01-14T13:35:24.927407396Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 13:35:24.931205 containerd[1649]: time="2026-01-14T13:35:24.931130066Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 13:35:24.931415 containerd[1649]: time="2026-01-14T13:35:24.931387937Z" level=info msg="metadata content store policy set" policy=shared Jan 14 13:35:24.949849 containerd[1649]: time="2026-01-14T13:35:24.949803780Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 13:35:24.949970 containerd[1649]: time="2026-01-14T13:35:24.949918179Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 13:35:24.950094 containerd[1649]: time="2026-01-14T13:35:24.950038909Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 13:35:24.950094 containerd[1649]: time="2026-01-14T13:35:24.950069199Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 13:35:24.950094 containerd[1649]: time="2026-01-14T13:35:24.950091084Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 13:35:24.950196 containerd[1649]: time="2026-01-14T13:35:24.950110241Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 13:35:24.950196 containerd[1649]: time="2026-01-14T13:35:24.950127394Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 13:35:24.950196 containerd[1649]: time="2026-01-14T13:35:24.950142862Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 13:35:24.950196 containerd[1649]: time="2026-01-14T13:35:24.950161464Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 13:35:24.950196 containerd[1649]: time="2026-01-14T13:35:24.950179017Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 13:35:24.950398 containerd[1649]: time="2026-01-14T13:35:24.950196262Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 
14 13:35:24.950398 containerd[1649]: time="2026-01-14T13:35:24.950223226Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 13:35:24.950398 containerd[1649]: time="2026-01-14T13:35:24.950241098Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 13:35:24.950398 containerd[1649]: time="2026-01-14T13:35:24.950258670Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 13:35:24.950527 containerd[1649]: time="2026-01-14T13:35:24.950489982Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 13:35:24.950565 containerd[1649]: time="2026-01-14T13:35:24.950533500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 13:35:24.950565 containerd[1649]: time="2026-01-14T13:35:24.950559164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 13:35:24.950645 containerd[1649]: time="2026-01-14T13:35:24.950589350Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 13:35:24.950645 containerd[1649]: time="2026-01-14T13:35:24.950608828Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 13:35:24.950645 containerd[1649]: time="2026-01-14T13:35:24.950624150Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 13:35:24.950645 containerd[1649]: time="2026-01-14T13:35:24.950641268Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 13:35:24.950784 containerd[1649]: time="2026-01-14T13:35:24.950657635Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 13:35:24.950784 containerd[1649]: time="2026-01-14T13:35:24.950680841Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 13:35:24.950784 containerd[1649]: time="2026-01-14T13:35:24.950698458Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 13:35:24.950784 containerd[1649]: time="2026-01-14T13:35:24.950713635Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 13:35:24.956530 containerd[1649]: time="2026-01-14T13:35:24.956493309Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 13:35:24.960195 containerd[1649]: time="2026-01-14T13:35:24.956629237Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 13:35:24.960253 containerd[1649]: time="2026-01-14T13:35:24.960203592Z" level=info msg="Start snapshots syncer" Jan 14 13:35:24.963121 containerd[1649]: time="2026-01-14T13:35:24.960281404Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 13:35:24.965241 containerd[1649]: time="2026-01-14T13:35:24.965186783Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 13:35:24.966084 containerd[1649]: time="2026-01-14T13:35:24.965308605Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 13:35:24.967855 containerd[1649]: time="2026-01-14T13:35:24.967824302Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 13:35:24.975769 containerd[1649]: time="2026-01-14T13:35:24.972253049Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 13:35:24.975769 containerd[1649]: time="2026-01-14T13:35:24.972297399Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 13:35:24.975769 containerd[1649]: time="2026-01-14T13:35:24.972353314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 13:35:24.975769 containerd[1649]: time="2026-01-14T13:35:24.972372516Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 13:35:24.975769 containerd[1649]: time="2026-01-14T13:35:24.973708843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 13:35:24.975769 containerd[1649]: time="2026-01-14T13:35:24.973778198Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 13:35:24.975769 containerd[1649]: time="2026-01-14T13:35:24.973803672Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 13:35:24.975769 containerd[1649]: time="2026-01-14T13:35:24.973834651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 
13:35:24.975769 containerd[1649]: time="2026-01-14T13:35:24.973871890Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 13:35:24.975769 containerd[1649]: time="2026-01-14T13:35:24.974988710Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 13:35:24.975769 containerd[1649]: time="2026-01-14T13:35:24.975039919Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 13:35:24.975769 containerd[1649]: time="2026-01-14T13:35:24.975058241Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 13:35:24.975769 containerd[1649]: time="2026-01-14T13:35:24.975088888Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 13:35:24.975769 containerd[1649]: time="2026-01-14T13:35:24.975102983Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 13:35:24.976283 containerd[1649]: time="2026-01-14T13:35:24.975135768Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 13:35:24.976283 containerd[1649]: time="2026-01-14T13:35:24.975167558Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 13:35:24.976283 containerd[1649]: time="2026-01-14T13:35:24.975228502Z" level=info msg="runtime interface created" Jan 14 13:35:24.976283 containerd[1649]: time="2026-01-14T13:35:24.975241183Z" level=info msg="created NRI interface" Jan 14 13:35:24.976283 containerd[1649]: time="2026-01-14T13:35:24.975256026Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 13:35:24.976283 containerd[1649]: time="2026-01-14T13:35:24.975317153Z" level=info msg="Connect containerd service" Jan 14 13:35:24.976283 containerd[1649]: time="2026-01-14T13:35:24.975385822Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 13:35:24.976239 systemd[1]: Starting polkit.service - Authorization Manager... Jan 14 13:35:24.983140 systemd[1]: Starting sshkeys.service... Jan 14 13:35:24.984856 containerd[1649]: time="2026-01-14T13:35:24.984546834Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 13:35:25.048116 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 14 13:35:25.053159 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 14 13:35:25.106977 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 13:35:25.178371 containerd[1649]: time="2026-01-14T13:35:25.178287002Z" level=info msg="Start subscribing containerd event" Jan 14 13:35:25.178615 containerd[1649]: time="2026-01-14T13:35:25.178560813Z" level=info msg="Start recovering state" Jan 14 13:35:25.178718 containerd[1649]: time="2026-01-14T13:35:25.178687885Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jan 14 13:35:25.179073 containerd[1649]: time="2026-01-14T13:35:25.179050217Z" level=info msg="Start event monitor" Jan 14 13:35:25.179406 containerd[1649]: time="2026-01-14T13:35:25.179379641Z" level=info msg="Start cni network conf syncer for default" Jan 14 13:35:25.179895 containerd[1649]: time="2026-01-14T13:35:25.179670221Z" level=info msg="Start streaming server" Jan 14 13:35:25.179895 containerd[1649]: time="2026-01-14T13:35:25.179699961Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 13:35:25.181874 containerd[1649]: time="2026-01-14T13:35:25.180104422Z" level=info msg="runtime interface starting up..." Jan 14 13:35:25.181874 containerd[1649]: time="2026-01-14T13:35:25.180131670Z" level=info msg="starting plugins..." Jan 14 13:35:25.181874 containerd[1649]: time="2026-01-14T13:35:25.180497980Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 13:35:25.182113 containerd[1649]: time="2026-01-14T13:35:25.182031340Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 13:35:25.182481 containerd[1649]: time="2026-01-14T13:35:25.182449987Z" level=info msg="containerd successfully booted in 0.362326s" Jan 14 13:35:25.182652 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 13:35:25.241253 polkitd[1698]: Started polkitd version 126 Jan 14 13:35:25.249245 polkitd[1698]: Loading rules from directory /etc/polkit-1/rules.d Jan 14 13:35:25.250225 polkitd[1698]: Loading rules from directory /run/polkit-1/rules.d Jan 14 13:35:25.250355 polkitd[1698]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 14 13:35:25.251035 polkitd[1698]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 14 13:35:25.251080 polkitd[1698]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 14 13:35:25.251140 polkitd[1698]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 14 13:35:25.252295 polkitd[1698]: Finished loading, compiling and executing 2 rules Jan 14 13:35:25.253153 systemd[1]: Started polkit.service - Authorization Manager. Jan 14 13:35:25.256617 dbus-daemon[1603]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 14 13:35:25.258097 polkitd[1698]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 14 13:35:25.281656 systemd-hostnamed[1665]: Hostname set to (static) Jan 14 13:35:25.312096 systemd-networkd[1555]: eth0: Gained IPv6LL Jan 14 13:35:25.313532 systemd-timesyncd[1524]: Network configuration changed, trying to establish connection. Jan 14 13:35:25.317742 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 13:35:25.320592 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 13:35:25.328140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:35:25.333783 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 13:35:25.410540 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 13:35:25.426261 tar[1620]: linux-amd64/README.md Jan 14 13:35:25.444194 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
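containerd comes up with the CRI plugin enabled (note SystemdCgroup=true in the dumped config) but reports that no CNI network config was found in /etc/cni/net.d, so pod networking stays uninitialized until something installs one; on a real cluster the CNI add-on (flannel, Calico, etc.) writes that file itself. As a sketch only, a minimal bridge conflist that would satisfy the check, assuming the standard CNI bridge and host-local plugins exist under the configured /opt/cni/bin; the name and subnet here are made up:

  cat <<'EOF' > /etc/cni/net.d/10-example.conflist
  {
    "cniVersion": "1.0.0",
    "name": "example-net",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
          "type": "host-local",
          "ranges": [[{ "subnet": "10.88.0.0/16" }]]
        }
      }
    ]
  }
  EOF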
Jan 14 13:35:25.662025 sshd_keygen[1653]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 13:35:25.696648 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 13:35:25.703084 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 13:35:25.716947 systemd[1]: Started sshd@0-10.230.49.6:22-68.220.241.50:47292.service - OpenSSH per-connection server daemon (68.220.241.50:47292). Jan 14 13:35:25.726833 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 13:35:25.727206 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 13:35:25.732954 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 13:35:25.767003 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 13:35:25.774024 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 13:35:25.777833 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 13:35:25.780203 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 13:35:25.998782 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 13:35:26.151059 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 13:35:26.269597 sshd[1746]: Accepted publickey for core from 68.220.241.50 port 47292 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:35:26.273210 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:35:26.289784 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 13:35:26.292476 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 13:35:26.312401 systemd-logind[1615]: New session 1 of user core. Jan 14 13:35:26.328495 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 13:35:26.336243 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 13:35:26.361597 (systemd)[1761]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:35:26.368218 systemd-logind[1615]: New session 2 of user core. Jan 14 13:35:26.458042 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:35:26.472242 (kubelet)[1773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:35:26.485478 systemd-timesyncd[1524]: Network configuration changed, trying to establish connection. Jan 14 13:35:26.487525 systemd-networkd[1555]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8c41:24:19ff:fee6:3106/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8c41:24:19ff:fee6:3106/64 assigned by NDisc. Jan 14 13:35:26.487536 systemd-networkd[1555]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 14 13:35:26.555083 systemd[1761]: Queued start job for default target default.target. Jan 14 13:35:26.560514 systemd[1761]: Created slice app.slice - User Application Slice. Jan 14 13:35:26.560701 systemd[1761]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 13:35:26.560885 systemd[1761]: Reached target paths.target - Paths. Jan 14 13:35:26.561088 systemd[1761]: Reached target timers.target - Timers. Jan 14 13:35:26.563499 systemd[1761]: Starting dbus.socket - D-Bus User Message Bus Socket... 
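The sshd-keygen step above created fresh RSA, ECDSA and ED25519 host keys on first boot, before the socket-activated sshd accepted its first connection. A one-line manual equivalent, as a sketch:

  ssh-keygen -A                    # generate any missing host keys of the default types
  ls /etc/ssh/ssh_host_*_key.pub   # the keys whose fingerprints clients will record on first connect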
Jan 14 13:35:26.564892 systemd[1761]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 13:35:26.587983 systemd[1761]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 13:35:26.595365 systemd[1761]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 13:35:26.595538 systemd[1761]: Reached target sockets.target - Sockets. Jan 14 13:35:26.595611 systemd[1761]: Reached target basic.target - Basic System. Jan 14 13:35:26.595691 systemd[1761]: Reached target default.target - Main User Target. Jan 14 13:35:26.596120 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 13:35:26.596509 systemd[1761]: Startup finished in 219ms. Jan 14 13:35:26.606108 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 13:35:26.906257 systemd[1]: Started sshd@1-10.230.49.6:22-68.220.241.50:54550.service - OpenSSH per-connection server daemon (68.220.241.50:54550). Jan 14 13:35:27.099557 kubelet[1773]: E0114 13:35:27.099476 1773 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:35:27.102626 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:35:27.102928 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:35:27.103597 systemd[1]: kubelet.service: Consumed 1.048s CPU time, 264M memory peak. Jan 14 13:35:27.414529 sshd[1787]: Accepted publickey for core from 68.220.241.50 port 54550 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:35:27.416310 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:35:27.423960 systemd-logind[1615]: New session 3 of user core. Jan 14 13:35:27.432169 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 13:35:27.689902 sshd[1793]: Connection closed by 68.220.241.50 port 54550 Jan 14 13:35:27.689534 sshd-session[1787]: pam_unix(sshd:session): session closed for user core Jan 14 13:35:27.696098 systemd-logind[1615]: Session 3 logged out. Waiting for processes to exit. Jan 14 13:35:27.696719 systemd[1]: sshd@1-10.230.49.6:22-68.220.241.50:54550.service: Deactivated successfully. Jan 14 13:35:27.699634 systemd[1]: session-3.scope: Deactivated successfully. Jan 14 13:35:27.701676 systemd-logind[1615]: Removed session 3. Jan 14 13:35:27.795096 systemd[1]: Started sshd@2-10.230.49.6:22-68.220.241.50:54552.service - OpenSSH per-connection server daemon (68.220.241.50:54552). Jan 14 13:35:28.017801 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 13:35:28.163786 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 13:35:28.297227 sshd[1799]: Accepted publickey for core from 68.220.241.50 port 54552 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:35:28.298925 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:35:28.306197 systemd-logind[1615]: New session 4 of user core. Jan 14 13:35:28.319046 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 13:35:28.320877 systemd-timesyncd[1524]: Network configuration changed, trying to establish connection. 
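The kubelet failure above is the expected first-boot state: the unit starts before /var/lib/kubelet/config.yaml exists, exits, and systemd keeps restarting it until kubeadm (or another provisioner) writes the file. Purely as a sketch of a hand-written minimal config, assuming containerd as the runtime; on a kubeadm cluster the real file comes from kubeadm init or kubeadm join, not from this:

  cat <<'EOF' > /var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd    # matches SystemdCgroup=true in the containerd CRI config above
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  EOF
  systemctl restart kubelet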
Jan 14 13:35:28.571919 sshd[1805]: Connection closed by 68.220.241.50 port 54552 Jan 14 13:35:28.572972 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Jan 14 13:35:28.579605 systemd-logind[1615]: Session 4 logged out. Waiting for processes to exit. Jan 14 13:35:28.581178 systemd[1]: sshd@2-10.230.49.6:22-68.220.241.50:54552.service: Deactivated successfully. Jan 14 13:35:28.584189 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 13:35:28.587248 systemd-logind[1615]: Removed session 4. Jan 14 13:35:30.873158 login[1753]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:35:30.881100 systemd-logind[1615]: New session 5 of user core. Jan 14 13:35:30.893098 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 13:35:31.207379 login[1754]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:35:31.215509 systemd-logind[1615]: New session 6 of user core. Jan 14 13:35:31.231089 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 13:35:32.038809 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 13:35:32.052424 coreos-metadata[1602]: Jan 14 13:35:32.052 WARN failed to locate config-drive, using the metadata service API instead Jan 14 13:35:32.092389 coreos-metadata[1602]: Jan 14 13:35:32.092 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 14 13:35:32.101316 coreos-metadata[1602]: Jan 14 13:35:32.101 INFO Fetch failed with 404: resource not found Jan 14 13:35:32.101316 coreos-metadata[1602]: Jan 14 13:35:32.101 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 14 13:35:32.101885 coreos-metadata[1602]: Jan 14 13:35:32.101 INFO Fetch successful Jan 14 13:35:32.102070 coreos-metadata[1602]: Jan 14 13:35:32.102 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 14 13:35:32.115829 coreos-metadata[1602]: Jan 14 13:35:32.115 INFO Fetch successful Jan 14 13:35:32.116033 coreos-metadata[1602]: Jan 14 13:35:32.116 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 14 13:35:32.131273 coreos-metadata[1602]: Jan 14 13:35:32.131 INFO Fetch successful Jan 14 13:35:32.131273 coreos-metadata[1602]: Jan 14 13:35:32.131 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 14 13:35:32.147218 coreos-metadata[1602]: Jan 14 13:35:32.147 INFO Fetch successful Jan 14 13:35:32.147396 coreos-metadata[1602]: Jan 14 13:35:32.147 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 14 13:35:32.166703 coreos-metadata[1602]: Jan 14 13:35:32.166 INFO Fetch successful Jan 14 13:35:32.176784 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 13:35:32.194000 coreos-metadata[1707]: Jan 14 13:35:32.193 WARN failed to locate config-drive, using the metadata service API instead Jan 14 13:35:32.199455 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 13:35:32.200672 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
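coreos-metadata could not find a config-drive (the repeated "Can't lookup blockdev" kernel lines) and fell back to the metadata service API. The same endpoints can be queried by hand when debugging; these URLs are taken directly from the log above:

  curl -s http://169.254.169.254/latest/meta-data/hostname
  curl -s http://169.254.169.254/latest/meta-data/instance-id
  curl -s http://169.254.169.254/latest/meta-data/public-ipv4
  curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key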
Jan 14 13:35:32.216875 coreos-metadata[1707]: Jan 14 13:35:32.216 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 14 13:35:32.242619 coreos-metadata[1707]: Jan 14 13:35:32.242 INFO Fetch successful Jan 14 13:35:32.243079 coreos-metadata[1707]: Jan 14 13:35:32.242 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 14 13:35:32.270663 coreos-metadata[1707]: Jan 14 13:35:32.270 INFO Fetch successful Jan 14 13:35:32.272682 unknown[1707]: wrote ssh authorized keys file for user: core Jan 14 13:35:32.294985 update-ssh-keys[1847]: Updated "/home/core/.ssh/authorized_keys" Jan 14 13:35:32.296667 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 14 13:35:32.301420 systemd[1]: Finished sshkeys.service. Jan 14 13:35:32.303375 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 13:35:32.303850 systemd[1]: Startup finished in 3.407s (kernel) + 13.391s (initrd) + 12.176s (userspace) = 28.975s. Jan 14 13:35:37.229531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 13:35:37.232208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:35:37.465647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:35:37.479397 (kubelet)[1858]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:35:37.548048 kubelet[1858]: E0114 13:35:37.547838 1858 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:35:37.552642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:35:37.552902 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:35:37.553828 systemd[1]: kubelet.service: Consumed 242ms CPU time, 109M memory peak. Jan 14 13:35:38.678382 systemd[1]: Started sshd@3-10.230.49.6:22-68.220.241.50:41594.service - OpenSSH per-connection server daemon (68.220.241.50:41594). Jan 14 13:35:39.186040 sshd[1866]: Accepted publickey for core from 68.220.241.50 port 41594 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:35:39.187734 sshd-session[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:35:39.194846 systemd-logind[1615]: New session 7 of user core. Jan 14 13:35:39.202120 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 13:35:39.460486 sshd[1870]: Connection closed by 68.220.241.50 port 41594 Jan 14 13:35:39.461271 sshd-session[1866]: pam_unix(sshd:session): session closed for user core Jan 14 13:35:39.467570 systemd[1]: sshd@3-10.230.49.6:22-68.220.241.50:41594.service: Deactivated successfully. Jan 14 13:35:39.470526 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 13:35:39.472332 systemd-logind[1615]: Session 7 logged out. Waiting for processes to exit. Jan 14 13:35:39.475096 systemd-logind[1615]: Removed session 7. Jan 14 13:35:39.562264 systemd[1]: Started sshd@4-10.230.49.6:22-68.220.241.50:41602.service - OpenSSH per-connection server daemon (68.220.241.50:41602). 
Jan 14 13:35:40.062752 sshd[1876]: Accepted publickey for core from 68.220.241.50 port 41602 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:35:40.064718 sshd-session[1876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:35:40.072444 systemd-logind[1615]: New session 8 of user core. Jan 14 13:35:40.082102 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 13:35:40.333131 sshd[1880]: Connection closed by 68.220.241.50 port 41602 Jan 14 13:35:40.334057 sshd-session[1876]: pam_unix(sshd:session): session closed for user core Jan 14 13:35:40.340657 systemd[1]: sshd@4-10.230.49.6:22-68.220.241.50:41602.service: Deactivated successfully. Jan 14 13:35:40.344011 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 13:35:40.346427 systemd-logind[1615]: Session 8 logged out. Waiting for processes to exit. Jan 14 13:35:40.347890 systemd-logind[1615]: Removed session 8. Jan 14 13:35:40.439128 systemd[1]: Started sshd@5-10.230.49.6:22-68.220.241.50:41604.service - OpenSSH per-connection server daemon (68.220.241.50:41604). Jan 14 13:35:40.979785 sshd[1886]: Accepted publickey for core from 68.220.241.50 port 41604 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:35:40.981734 sshd-session[1886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:35:40.989667 systemd-logind[1615]: New session 9 of user core. Jan 14 13:35:41.002138 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 13:35:41.256017 sshd[1890]: Connection closed by 68.220.241.50 port 41604 Jan 14 13:35:41.256939 sshd-session[1886]: pam_unix(sshd:session): session closed for user core Jan 14 13:35:41.263093 systemd[1]: sshd@5-10.230.49.6:22-68.220.241.50:41604.service: Deactivated successfully. Jan 14 13:35:41.265657 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 13:35:41.267142 systemd-logind[1615]: Session 9 logged out. Waiting for processes to exit. Jan 14 13:35:41.269591 systemd-logind[1615]: Removed session 9. Jan 14 13:35:41.367399 systemd[1]: Started sshd@6-10.230.49.6:22-68.220.241.50:41620.service - OpenSSH per-connection server daemon (68.220.241.50:41620). Jan 14 13:35:41.896786 sshd[1896]: Accepted publickey for core from 68.220.241.50 port 41620 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:35:41.898394 sshd-session[1896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:35:41.907190 systemd-logind[1615]: New session 10 of user core. Jan 14 13:35:41.913975 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 14 13:35:42.103935 sudo[1901]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 13:35:42.104508 sudo[1901]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:35:42.124706 sudo[1901]: pam_unix(sudo:session): session closed for user root Jan 14 13:35:42.217495 sshd[1900]: Connection closed by 68.220.241.50 port 41620 Jan 14 13:35:42.218729 sshd-session[1896]: pam_unix(sshd:session): session closed for user core Jan 14 13:35:42.224868 systemd[1]: sshd@6-10.230.49.6:22-68.220.241.50:41620.service: Deactivated successfully. Jan 14 13:35:42.227291 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 13:35:42.228461 systemd-logind[1615]: Session 10 logged out. Waiting for processes to exit. Jan 14 13:35:42.230952 systemd-logind[1615]: Removed session 10. 
Jan 14 13:35:42.325917 systemd[1]: Started sshd@7-10.230.49.6:22-68.220.241.50:35936.service - OpenSSH per-connection server daemon (68.220.241.50:35936). Jan 14 13:35:42.847814 sshd[1908]: Accepted publickey for core from 68.220.241.50 port 35936 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:35:42.849428 sshd-session[1908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:35:42.856823 systemd-logind[1615]: New session 11 of user core. Jan 14 13:35:42.864003 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 14 13:35:43.041728 sudo[1914]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 13:35:43.042310 sudo[1914]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:35:43.045428 sudo[1914]: pam_unix(sudo:session): session closed for user root Jan 14 13:35:43.054633 sudo[1913]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 13:35:43.055110 sudo[1913]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:35:43.069135 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 13:35:43.135242 kernel: kauditd_printk_skb: 70 callbacks suppressed Jan 14 13:35:43.135395 kernel: audit: type=1305 audit(1768397743.130:226): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 13:35:43.130000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 13:35:43.136078 kernel: audit: type=1300 audit(1768397743.130:226): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd31b23b90 a2=420 a3=0 items=0 ppid=1919 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:43.130000 audit[1938]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd31b23b90 a2=420 a3=0 items=0 ppid=1919 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:43.140995 augenrules[1938]: No rules Jan 14 13:35:43.141885 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 13:35:43.142331 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 13:35:43.130000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 13:35:43.143955 sudo[1913]: pam_unix(sudo:session): session closed for user root Jan 14 13:35:43.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:43.147766 kernel: audit: type=1327 audit(1768397743.130:226): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 13:35:43.147809 kernel: audit: type=1130 audit(1768397743.141:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:35:43.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:43.151784 kernel: audit: type=1131 audit(1768397743.141:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:43.143000 audit[1913]: USER_END pid=1913 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:35:43.155740 kernel: audit: type=1106 audit(1768397743.143:229): pid=1913 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:35:43.143000 audit[1913]: CRED_DISP pid=1913 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:35:43.159894 kernel: audit: type=1104 audit(1768397743.143:230): pid=1913 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:35:43.237051 sshd[1912]: Connection closed by 68.220.241.50 port 35936 Jan 14 13:35:43.237788 sshd-session[1908]: pam_unix(sshd:session): session closed for user core Jan 14 13:35:43.238000 audit[1908]: USER_END pid=1908 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:35:43.243103 systemd-logind[1615]: Session 11 logged out. Waiting for processes to exit. Jan 14 13:35:43.239000 audit[1908]: CRED_DISP pid=1908 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:35:43.246328 systemd[1]: sshd@7-10.230.49.6:22-68.220.241.50:35936.service: Deactivated successfully. Jan 14 13:35:43.249175 kernel: audit: type=1106 audit(1768397743.238:231): pid=1908 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:35:43.249239 kernel: audit: type=1104 audit(1768397743.239:232): pid=1908 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:35:43.250144 systemd[1]: session-11.scope: Deactivated successfully. 
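The audit-rules exchange above (auditctl run by the service, then augenrules reporting "No rules") reflects the two sudo commands from session 11: the default rule files were removed and the service restarted with an empty rule set. As an illustration of the normal flow for adding a rule back, with a made-up rule and key name:

  echo '-w /etc/ssh/sshd_config -p wa -k sshd-config' > /etc/audit/rules.d/90-sshd.rules
  augenrules --load    # rebuild /etc/audit/audit.rules from rules.d and load it via auditctl
  auditctl -l          # list the rules now active in the kernel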
Jan 14 13:35:43.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.49.6:22-68.220.241.50:35936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:43.252141 kernel: audit: type=1131 audit(1768397743.246:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.49.6:22-68.220.241.50:35936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:43.253803 systemd-logind[1615]: Removed session 11. Jan 14 13:35:43.340575 systemd[1]: Started sshd@8-10.230.49.6:22-68.220.241.50:35950.service - OpenSSH per-connection server daemon (68.220.241.50:35950). Jan 14 13:35:43.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.49.6:22-68.220.241.50:35950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:43.846000 audit[1947]: USER_ACCT pid=1947 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:35:43.847864 sshd[1947]: Accepted publickey for core from 68.220.241.50 port 35950 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:35:43.847000 audit[1947]: CRED_ACQ pid=1947 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:35:43.847000 audit[1947]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf57317a0 a2=3 a3=0 items=0 ppid=1 pid=1947 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:43.847000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:35:43.849414 sshd-session[1947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:35:43.856263 systemd-logind[1615]: New session 12 of user core. Jan 14 13:35:43.866990 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 14 13:35:43.871000 audit[1947]: USER_START pid=1947 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:35:43.873000 audit[1951]: CRED_ACQ pid=1951 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:35:44.034000 audit[1952]: USER_ACCT pid=1952 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 13:35:44.035679 sudo[1952]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 13:35:44.035000 audit[1952]: CRED_REFR pid=1952 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:35:44.035000 audit[1952]: USER_START pid=1952 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:35:44.036192 sudo[1952]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 13:35:44.563282 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 13:35:44.585223 (dockerd)[1972]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 13:35:44.984086 dockerd[1972]: time="2026-01-14T13:35:44.984013163Z" level=info msg="Starting up" Jan 14 13:35:44.986583 dockerd[1972]: time="2026-01-14T13:35:44.986555502Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 13:35:45.004173 dockerd[1972]: time="2026-01-14T13:35:45.004113935Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 13:35:45.040690 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4022512237-merged.mount: Deactivated successfully. Jan 14 13:35:45.052363 systemd[1]: var-lib-docker-metacopy\x2dcheck2826470857-merged.mount: Deactivated successfully. Jan 14 13:35:45.076311 dockerd[1972]: time="2026-01-14T13:35:45.076251350Z" level=info msg="Loading containers: start." 
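Once dockerd starts loading containers it sets up its netfilter chains; the NETFILTER_CFG audit records that follow carry the hex-encoded argv of each call in their proctitle fields. Decoded, the first few amount to ordinary iptables invocations:

  /usr/bin/iptables --wait -t nat    -N DOCKER
  /usr/bin/iptables --wait -t filter -N DOCKER
  /usr/bin/iptables --wait -t filter -N DOCKER-FORWARD
  /usr/bin/iptables --wait -t filter -N DOCKER-BRIDGE
  /usr/bin/iptables --wait -t filter -N DOCKER-CT
  /usr/bin/iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-1
  /usr/bin/iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-2
  /usr/bin/iptables --wait -t nat    -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
  /usr/bin/iptables --wait -I FORWARD -j DOCKER-FORWARD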
Jan 14 13:35:45.091839 kernel: Initializing XFRM netlink socket Jan 14 13:35:45.176000 audit[2023]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.176000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7fffff5bbac0 a2=0 a3=0 items=0 ppid=1972 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.176000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 13:35:45.179000 audit[2025]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.179000 audit[2025]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fffdfe7e5c0 a2=0 a3=0 items=0 ppid=1972 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.179000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 13:35:45.182000 audit[2027]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.182000 audit[2027]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd23fc4490 a2=0 a3=0 items=0 ppid=1972 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.182000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 13:35:45.186000 audit[2029]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.186000 audit[2029]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee30f8c70 a2=0 a3=0 items=0 ppid=1972 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.186000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 13:35:45.190000 audit[2031]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.190000 audit[2031]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc8efbd240 a2=0 a3=0 items=0 ppid=1972 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.190000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 13:35:45.194000 audit[2033]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.194000 audit[2033]: SYSCALL arch=c000003e syscall=46 
success=yes exit=112 a0=3 a1=7fff23a40240 a2=0 a3=0 items=0 ppid=1972 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.194000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 13:35:45.197000 audit[2035]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.197000 audit[2035]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd0bac7340 a2=0 a3=0 items=0 ppid=1972 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.197000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 13:35:45.200000 audit[2037]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.200000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe513d5f50 a2=0 a3=0 items=0 ppid=1972 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.200000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 13:35:45.247000 audit[2040]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.247000 audit[2040]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffd2dfe6c10 a2=0 a3=0 items=0 ppid=1972 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.247000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 13:35:45.251000 audit[2042]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.251000 audit[2042]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffff29840b0 a2=0 a3=0 items=0 ppid=1972 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.251000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 13:35:45.254000 audit[2044]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.254000 audit[2044]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffebfe13250 a2=0 
a3=0 items=0 ppid=1972 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.254000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 13:35:45.258000 audit[2046]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.258000 audit[2046]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd6794d1c0 a2=0 a3=0 items=0 ppid=1972 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.258000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 13:35:45.261000 audit[2048]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.261000 audit[2048]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd96db6170 a2=0 a3=0 items=0 ppid=1972 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.261000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 13:35:45.315000 audit[2078]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:35:45.315000 audit[2078]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe105f1ae0 a2=0 a3=0 items=0 ppid=1972 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.315000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 13:35:45.318000 audit[2080]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:35:45.318000 audit[2080]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffddc5e9f20 a2=0 a3=0 items=0 ppid=1972 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.318000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 13:35:45.321000 audit[2082]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:35:45.321000 audit[2082]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc76a05d90 a2=0 a3=0 items=0 ppid=1972 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 13:35:45.321000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 13:35:45.324000 audit[2084]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:35:45.324000 audit[2084]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd8d2b0f20 a2=0 a3=0 items=0 ppid=1972 pid=2084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.324000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 13:35:45.327000 audit[2086]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:35:45.327000 audit[2086]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffccfeb9ca0 a2=0 a3=0 items=0 ppid=1972 pid=2086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.327000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 13:35:45.331000 audit[2088]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2088 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:35:45.331000 audit[2088]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff1624a630 a2=0 a3=0 items=0 ppid=1972 pid=2088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.331000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 13:35:45.334000 audit[2090]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:35:45.334000 audit[2090]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff909af630 a2=0 a3=0 items=0 ppid=1972 pid=2090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.334000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 13:35:45.337000 audit[2092]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:35:45.337000 audit[2092]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffd709410c0 a2=0 a3=0 items=0 ppid=1972 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.337000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 13:35:45.341000 audit[2094]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:35:45.341000 audit[2094]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffe0de60db0 a2=0 a3=0 items=0 ppid=1972 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.341000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 13:35:45.344000 audit[2096]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:35:45.344000 audit[2096]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffc7f43e30 a2=0 a3=0 items=0 ppid=1972 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.344000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 13:35:45.348000 audit[2098]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:35:45.348000 audit[2098]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffcbac4f050 a2=0 a3=0 items=0 ppid=1972 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.348000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 13:35:45.351000 audit[2100]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:35:45.351000 audit[2100]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffc58e1b870 a2=0 a3=0 items=0 ppid=1972 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.351000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 13:35:45.355000 audit[2102]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:35:45.355000 audit[2102]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc9467b5d0 a2=0 a3=0 items=0 ppid=1972 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.355000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 13:35:45.363000 audit[2107]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2107 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.363000 audit[2107]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc5bf5e300 a2=0 a3=0 items=0 ppid=1972 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.363000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 13:35:45.366000 audit[2109]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2109 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.366000 audit[2109]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff7a401770 a2=0 a3=0 items=0 ppid=1972 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.366000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 13:35:45.369000 audit[2111]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2111 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.369000 audit[2111]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff2cafcf60 a2=0 a3=0 items=0 ppid=1972 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.369000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 13:35:45.372000 audit[2113]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:35:45.372000 audit[2113]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd298d5ff0 a2=0 a3=0 items=0 ppid=1972 pid=2113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.372000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 13:35:45.375000 audit[2115]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:35:45.375000 audit[2115]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffba180950 a2=0 a3=0 items=0 ppid=1972 pid=2115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.375000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 13:35:45.378000 audit[2117]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2117 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:35:45.378000 audit[2117]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd5a6a9a20 a2=0 a3=0 items=0 ppid=1972 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.378000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 13:35:45.388066 systemd-timesyncd[1524]: Network configuration changed, trying to establish connection. Jan 14 13:35:45.415000 audit[2121]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.415000 audit[2121]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffd852bfb30 a2=0 a3=0 items=0 ppid=1972 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.415000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 13:35:45.419000 audit[2123]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.419000 audit[2123]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd5efde4c0 a2=0 a3=0 items=0 ppid=1972 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.419000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 13:35:45.432000 audit[2131]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.432000 audit[2131]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffef4241390 a2=0 a3=0 items=0 ppid=1972 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.432000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 13:35:45.445000 audit[2137]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.445000 audit[2137]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe4195a500 a2=0 a3=0 items=0 ppid=1972 pid=2137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.445000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 13:35:45.449000 audit[2139]: NETFILTER_CFG table=filter:38 
family=2 entries=1 op=nft_register_rule pid=2139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.449000 audit[2139]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffe4e421e70 a2=0 a3=0 items=0 ppid=1972 pid=2139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.449000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 13:35:45.452000 audit[2141]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.452000 audit[2141]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffeed9a8b00 a2=0 a3=0 items=0 ppid=1972 pid=2141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.452000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 13:35:45.455000 audit[2143]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.455000 audit[2143]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffa15d4600 a2=0 a3=0 items=0 ppid=1972 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.455000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 13:35:45.459000 audit[2145]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:35:45.459000 audit[2145]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffeff172860 a2=0 a3=0 items=0 ppid=1972 pid=2145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:35:45.459000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 13:35:45.460912 systemd-networkd[1555]: docker0: Link UP Jan 14 13:35:45.464742 dockerd[1972]: time="2026-01-14T13:35:45.464424669Z" level=info msg="Loading containers: done." 
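The audit triplets above (NETFILTER_CFG, SYSCALL, PROCTITLE) record dockerd driving /usr/bin/xtables-nft-multi to create its standard chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) for IPv4 (family=2) and IPv6 (family=10); syscall=46 on arch=c000003e (x86_64) is sendmsg, i.e. the netlink message carrying each nft ruleset change. The PROCTITLE field is the audited process's argv, hex-encoded with NUL separators between arguments. A minimal decoding sketch, using only the Python standard library and one hex string copied verbatim from the records above:

    # Decode an audit PROCTITLE value back into the original command line.
    # argv elements are separated by NUL bytes in the hex-encoded payload.
    proctitle = (
        "2F7573722F62696E2F69707461626C6573002D2D77616974"
        "002D74006E6174002D4E00444F434B4552"
    )
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # prints: /usr/bin/iptables --wait -t nat -N DOCKER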
Jan 14 13:35:45.490563 dockerd[1972]: time="2026-01-14T13:35:45.490488808Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 13:35:45.490836 dockerd[1972]: time="2026-01-14T13:35:45.490603619Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 13:35:45.490836 dockerd[1972]: time="2026-01-14T13:35:45.490776916Z" level=info msg="Initializing buildkit" Jan 14 13:35:45.519827 dockerd[1972]: time="2026-01-14T13:35:45.519635143Z" level=info msg="Completed buildkit initialization" Jan 14 13:35:45.532019 dockerd[1972]: time="2026-01-14T13:35:45.530629386Z" level=info msg="Daemon has completed initialization" Jan 14 13:35:45.532019 dockerd[1972]: time="2026-01-14T13:35:45.530773441Z" level=info msg="API listen on /run/docker.sock" Jan 14 13:35:45.531601 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 13:35:45.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:46.034618 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1021970068-merged.mount: Deactivated successfully. Jan 14 13:35:46.699872 containerd[1649]: time="2026-01-14T13:35:46.699645381Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 14 13:35:47.727550 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 13:35:47.731152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:35:48.375978 systemd-timesyncd[1524]: Contacted time server [2a01:7e00::f03c:91ff:fe89:410f]:123 (2.flatcar.pool.ntp.org). Jan 14 13:35:48.376062 systemd-timesyncd[1524]: Initial clock synchronization to Wed 2026-01-14 13:35:48.374806 UTC. Jan 14 13:35:48.376152 systemd-resolved[1307]: Clock change detected. Flushing caches. Jan 14 13:35:48.677839 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:35:48.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:48.692176 (kubelet)[2199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:35:48.716485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount855614531.mount: Deactivated successfully. Jan 14 13:35:48.771539 kubelet[2199]: E0114 13:35:48.771447 2199 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:35:48.775807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:35:48.776051 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
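The kubelet failures in this part of the log (restart counters 2, 3 and 4) all share one cause: /var/lib/kubelet/config.yaml does not exist yet. That file is typically written by kubeadm during init/join, so until it appears systemd keeps crash-looping the unit, with the restarts here landing roughly ten seconds apart; after the reload at 13:36:12 further down, the kubelet finally starts and stays up. A small sketch that summarises the loop from exported journal text; the sample lines are copied verbatim from this log, and nothing beyond the Python standard library is assumed:

    import re

    journal = """\
    Jan 14 13:35:47.727550 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
    Jan 14 13:35:48.775807 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
    Jan 14 13:35:58.867165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
    Jan 14 13:35:59.279171 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
    Jan 14 13:36:09.367289 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
    Jan 14 13:36:09.733325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
    """

    counters = re.findall(r"restart counter is at (\d+)", journal)
    failures = len(re.findall(r"status=1/FAILURE", journal))
    print(f"restart attempts seen: {counters}, failed starts: {failures}")
    # prints: restart attempts seen: ['2', '3', '4'], failed starts: 3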
Jan 14 13:35:48.784490 kernel: kauditd_printk_skb: 133 callbacks suppressed Jan 14 13:35:48.784780 kernel: audit: type=1131 audit(1768397748.775:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 13:35:48.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 13:35:48.777122 systemd[1]: kubelet.service: Consumed 220ms CPU time, 110M memory peak. Jan 14 13:35:50.478606 containerd[1649]: time="2026-01-14T13:35:50.477542255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:35:50.479457 containerd[1649]: time="2026-01-14T13:35:50.479418846Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=28337801" Jan 14 13:35:50.480474 containerd[1649]: time="2026-01-14T13:35:50.479790568Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:35:50.484763 containerd[1649]: time="2026-01-14T13:35:50.484080250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:35:50.485724 containerd[1649]: time="2026-01-14T13:35:50.485688491Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 3.148129184s" Jan 14 13:35:50.485818 containerd[1649]: time="2026-01-14T13:35:50.485762008Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 14 13:35:50.487958 containerd[1649]: time="2026-01-14T13:35:50.487862674Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 14 13:35:53.670172 containerd[1649]: time="2026-01-14T13:35:53.670007622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:35:53.672351 containerd[1649]: time="2026-01-14T13:35:53.672312288Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 14 13:35:53.673028 containerd[1649]: time="2026-01-14T13:35:53.672963638Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:35:53.677342 containerd[1649]: time="2026-01-14T13:35:53.677253303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:35:53.680010 containerd[1649]: time="2026-01-14T13:35:53.679887858Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 3.191794115s" Jan 14 13:35:53.680010 containerd[1649]: time="2026-01-14T13:35:53.679944817Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 14 13:35:53.681030 containerd[1649]: time="2026-01-14T13:35:53.680906789Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 14 13:35:55.556003 containerd[1649]: time="2026-01-14T13:35:55.555944551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:35:55.557321 containerd[1649]: time="2026-01-14T13:35:55.557202256Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 14 13:35:55.558005 containerd[1649]: time="2026-01-14T13:35:55.557969608Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:35:55.561313 containerd[1649]: time="2026-01-14T13:35:55.561244535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:35:55.562869 containerd[1649]: time="2026-01-14T13:35:55.562729741Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.881698974s" Jan 14 13:35:55.562869 containerd[1649]: time="2026-01-14T13:35:55.562766026Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 14 13:35:55.563832 containerd[1649]: time="2026-01-14T13:35:55.563623087Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 14 13:35:57.146064 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 14 13:35:57.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:57.153602 kernel: audit: type=1131 audit(1768397757.145:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:57.164000 audit: BPF prog-id=61 op=UNLOAD Jan 14 13:35:57.167597 kernel: audit: type=1334 audit(1768397757.164:287): prog-id=61 op=UNLOAD Jan 14 13:35:57.558651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount484003595.mount: Deactivated successfully. 
Jan 14 13:35:58.539581 containerd[1649]: time="2026-01-14T13:35:58.539503009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:35:58.540941 containerd[1649]: time="2026-01-14T13:35:58.540705915Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 14 13:35:58.541698 containerd[1649]: time="2026-01-14T13:35:58.541637574Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:35:58.545733 containerd[1649]: time="2026-01-14T13:35:58.544855435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:35:58.545733 containerd[1649]: time="2026-01-14T13:35:58.545587664Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.981911497s" Jan 14 13:35:58.545733 containerd[1649]: time="2026-01-14T13:35:58.545621064Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 14 13:35:58.546744 containerd[1649]: time="2026-01-14T13:35:58.546529422Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 14 13:35:58.867165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 13:35:58.870926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:35:59.183707 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:35:59.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:59.191596 kernel: audit: type=1130 audit(1768397759.182:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:35:59.199965 (kubelet)[2286]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:35:59.275634 kubelet[2286]: E0114 13:35:59.274935 2286 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:35:59.279171 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:35:59.279681 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:35:59.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 13:35:59.280685 systemd[1]: kubelet.service: Consumed 244ms CPU time, 109M memory peak. Jan 14 13:35:59.284588 kernel: audit: type=1131 audit(1768397759.279:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 13:35:59.529164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3491483492.mount: Deactivated successfully. Jan 14 13:36:00.893551 containerd[1649]: time="2026-01-14T13:36:00.893469645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:00.895604 containerd[1649]: time="2026-01-14T13:36:00.895543494Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17569900" Jan 14 13:36:00.896771 containerd[1649]: time="2026-01-14T13:36:00.896716253Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:00.900991 containerd[1649]: time="2026-01-14T13:36:00.900926704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:00.903535 containerd[1649]: time="2026-01-14T13:36:00.903471812Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.356557829s" Jan 14 13:36:00.903535 containerd[1649]: time="2026-01-14T13:36:00.903519417Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 14 13:36:00.904785 containerd[1649]: time="2026-01-14T13:36:00.904759226Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 13:36:01.902346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2470932553.mount: Deactivated successfully. 
Jan 14 13:36:01.910385 containerd[1649]: time="2026-01-14T13:36:01.909271581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:36:01.910385 containerd[1649]: time="2026-01-14T13:36:01.910328385Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 13:36:01.911181 containerd[1649]: time="2026-01-14T13:36:01.911147387Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:36:01.913085 containerd[1649]: time="2026-01-14T13:36:01.913053586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 13:36:01.914120 containerd[1649]: time="2026-01-14T13:36:01.914083685Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.009289045s" Jan 14 13:36:01.914205 containerd[1649]: time="2026-01-14T13:36:01.914124925Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 14 13:36:01.914889 containerd[1649]: time="2026-01-14T13:36:01.914823074Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 14 13:36:03.018320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197000061.mount: Deactivated successfully. 
Jan 14 13:36:07.537600 containerd[1649]: time="2026-01-14T13:36:07.536420823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:07.538135 containerd[1649]: time="2026-01-14T13:36:07.537713478Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55728979" Jan 14 13:36:07.539353 containerd[1649]: time="2026-01-14T13:36:07.538621007Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:07.542059 containerd[1649]: time="2026-01-14T13:36:07.541988353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:07.543666 containerd[1649]: time="2026-01-14T13:36:07.543466834Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.628572879s" Jan 14 13:36:07.543666 containerd[1649]: time="2026-01-14T13:36:07.543517640Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 14 13:36:09.367289 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 13:36:09.371798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:36:09.649892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:36:09.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:36:09.656585 kernel: audit: type=1130 audit(1768397769.648:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:36:09.665143 (kubelet)[2432]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 13:36:09.730505 kubelet[2432]: E0114 13:36:09.730398 2432 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 13:36:09.733325 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 13:36:09.733636 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 13:36:09.734259 systemd[1]: kubelet.service: Consumed 207ms CPU time, 110.3M memory peak. Jan 14 13:36:09.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 13:36:09.738589 kernel: audit: type=1131 audit(1768397769.732:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 13:36:09.973160 update_engine[1616]: I20260114 13:36:09.971561 1616 update_attempter.cc:509] Updating boot flags... Jan 14 13:36:12.155709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:36:12.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:36:12.157888 systemd[1]: kubelet.service: Consumed 207ms CPU time, 110.3M memory peak. Jan 14 13:36:12.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:36:12.163252 kernel: audit: type=1130 audit(1768397772.154:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:36:12.163373 kernel: audit: type=1131 audit(1768397772.154:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:36:12.167020 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:36:12.204954 systemd[1]: Reload requested from client PID 2462 ('systemctl') (unit session-12.scope)... Jan 14 13:36:12.205218 systemd[1]: Reloading... Jan 14 13:36:12.341599 zram_generator::config[2510]: No configuration found. Jan 14 13:36:12.702024 systemd[1]: Reloading finished in 495 ms. 
Jan 14 13:36:12.732000 audit: BPF prog-id=65 op=LOAD Jan 14 13:36:12.740042 kernel: audit: type=1334 audit(1768397772.732:294): prog-id=65 op=LOAD Jan 14 13:36:12.740128 kernel: audit: type=1334 audit(1768397772.734:295): prog-id=53 op=UNLOAD Jan 14 13:36:12.734000 audit: BPF prog-id=53 op=UNLOAD Jan 14 13:36:12.744000 audit: BPF prog-id=66 op=LOAD Jan 14 13:36:12.744000 audit: BPF prog-id=58 op=UNLOAD Jan 14 13:36:12.750188 kernel: audit: type=1334 audit(1768397772.744:296): prog-id=66 op=LOAD Jan 14 13:36:12.750253 kernel: audit: type=1334 audit(1768397772.744:297): prog-id=58 op=UNLOAD Jan 14 13:36:12.750632 kernel: audit: type=1334 audit(1768397772.744:298): prog-id=67 op=LOAD Jan 14 13:36:12.744000 audit: BPF prog-id=67 op=LOAD Jan 14 13:36:12.744000 audit: BPF prog-id=68 op=LOAD Jan 14 13:36:12.753441 kernel: audit: type=1334 audit(1768397772.744:299): prog-id=68 op=LOAD Jan 14 13:36:12.744000 audit: BPF prog-id=59 op=UNLOAD Jan 14 13:36:12.744000 audit: BPF prog-id=60 op=UNLOAD Jan 14 13:36:12.747000 audit: BPF prog-id=69 op=LOAD Jan 14 13:36:12.747000 audit: BPF prog-id=41 op=UNLOAD Jan 14 13:36:12.747000 audit: BPF prog-id=70 op=LOAD Jan 14 13:36:12.747000 audit: BPF prog-id=71 op=LOAD Jan 14 13:36:12.747000 audit: BPF prog-id=42 op=UNLOAD Jan 14 13:36:12.747000 audit: BPF prog-id=43 op=UNLOAD Jan 14 13:36:12.748000 audit: BPF prog-id=72 op=LOAD Jan 14 13:36:12.748000 audit: BPF prog-id=56 op=UNLOAD Jan 14 13:36:12.749000 audit: BPF prog-id=73 op=LOAD Jan 14 13:36:12.756000 audit: BPF prog-id=64 op=UNLOAD Jan 14 13:36:12.756000 audit: BPF prog-id=74 op=LOAD Jan 14 13:36:12.756000 audit: BPF prog-id=47 op=UNLOAD Jan 14 13:36:12.757000 audit: BPF prog-id=75 op=LOAD Jan 14 13:36:12.757000 audit: BPF prog-id=76 op=LOAD Jan 14 13:36:12.757000 audit: BPF prog-id=48 op=UNLOAD Jan 14 13:36:12.757000 audit: BPF prog-id=49 op=UNLOAD Jan 14 13:36:12.760000 audit: BPF prog-id=77 op=LOAD Jan 14 13:36:12.760000 audit: BPF prog-id=50 op=UNLOAD Jan 14 13:36:12.760000 audit: BPF prog-id=78 op=LOAD Jan 14 13:36:12.760000 audit: BPF prog-id=79 op=LOAD Jan 14 13:36:12.760000 audit: BPF prog-id=51 op=UNLOAD Jan 14 13:36:12.760000 audit: BPF prog-id=52 op=UNLOAD Jan 14 13:36:12.763000 audit: BPF prog-id=80 op=LOAD Jan 14 13:36:12.763000 audit: BPF prog-id=57 op=UNLOAD Jan 14 13:36:12.763000 audit: BPF prog-id=81 op=LOAD Jan 14 13:36:12.763000 audit: BPF prog-id=82 op=LOAD Jan 14 13:36:12.763000 audit: BPF prog-id=54 op=UNLOAD Jan 14 13:36:12.763000 audit: BPF prog-id=55 op=UNLOAD Jan 14 13:36:12.765000 audit: BPF prog-id=83 op=LOAD Jan 14 13:36:12.765000 audit: BPF prog-id=44 op=UNLOAD Jan 14 13:36:12.765000 audit: BPF prog-id=84 op=LOAD Jan 14 13:36:12.765000 audit: BPF prog-id=85 op=LOAD Jan 14 13:36:12.765000 audit: BPF prog-id=45 op=UNLOAD Jan 14 13:36:12.765000 audit: BPF prog-id=46 op=UNLOAD Jan 14 13:36:12.790233 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 13:36:12.790372 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 13:36:12.790821 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:36:12.790923 systemd[1]: kubelet.service: Consumed 145ms CPU time, 98.5M memory peak. Jan 14 13:36:12.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 13:36:12.793340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
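The run of "audit: BPF prog-id=... op=LOAD/UNLOAD" records above lines up with the systemd reload that just finished: on reload, systemd drops and re-creates the per-unit cgroup BPF programs it manages (device filters, IP accounting and similar), and each program load or unload is recorded as one of these audit events. A quick tally from exported journal text; the sample records are copied from the burst above with timestamps trimmed, and only the Python standard library is assumed:

    import re
    from collections import Counter

    records = """\
    audit: BPF prog-id=65 op=LOAD
    audit: BPF prog-id=53 op=UNLOAD
    audit: BPF prog-id=66 op=LOAD
    audit: BPF prog-id=58 op=UNLOAD
    audit: BPF prog-id=67 op=LOAD
    audit: BPF prog-id=68 op=LOAD
    audit: BPF prog-id=59 op=UNLOAD
    audit: BPF prog-id=60 op=UNLOAD
    """

    ops = Counter(re.findall(r"op=(LOAD|UNLOAD)", records))
    print(dict(ops))   # {'LOAD': 4, 'UNLOAD': 4}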
Jan 14 13:36:12.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:36:12.974617 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:36:12.991029 (kubelet)[2577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 13:36:13.087800 kubelet[2577]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:36:13.087800 kubelet[2577]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 13:36:13.087800 kubelet[2577]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:36:13.087800 kubelet[2577]: I0114 13:36:13.087181 2577 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 13:36:13.624339 kubelet[2577]: I0114 13:36:13.624262 2577 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 13:36:13.624339 kubelet[2577]: I0114 13:36:13.624303 2577 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 13:36:13.624763 kubelet[2577]: I0114 13:36:13.624697 2577 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 13:36:13.660720 kubelet[2577]: E0114 13:36:13.660646 2577 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.49.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.49.6:6443: connect: connection refused" logger="UnhandledError" Jan 14 13:36:13.662474 kubelet[2577]: I0114 13:36:13.661521 2577 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:36:13.682909 kubelet[2577]: I0114 13:36:13.682871 2577 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 13:36:13.696126 kubelet[2577]: I0114 13:36:13.696031 2577 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 13:36:13.702041 kubelet[2577]: I0114 13:36:13.701967 2577 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 13:36:13.702300 kubelet[2577]: I0114 13:36:13.702036 2577 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-414dr.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 13:36:13.704720 kubelet[2577]: I0114 13:36:13.704625 2577 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 13:36:13.704720 kubelet[2577]: I0114 13:36:13.704656 2577 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 13:36:13.705912 kubelet[2577]: I0114 13:36:13.705859 2577 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:36:13.709868 kubelet[2577]: I0114 13:36:13.709809 2577 kubelet.go:446] "Attempting to sync node with API server" Jan 14 13:36:13.709868 kubelet[2577]: I0114 13:36:13.709866 2577 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 13:36:13.711389 kubelet[2577]: I0114 13:36:13.711341 2577 kubelet.go:352] "Adding apiserver pod source" Jan 14 13:36:13.711389 kubelet[2577]: I0114 13:36:13.711386 2577 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 13:36:13.715707 kubelet[2577]: W0114 13:36:13.715047 2577 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.49.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-414dr.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.49.6:6443: connect: connection refused Jan 14 13:36:13.715707 kubelet[2577]: E0114 13:36:13.715140 2577 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.49.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-414dr.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.49.6:6443: connect: connection refused" logger="UnhandledError" Jan 14 13:36:13.715707 
kubelet[2577]: W0114 13:36:13.715658 2577 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.49.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.49.6:6443: connect: connection refused Jan 14 13:36:13.715997 kubelet[2577]: E0114 13:36:13.715966 2577 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.49.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.49.6:6443: connect: connection refused" logger="UnhandledError" Jan 14 13:36:13.717379 kubelet[2577]: I0114 13:36:13.717353 2577 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 13:36:13.720607 kubelet[2577]: I0114 13:36:13.720562 2577 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 13:36:13.720876 kubelet[2577]: W0114 13:36:13.720857 2577 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 13:36:13.723415 kubelet[2577]: I0114 13:36:13.723139 2577 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 13:36:13.723415 kubelet[2577]: I0114 13:36:13.723193 2577 server.go:1287] "Started kubelet" Jan 14 13:36:13.734188 kubelet[2577]: I0114 13:36:13.733892 2577 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 13:36:13.740013 kubelet[2577]: I0114 13:36:13.739912 2577 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 13:36:13.740668 kubelet[2577]: I0114 13:36:13.740645 2577 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 13:36:13.745243 kubelet[2577]: I0114 13:36:13.745204 2577 server.go:479] "Adding debug handlers to kubelet server" Jan 14 13:36:13.746529 kubelet[2577]: E0114 13:36:13.743767 2577 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.49.6:6443/api/v1/namespaces/default/events\": dial tcp 10.230.49.6:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-414dr.gb1.brightbox.com.188a9c67e45c5e43 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-414dr.gb1.brightbox.com,UID:srv-414dr.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-414dr.gb1.brightbox.com,},FirstTimestamp:2026-01-14 13:36:13.723164227 +0000 UTC m=+0.727475512,LastTimestamp:2026-01-14 13:36:13.723164227 +0000 UTC m=+0.727475512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-414dr.gb1.brightbox.com,}" Jan 14 13:36:13.748809 kubelet[2577]: I0114 13:36:13.748787 2577 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 13:36:13.750584 kubelet[2577]: I0114 13:36:13.750321 2577 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 13:36:13.753298 kubelet[2577]: I0114 13:36:13.753274 2577 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 13:36:13.753553 kubelet[2577]: I0114 
13:36:13.753531 2577 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 13:36:13.753765 kubelet[2577]: I0114 13:36:13.753747 2577 reconciler.go:26] "Reconciler: start to sync state" Jan 14 13:36:13.754383 kubelet[2577]: W0114 13:36:13.754329 2577 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.49.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.49.6:6443: connect: connection refused Jan 14 13:36:13.754525 kubelet[2577]: E0114 13:36:13.754498 2577 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.49.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.49.6:6443: connect: connection refused" logger="UnhandledError" Jan 14 13:36:13.754904 kubelet[2577]: E0114 13:36:13.754862 2577 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-414dr.gb1.brightbox.com\" not found" Jan 14 13:36:13.755118 kubelet[2577]: E0114 13:36:13.755087 2577 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.49.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-414dr.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.49.6:6443: connect: connection refused" interval="200ms" Jan 14 13:36:13.762417 kubelet[2577]: I0114 13:36:13.762391 2577 factory.go:221] Registration of the systemd container factory successfully Jan 14 13:36:13.762552 kubelet[2577]: I0114 13:36:13.762526 2577 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 13:36:13.763718 kubelet[2577]: E0114 13:36:13.763679 2577 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 13:36:13.767419 kubelet[2577]: I0114 13:36:13.767399 2577 factory.go:221] Registration of the containerd container factory successfully Jan 14 13:36:13.767000 audit[2589]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2589 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:13.767000 audit[2589]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd2b3bd140 a2=0 a3=0 items=0 ppid=2577 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:13.767000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 13:36:13.773000 audit[2592]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:13.773000 audit[2592]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc32d41810 a2=0 a3=0 items=0 ppid=2577 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:13.773000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 13:36:13.783000 audit[2594]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:13.783000 audit[2594]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcb537f210 a2=0 a3=0 items=0 ppid=2577 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:13.783000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 13:36:13.791000 audit[2596]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2596 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:13.791000 audit[2596]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe5ff50b60 a2=0 a3=0 items=0 ppid=2577 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:13.791000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 13:36:13.806000 audit[2601]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:13.806000 audit[2601]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff935bdfc0 a2=0 a3=0 items=0 ppid=2577 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:13.806000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 13:36:13.808449 kubelet[2577]: I0114 13:36:13.808423 2577 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 13:36:13.808590 kubelet[2577]: I0114 13:36:13.808553 2577 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 13:36:13.809015 kubelet[2577]: I0114 13:36:13.808700 2577 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:36:13.809521 kubelet[2577]: I0114 13:36:13.809480 2577 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 13:36:13.810745 kubelet[2577]: I0114 13:36:13.810725 2577 policy_none.go:49] "None policy: Start" Jan 14 13:36:13.810856 kubelet[2577]: I0114 13:36:13.810837 2577 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 13:36:13.810999 kubelet[2577]: I0114 13:36:13.810981 2577 state_mem.go:35] "Initializing new in-memory state store" Jan 14 13:36:13.812824 kubelet[2577]: I0114 13:36:13.812801 2577 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 13:36:13.812987 kubelet[2577]: I0114 13:36:13.812967 2577 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 13:36:13.813151 kubelet[2577]: I0114 13:36:13.813129 2577 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 13:36:13.810000 audit[2602]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2602 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:13.810000 audit[2602]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff8a768f70 a2=0 a3=0 items=0 ppid=2577 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:13.810000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 13:36:13.811000 audit[2603]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2603 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:13.811000 audit[2603]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcd78458f0 a2=0 a3=0 items=0 ppid=2577 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:13.811000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 13:36:13.814128 kubelet[2577]: I0114 13:36:13.813785 2577 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 13:36:13.814128 kubelet[2577]: E0114 13:36:13.813872 2577 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 13:36:13.815000 audit[2608]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:13.815000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffae5b1910 
a2=0 a3=0 items=0 ppid=2577 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:13.815000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 13:36:13.817000 audit[2607]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2607 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:13.817000 audit[2607]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffecae28140 a2=0 a3=0 items=0 ppid=2577 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:13.817000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 13:36:13.818000 audit[2609]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:13.818000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc898bde70 a2=0 a3=0 items=0 ppid=2577 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:13.818000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 13:36:13.822481 kubelet[2577]: W0114 13:36:13.822348 2577 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.49.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.49.6:6443: connect: connection refused Jan 14 13:36:13.822481 kubelet[2577]: E0114 13:36:13.822433 2577 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.49.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.49.6:6443: connect: connection refused" logger="UnhandledError" Jan 14 13:36:13.826613 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 14 13:36:13.827000 audit[2610]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2610 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:13.827000 audit[2610]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff440d5760 a2=0 a3=0 items=0 ppid=2577 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:13.827000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 13:36:13.830000 audit[2611]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2611 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:13.830000 audit[2611]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc67a17470 a2=0 a3=0 items=0 ppid=2577 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:13.830000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 13:36:13.854055 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 13:36:13.855365 kubelet[2577]: E0114 13:36:13.855321 2577 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-414dr.gb1.brightbox.com\" not found" Jan 14 13:36:13.860002 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 13:36:13.879896 kubelet[2577]: I0114 13:36:13.879078 2577 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 13:36:13.879896 kubelet[2577]: I0114 13:36:13.879362 2577 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 13:36:13.879896 kubelet[2577]: I0114 13:36:13.879395 2577 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 13:36:13.882061 kubelet[2577]: I0114 13:36:13.880491 2577 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 13:36:13.883889 kubelet[2577]: E0114 13:36:13.883784 2577 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 13:36:13.883889 kubelet[2577]: E0114 13:36:13.883853 2577 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-414dr.gb1.brightbox.com\" not found" Jan 14 13:36:13.928583 systemd[1]: Created slice kubepods-burstable-pod337947e7d6022e4195bbfe1639fcc305.slice - libcontainer container kubepods-burstable-pod337947e7d6022e4195bbfe1639fcc305.slice. Jan 14 13:36:13.941096 kubelet[2577]: E0114 13:36:13.941053 2577 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-414dr.gb1.brightbox.com\" not found" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:13.947229 systemd[1]: Created slice kubepods-burstable-pod55193cb343c8b2562b8435b678e1e435.slice - libcontainer container kubepods-burstable-pod55193cb343c8b2562b8435b678e1e435.slice. 
Jan 14 13:36:13.955863 kubelet[2577]: I0114 13:36:13.955285 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/337947e7d6022e4195bbfe1639fcc305-usr-share-ca-certificates\") pod \"kube-apiserver-srv-414dr.gb1.brightbox.com\" (UID: \"337947e7d6022e4195bbfe1639fcc305\") " pod="kube-system/kube-apiserver-srv-414dr.gb1.brightbox.com" Jan 14 13:36:13.955863 kubelet[2577]: I0114 13:36:13.955355 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55193cb343c8b2562b8435b678e1e435-ca-certs\") pod \"kube-controller-manager-srv-414dr.gb1.brightbox.com\" (UID: \"55193cb343c8b2562b8435b678e1e435\") " pod="kube-system/kube-controller-manager-srv-414dr.gb1.brightbox.com" Jan 14 13:36:13.955863 kubelet[2577]: I0114 13:36:13.955389 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55193cb343c8b2562b8435b678e1e435-flexvolume-dir\") pod \"kube-controller-manager-srv-414dr.gb1.brightbox.com\" (UID: \"55193cb343c8b2562b8435b678e1e435\") " pod="kube-system/kube-controller-manager-srv-414dr.gb1.brightbox.com" Jan 14 13:36:13.955863 kubelet[2577]: I0114 13:36:13.955413 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55193cb343c8b2562b8435b678e1e435-k8s-certs\") pod \"kube-controller-manager-srv-414dr.gb1.brightbox.com\" (UID: \"55193cb343c8b2562b8435b678e1e435\") " pod="kube-system/kube-controller-manager-srv-414dr.gb1.brightbox.com" Jan 14 13:36:13.955863 kubelet[2577]: I0114 13:36:13.955438 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/337947e7d6022e4195bbfe1639fcc305-ca-certs\") pod \"kube-apiserver-srv-414dr.gb1.brightbox.com\" (UID: \"337947e7d6022e4195bbfe1639fcc305\") " pod="kube-system/kube-apiserver-srv-414dr.gb1.brightbox.com" Jan 14 13:36:13.956250 kubelet[2577]: I0114 13:36:13.955464 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/337947e7d6022e4195bbfe1639fcc305-k8s-certs\") pod \"kube-apiserver-srv-414dr.gb1.brightbox.com\" (UID: \"337947e7d6022e4195bbfe1639fcc305\") " pod="kube-system/kube-apiserver-srv-414dr.gb1.brightbox.com" Jan 14 13:36:13.956250 kubelet[2577]: I0114 13:36:13.955508 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55193cb343c8b2562b8435b678e1e435-kubeconfig\") pod \"kube-controller-manager-srv-414dr.gb1.brightbox.com\" (UID: \"55193cb343c8b2562b8435b678e1e435\") " pod="kube-system/kube-controller-manager-srv-414dr.gb1.brightbox.com" Jan 14 13:36:13.956250 kubelet[2577]: I0114 13:36:13.955540 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55193cb343c8b2562b8435b678e1e435-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-414dr.gb1.brightbox.com\" (UID: \"55193cb343c8b2562b8435b678e1e435\") " pod="kube-system/kube-controller-manager-srv-414dr.gb1.brightbox.com" Jan 14 13:36:13.956250 kubelet[2577]: I0114 
13:36:13.955610 2577 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/947cbb413863a46c4597b86dc5b28a71-kubeconfig\") pod \"kube-scheduler-srv-414dr.gb1.brightbox.com\" (UID: \"947cbb413863a46c4597b86dc5b28a71\") " pod="kube-system/kube-scheduler-srv-414dr.gb1.brightbox.com" Jan 14 13:36:13.956250 kubelet[2577]: E0114 13:36:13.955829 2577 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.49.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-414dr.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.49.6:6443: connect: connection refused" interval="400ms" Jan 14 13:36:13.958548 kubelet[2577]: E0114 13:36:13.958505 2577 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-414dr.gb1.brightbox.com\" not found" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:13.962798 systemd[1]: Created slice kubepods-burstable-pod947cbb413863a46c4597b86dc5b28a71.slice - libcontainer container kubepods-burstable-pod947cbb413863a46c4597b86dc5b28a71.slice. Jan 14 13:36:13.965575 kubelet[2577]: E0114 13:36:13.965540 2577 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-414dr.gb1.brightbox.com\" not found" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:13.982796 kubelet[2577]: I0114 13:36:13.982740 2577 kubelet_node_status.go:75] "Attempting to register node" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:13.983285 kubelet[2577]: E0114 13:36:13.983227 2577 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.49.6:6443/api/v1/nodes\": dial tcp 10.230.49.6:6443: connect: connection refused" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:14.186671 kubelet[2577]: I0114 13:36:14.186512 2577 kubelet_node_status.go:75] "Attempting to register node" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:14.187895 kubelet[2577]: E0114 13:36:14.187082 2577 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.49.6:6443/api/v1/nodes\": dial tcp 10.230.49.6:6443: connect: connection refused" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:14.243528 containerd[1649]: time="2026-01-14T13:36:14.243473608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-414dr.gb1.brightbox.com,Uid:337947e7d6022e4195bbfe1639fcc305,Namespace:kube-system,Attempt:0,}" Jan 14 13:36:14.260447 containerd[1649]: time="2026-01-14T13:36:14.260408168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-414dr.gb1.brightbox.com,Uid:55193cb343c8b2562b8435b678e1e435,Namespace:kube-system,Attempt:0,}" Jan 14 13:36:14.267053 containerd[1649]: time="2026-01-14T13:36:14.267015456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-414dr.gb1.brightbox.com,Uid:947cbb413863a46c4597b86dc5b28a71,Namespace:kube-system,Attempt:0,}" Jan 14 13:36:14.356603 kubelet[2577]: E0114 13:36:14.356536 2577 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.49.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-414dr.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.49.6:6443: connect: connection refused" interval="800ms" Jan 14 13:36:14.416643 containerd[1649]: time="2026-01-14T13:36:14.416510493Z" level=info msg="connecting to shim 
355ab91f3d87990f9e3e764e4b69777eeb81bec53e5c8be047bd4eb90d1912f9" address="unix:///run/containerd/s/ca7e3530cf9216a391f9086f63533d854a04c10d5226ad2d48b01165e6b77827" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:36:14.418148 containerd[1649]: time="2026-01-14T13:36:14.418082399Z" level=info msg="connecting to shim 195259c2d42a109f1e4ff4e28c4a0384f5f3876874ddbb1441368d43a0f16956" address="unix:///run/containerd/s/7c63c1952a6c7e8b6bd4cca470fca899eb498e7c1581cb254d517b7ef1df181a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:36:14.430526 containerd[1649]: time="2026-01-14T13:36:14.430467674Z" level=info msg="connecting to shim 4e98bbf24110d3c66d8f82de9e3d8035e1d37c92a9d0795dd0d5fa07c1fb5271" address="unix:///run/containerd/s/3ded87be87a4158fcc1d95ca863bab853b91fafc5bb94273d99951b5308a79a0" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:36:14.542849 systemd[1]: Started cri-containerd-355ab91f3d87990f9e3e764e4b69777eeb81bec53e5c8be047bd4eb90d1912f9.scope - libcontainer container 355ab91f3d87990f9e3e764e4b69777eeb81bec53e5c8be047bd4eb90d1912f9. Jan 14 13:36:14.545966 systemd[1]: Started cri-containerd-4e98bbf24110d3c66d8f82de9e3d8035e1d37c92a9d0795dd0d5fa07c1fb5271.scope - libcontainer container 4e98bbf24110d3c66d8f82de9e3d8035e1d37c92a9d0795dd0d5fa07c1fb5271. Jan 14 13:36:14.553998 systemd[1]: Started cri-containerd-195259c2d42a109f1e4ff4e28c4a0384f5f3876874ddbb1441368d43a0f16956.scope - libcontainer container 195259c2d42a109f1e4ff4e28c4a0384f5f3876874ddbb1441368d43a0f16956. Jan 14 13:36:14.578000 audit: BPF prog-id=86 op=LOAD Jan 14 13:36:14.579000 audit: BPF prog-id=87 op=LOAD Jan 14 13:36:14.579000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2641 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335356162393166336438373939306639653365373634653462363937 Jan 14 13:36:14.580000 audit: BPF prog-id=87 op=UNLOAD Jan 14 13:36:14.580000 audit[2674]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2641 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335356162393166336438373939306639653365373634653462363937 Jan 14 13:36:14.580000 audit: BPF prog-id=88 op=LOAD Jan 14 13:36:14.580000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2641 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.580000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335356162393166336438373939306639653365373634653462363937 Jan 14 13:36:14.581000 audit: BPF prog-id=89 op=LOAD Jan 14 13:36:14.582000 audit: BPF prog-id=90 op=LOAD Jan 14 13:36:14.582000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2641 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.582000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335356162393166336438373939306639653365373634653462363937 Jan 14 13:36:14.582000 audit: BPF prog-id=90 op=UNLOAD Jan 14 13:36:14.582000 audit[2674]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2641 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.582000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335356162393166336438373939306639653365373634653462363937 Jan 14 13:36:14.582000 audit: BPF prog-id=88 op=UNLOAD Jan 14 13:36:14.582000 audit[2674]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2641 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.582000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335356162393166336438373939306639653365373634653462363937 Jan 14 13:36:14.582000 audit: BPF prog-id=91 op=LOAD Jan 14 13:36:14.582000 audit[2674]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2641 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.582000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335356162393166336438373939306639653365373634653462363937 Jan 14 13:36:14.588000 audit: BPF prog-id=92 op=LOAD Jan 14 13:36:14.588000 audit[2671]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2636 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.588000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139353235396332643432613130396631653466663465323863346130 Jan 14 13:36:14.589000 audit: BPF prog-id=92 op=UNLOAD Jan 14 13:36:14.589000 audit[2671]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2636 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.589000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139353235396332643432613130396631653466663465323863346130 Jan 14 13:36:14.589000 audit: BPF prog-id=93 op=LOAD Jan 14 13:36:14.589000 audit: BPF prog-id=94 op=LOAD Jan 14 13:36:14.589000 audit[2671]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2636 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.592000 audit: BPF prog-id=95 op=LOAD Jan 14 13:36:14.592000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2639 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465393862626632343131306433633636643866383264653965336438 Jan 14 13:36:14.592000 audit: BPF prog-id=95 op=UNLOAD Jan 14 13:36:14.592000 audit[2672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2639 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465393862626632343131306433633636643866383264653965336438 Jan 14 13:36:14.593000 audit: BPF prog-id=96 op=LOAD Jan 14 13:36:14.593000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2639 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465393862626632343131306433633636643866383264653965336438 Jan 14 13:36:14.593000 audit: BPF prog-id=97 op=LOAD Jan 14 13:36:14.593000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 
a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2639 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465393862626632343131306433633636643866383264653965336438 Jan 14 13:36:14.594000 audit: BPF prog-id=97 op=UNLOAD Jan 14 13:36:14.594000 audit[2672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2639 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465393862626632343131306433633636643866383264653965336438 Jan 14 13:36:14.594000 audit: BPF prog-id=96 op=UNLOAD Jan 14 13:36:14.594000 audit[2672]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2639 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465393862626632343131306433633636643866383264653965336438 Jan 14 13:36:14.594000 audit: BPF prog-id=98 op=LOAD Jan 14 13:36:14.594000 audit[2672]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2639 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465393862626632343131306433633636643866383264653965336438 Jan 14 13:36:14.589000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139353235396332643432613130396631653466663465323863346130 Jan 14 13:36:14.599000 audit: BPF prog-id=99 op=LOAD Jan 14 13:36:14.599000 audit[2671]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2636 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.599000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139353235396332643432613130396631653466663465323863346130 Jan 14 13:36:14.599000 audit: BPF prog-id=99 op=UNLOAD Jan 14 13:36:14.599000 audit[2671]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2636 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139353235396332643432613130396631653466663465323863346130 Jan 14 13:36:14.599000 audit: BPF prog-id=94 op=UNLOAD Jan 14 13:36:14.599000 audit[2671]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2636 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139353235396332643432613130396631653466663465323863346130 Jan 14 13:36:14.599000 audit: BPF prog-id=100 op=LOAD Jan 14 13:36:14.599000 audit[2671]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2636 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139353235396332643432613130396631653466663465323863346130 Jan 14 13:36:14.606652 kubelet[2577]: I0114 13:36:14.595804 2577 kubelet_node_status.go:75] "Attempting to register node" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:14.606652 kubelet[2577]: E0114 13:36:14.597069 2577 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.49.6:6443/api/v1/nodes\": dial tcp 10.230.49.6:6443: connect: connection refused" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:14.653303 containerd[1649]: time="2026-01-14T13:36:14.653247350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-414dr.gb1.brightbox.com,Uid:947cbb413863a46c4597b86dc5b28a71,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e98bbf24110d3c66d8f82de9e3d8035e1d37c92a9d0795dd0d5fa07c1fb5271\"" Jan 14 13:36:14.660373 containerd[1649]: time="2026-01-14T13:36:14.660328111Z" level=info msg="CreateContainer within sandbox \"4e98bbf24110d3c66d8f82de9e3d8035e1d37c92a9d0795dd0d5fa07c1fb5271\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 13:36:14.673100 containerd[1649]: time="2026-01-14T13:36:14.672999681Z" level=info msg="Container afdddc53157639850c5c56946618dfa88c3c824a0320f78c86ca4e1dd8880669: CDI devices from CRI Config.CDIDevices: []" Jan 
14 13:36:14.684090 kubelet[2577]: W0114 13:36:14.683769 2577 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.49.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-414dr.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.49.6:6443: connect: connection refused Jan 14 13:36:14.684090 kubelet[2577]: E0114 13:36:14.683859 2577 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.49.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-414dr.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.49.6:6443: connect: connection refused" logger="UnhandledError" Jan 14 13:36:14.687473 containerd[1649]: time="2026-01-14T13:36:14.687384674Z" level=info msg="CreateContainer within sandbox \"4e98bbf24110d3c66d8f82de9e3d8035e1d37c92a9d0795dd0d5fa07c1fb5271\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"afdddc53157639850c5c56946618dfa88c3c824a0320f78c86ca4e1dd8880669\"" Jan 14 13:36:14.688806 containerd[1649]: time="2026-01-14T13:36:14.688763152Z" level=info msg="StartContainer for \"afdddc53157639850c5c56946618dfa88c3c824a0320f78c86ca4e1dd8880669\"" Jan 14 13:36:14.692039 containerd[1649]: time="2026-01-14T13:36:14.691147898Z" level=info msg="connecting to shim afdddc53157639850c5c56946618dfa88c3c824a0320f78c86ca4e1dd8880669" address="unix:///run/containerd/s/3ded87be87a4158fcc1d95ca863bab853b91fafc5bb94273d99951b5308a79a0" protocol=ttrpc version=3 Jan 14 13:36:14.720757 containerd[1649]: time="2026-01-14T13:36:14.720523617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-414dr.gb1.brightbox.com,Uid:337947e7d6022e4195bbfe1639fcc305,Namespace:kube-system,Attempt:0,} returns sandbox id \"195259c2d42a109f1e4ff4e28c4a0384f5f3876874ddbb1441368d43a0f16956\"" Jan 14 13:36:14.722645 containerd[1649]: time="2026-01-14T13:36:14.722475775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-414dr.gb1.brightbox.com,Uid:55193cb343c8b2562b8435b678e1e435,Namespace:kube-system,Attempt:0,} returns sandbox id \"355ab91f3d87990f9e3e764e4b69777eeb81bec53e5c8be047bd4eb90d1912f9\"" Jan 14 13:36:14.732853 containerd[1649]: time="2026-01-14T13:36:14.732812645Z" level=info msg="CreateContainer within sandbox \"355ab91f3d87990f9e3e764e4b69777eeb81bec53e5c8be047bd4eb90d1912f9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 13:36:14.734586 containerd[1649]: time="2026-01-14T13:36:14.734051184Z" level=info msg="CreateContainer within sandbox \"195259c2d42a109f1e4ff4e28c4a0384f5f3876874ddbb1441368d43a0f16956\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 13:36:14.734888 systemd[1]: Started cri-containerd-afdddc53157639850c5c56946618dfa88c3c824a0320f78c86ca4e1dd8880669.scope - libcontainer container afdddc53157639850c5c56946618dfa88c3c824a0320f78c86ca4e1dd8880669. 
Jan 14 13:36:14.746196 containerd[1649]: time="2026-01-14T13:36:14.746098816Z" level=info msg="Container f3437ff3ff539597cd1d9aea5b61f4c06a02ea484cbaaeb84fc8fde499d36f7f: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:36:14.756239 containerd[1649]: time="2026-01-14T13:36:14.756055928Z" level=info msg="Container 071f5c7f6b793b4f01685e2aced9422acc7a3cac51944362c1411c3bffe018c7: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:36:14.761655 containerd[1649]: time="2026-01-14T13:36:14.761614704Z" level=info msg="CreateContainer within sandbox \"355ab91f3d87990f9e3e764e4b69777eeb81bec53e5c8be047bd4eb90d1912f9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f3437ff3ff539597cd1d9aea5b61f4c06a02ea484cbaaeb84fc8fde499d36f7f\"" Jan 14 13:36:14.765721 containerd[1649]: time="2026-01-14T13:36:14.763511496Z" level=info msg="StartContainer for \"f3437ff3ff539597cd1d9aea5b61f4c06a02ea484cbaaeb84fc8fde499d36f7f\"" Jan 14 13:36:14.766105 containerd[1649]: time="2026-01-14T13:36:14.766068295Z" level=info msg="CreateContainer within sandbox \"195259c2d42a109f1e4ff4e28c4a0384f5f3876874ddbb1441368d43a0f16956\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"071f5c7f6b793b4f01685e2aced9422acc7a3cac51944362c1411c3bffe018c7\"" Jan 14 13:36:14.767034 containerd[1649]: time="2026-01-14T13:36:14.766996223Z" level=info msg="StartContainer for \"071f5c7f6b793b4f01685e2aced9422acc7a3cac51944362c1411c3bffe018c7\"" Jan 14 13:36:14.769321 containerd[1649]: time="2026-01-14T13:36:14.768701993Z" level=info msg="connecting to shim 071f5c7f6b793b4f01685e2aced9422acc7a3cac51944362c1411c3bffe018c7" address="unix:///run/containerd/s/7c63c1952a6c7e8b6bd4cca470fca899eb498e7c1581cb254d517b7ef1df181a" protocol=ttrpc version=3 Jan 14 13:36:14.769980 containerd[1649]: time="2026-01-14T13:36:14.769899489Z" level=info msg="connecting to shim f3437ff3ff539597cd1d9aea5b61f4c06a02ea484cbaaeb84fc8fde499d36f7f" address="unix:///run/containerd/s/ca7e3530cf9216a391f9086f63533d854a04c10d5226ad2d48b01165e6b77827" protocol=ttrpc version=3 Jan 14 13:36:14.777000 audit: BPF prog-id=101 op=LOAD Jan 14 13:36:14.779760 kernel: kauditd_printk_skb: 140 callbacks suppressed Jan 14 13:36:14.779896 kernel: audit: type=1334 audit(1768397774.777:374): prog-id=101 op=LOAD Jan 14 13:36:14.780000 audit: BPF prog-id=102 op=LOAD Jan 14 13:36:14.784616 kernel: audit: type=1334 audit(1768397774.780:375): prog-id=102 op=LOAD Jan 14 13:36:14.784707 kernel: audit: type=1300 audit(1768397774.780:375): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2639 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.780000 audit[2751]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2639 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.786075 kubelet[2577]: W0114 13:36:14.785901 2577 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.49.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.49.6:6443: connect: connection refused Jan 14 13:36:14.786075 kubelet[2577]: E0114 13:36:14.786041 2577 reflector.go:166] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.49.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.49.6:6443: connect: connection refused" logger="UnhandledError" Jan 14 13:36:14.780000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166646464633533313537363339383530633563353639343636313864 Jan 14 13:36:14.780000 audit: BPF prog-id=102 op=UNLOAD Jan 14 13:36:14.799010 kernel: audit: type=1327 audit(1768397774.780:375): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166646464633533313537363339383530633563353639343636313864 Jan 14 13:36:14.799084 kernel: audit: type=1334 audit(1768397774.780:376): prog-id=102 op=UNLOAD Jan 14 13:36:14.780000 audit[2751]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2639 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.803463 kernel: audit: type=1300 audit(1768397774.780:376): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2639 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.780000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166646464633533313537363339383530633563353639343636313864 Jan 14 13:36:14.807298 kernel: audit: type=1327 audit(1768397774.780:376): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166646464633533313537363339383530633563353639343636313864 Jan 14 13:36:14.812626 kernel: audit: type=1334 audit(1768397774.781:377): prog-id=103 op=LOAD Jan 14 13:36:14.812701 kernel: audit: type=1300 audit(1768397774.781:377): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2639 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.781000 audit: BPF prog-id=103 op=LOAD Jan 14 13:36:14.781000 audit[2751]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2639 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.781000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166646464633533313537363339383530633563353639343636313864 Jan 14 13:36:14.821866 
kubelet[2577]: W0114 13:36:14.820165 2577 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.49.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.49.6:6443: connect: connection refused Jan 14 13:36:14.822062 kubelet[2577]: E0114 13:36:14.822033 2577 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.49.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.49.6:6443: connect: connection refused" logger="UnhandledError" Jan 14 13:36:14.823602 kernel: audit: type=1327 audit(1768397774.781:377): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166646464633533313537363339383530633563353639343636313864 Jan 14 13:36:14.781000 audit: BPF prog-id=104 op=LOAD Jan 14 13:36:14.781000 audit[2751]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2639 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.781000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166646464633533313537363339383530633563353639343636313864 Jan 14 13:36:14.782000 audit: BPF prog-id=104 op=UNLOAD Jan 14 13:36:14.782000 audit[2751]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2639 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.782000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166646464633533313537363339383530633563353639343636313864 Jan 14 13:36:14.782000 audit: BPF prog-id=103 op=UNLOAD Jan 14 13:36:14.782000 audit[2751]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2639 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.782000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166646464633533313537363339383530633563353639343636313864 Jan 14 13:36:14.782000 audit: BPF prog-id=105 op=LOAD Jan 14 13:36:14.782000 audit[2751]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2639 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.782000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166646464633533313537363339383530633563353639343636313864 Jan 14 13:36:14.842979 systemd[1]: Started cri-containerd-f3437ff3ff539597cd1d9aea5b61f4c06a02ea484cbaaeb84fc8fde499d36f7f.scope - libcontainer container f3437ff3ff539597cd1d9aea5b61f4c06a02ea484cbaaeb84fc8fde499d36f7f. Jan 14 13:36:14.855764 systemd[1]: Started cri-containerd-071f5c7f6b793b4f01685e2aced9422acc7a3cac51944362c1411c3bffe018c7.scope - libcontainer container 071f5c7f6b793b4f01685e2aced9422acc7a3cac51944362c1411c3bffe018c7. Jan 14 13:36:14.885000 audit: BPF prog-id=106 op=LOAD Jan 14 13:36:14.888000 audit: BPF prog-id=107 op=LOAD Jan 14 13:36:14.888000 audit[2776]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2636 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.888000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037316635633766366237393362346630313638356532616365643934 Jan 14 13:36:14.889000 audit: BPF prog-id=107 op=UNLOAD Jan 14 13:36:14.889000 audit[2776]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2636 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037316635633766366237393362346630313638356532616365643934 Jan 14 13:36:14.889000 audit: BPF prog-id=108 op=LOAD Jan 14 13:36:14.889000 audit: BPF prog-id=109 op=LOAD Jan 14 13:36:14.889000 audit[2776]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2636 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037316635633766366237393362346630313638356532616365643934 Jan 14 13:36:14.890000 audit: BPF prog-id=110 op=LOAD Jan 14 13:36:14.890000 audit[2776]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2636 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037316635633766366237393362346630313638356532616365643934 Jan 14 
13:36:14.891000 audit: BPF prog-id=110 op=UNLOAD Jan 14 13:36:14.891000 audit[2776]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2636 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.891000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037316635633766366237393362346630313638356532616365643934 Jan 14 13:36:14.892000 audit: BPF prog-id=109 op=UNLOAD Jan 14 13:36:14.892000 audit[2776]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2636 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037316635633766366237393362346630313638356532616365643934 Jan 14 13:36:14.892000 audit: BPF prog-id=111 op=LOAD Jan 14 13:36:14.892000 audit[2776]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2636 pid=2776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037316635633766366237393362346630313638356532616365643934 Jan 14 13:36:14.893000 audit: BPF prog-id=112 op=LOAD Jan 14 13:36:14.893000 audit[2777]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2641 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633343337666633666635333935393763643164396165613562363166 Jan 14 13:36:14.893000 audit: BPF prog-id=112 op=UNLOAD Jan 14 13:36:14.893000 audit[2777]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2641 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633343337666633666635333935393763643164396165613562363166 Jan 14 13:36:14.893000 audit: BPF prog-id=113 op=LOAD Jan 14 13:36:14.893000 audit[2777]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 
a3=0 items=0 ppid=2641 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633343337666633666635333935393763643164396165613562363166 Jan 14 13:36:14.893000 audit: BPF prog-id=114 op=LOAD Jan 14 13:36:14.893000 audit[2777]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2641 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633343337666633666635333935393763643164396165613562363166 Jan 14 13:36:14.893000 audit: BPF prog-id=114 op=UNLOAD Jan 14 13:36:14.893000 audit[2777]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2641 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633343337666633666635333935393763643164396165613562363166 Jan 14 13:36:14.893000 audit: BPF prog-id=113 op=UNLOAD Jan 14 13:36:14.893000 audit[2777]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2641 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633343337666633666635333935393763643164396165613562363166 Jan 14 13:36:14.893000 audit: BPF prog-id=115 op=LOAD Jan 14 13:36:14.893000 audit[2777]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2641 pid=2777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:14.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633343337666633666635333935393763643164396165613562363166 Jan 14 13:36:14.921947 containerd[1649]: time="2026-01-14T13:36:14.921856402Z" level=info msg="StartContainer for \"afdddc53157639850c5c56946618dfa88c3c824a0320f78c86ca4e1dd8880669\" returns successfully" Jan 14 13:36:14.983243 containerd[1649]: time="2026-01-14T13:36:14.982625575Z" level=info msg="StartContainer for 
\"f3437ff3ff539597cd1d9aea5b61f4c06a02ea484cbaaeb84fc8fde499d36f7f\" returns successfully" Jan 14 13:36:14.992482 containerd[1649]: time="2026-01-14T13:36:14.992432080Z" level=info msg="StartContainer for \"071f5c7f6b793b4f01685e2aced9422acc7a3cac51944362c1411c3bffe018c7\" returns successfully" Jan 14 13:36:15.155283 kubelet[2577]: W0114 13:36:15.155172 2577 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.49.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.49.6:6443: connect: connection refused Jan 14 13:36:15.155283 kubelet[2577]: E0114 13:36:15.155234 2577 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.49.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.49.6:6443: connect: connection refused" logger="UnhandledError" Jan 14 13:36:15.157739 kubelet[2577]: E0114 13:36:15.157693 2577 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.49.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-414dr.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.49.6:6443: connect: connection refused" interval="1.6s" Jan 14 13:36:15.401322 kubelet[2577]: I0114 13:36:15.401278 2577 kubelet_node_status.go:75] "Attempting to register node" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:15.402805 kubelet[2577]: E0114 13:36:15.402770 2577 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.49.6:6443/api/v1/nodes\": dial tcp 10.230.49.6:6443: connect: connection refused" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:15.867626 kubelet[2577]: E0114 13:36:15.866060 2577 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-414dr.gb1.brightbox.com\" not found" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:15.873789 kubelet[2577]: E0114 13:36:15.871452 2577 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-414dr.gb1.brightbox.com\" not found" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:15.879604 kubelet[2577]: E0114 13:36:15.879101 2577 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-414dr.gb1.brightbox.com\" not found" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:16.884943 kubelet[2577]: E0114 13:36:16.884681 2577 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-414dr.gb1.brightbox.com\" not found" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:16.885785 kubelet[2577]: E0114 13:36:16.885764 2577 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-414dr.gb1.brightbox.com\" not found" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:16.886679 kubelet[2577]: E0114 13:36:16.886171 2577 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-414dr.gb1.brightbox.com\" not found" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:17.007668 kubelet[2577]: I0114 13:36:17.007632 2577 kubelet_node_status.go:75] "Attempting to register node" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:17.503952 kubelet[2577]: E0114 13:36:17.503895 2577 nodelease.go:49] "Failed to get node when trying to 
set owner ref to the node lease" err="nodes \"srv-414dr.gb1.brightbox.com\" not found" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:17.599389 kubelet[2577]: I0114 13:36:17.597473 2577 kubelet_node_status.go:78] "Successfully registered node" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:17.599869 kubelet[2577]: E0114 13:36:17.597554 2577 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-414dr.gb1.brightbox.com\": node \"srv-414dr.gb1.brightbox.com\" not found" Jan 14 13:36:17.638931 kubelet[2577]: E0114 13:36:17.638864 2577 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-414dr.gb1.brightbox.com\" not found" Jan 14 13:36:17.739602 kubelet[2577]: E0114 13:36:17.739087 2577 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-414dr.gb1.brightbox.com\" not found" Jan 14 13:36:17.840550 kubelet[2577]: E0114 13:36:17.840370 2577 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-414dr.gb1.brightbox.com\" not found" Jan 14 13:36:17.882718 kubelet[2577]: I0114 13:36:17.882678 2577 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-414dr.gb1.brightbox.com" Jan 14 13:36:17.883118 kubelet[2577]: I0114 13:36:17.883063 2577 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-414dr.gb1.brightbox.com" Jan 14 13:36:17.890736 kubelet[2577]: E0114 13:36:17.890660 2577 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-414dr.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-414dr.gb1.brightbox.com" Jan 14 13:36:17.891650 kubelet[2577]: E0114 13:36:17.890819 2577 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-414dr.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-414dr.gb1.brightbox.com" Jan 14 13:36:17.956175 kubelet[2577]: I0114 13:36:17.955634 2577 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-414dr.gb1.brightbox.com" Jan 14 13:36:17.958435 kubelet[2577]: E0114 13:36:17.958390 2577 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-414dr.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-414dr.gb1.brightbox.com" Jan 14 13:36:17.958435 kubelet[2577]: I0114 13:36:17.958434 2577 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-414dr.gb1.brightbox.com" Jan 14 13:36:17.961015 kubelet[2577]: E0114 13:36:17.960780 2577 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-414dr.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-414dr.gb1.brightbox.com" Jan 14 13:36:17.961015 kubelet[2577]: I0114 13:36:17.960809 2577 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-414dr.gb1.brightbox.com" Jan 14 13:36:17.962450 kubelet[2577]: E0114 13:36:17.962421 2577 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-414dr.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-414dr.gb1.brightbox.com" Jan 
14 13:36:18.718590 kubelet[2577]: I0114 13:36:18.718195 2577 apiserver.go:52] "Watching apiserver" Jan 14 13:36:18.754403 kubelet[2577]: I0114 13:36:18.754356 2577 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 13:36:19.236098 kubelet[2577]: I0114 13:36:19.236065 2577 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-414dr.gb1.brightbox.com" Jan 14 13:36:19.245136 kubelet[2577]: W0114 13:36:19.244930 2577 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:36:19.544548 systemd[1]: Reload requested from client PID 2854 ('systemctl') (unit session-12.scope)... Jan 14 13:36:19.544606 systemd[1]: Reloading... Jan 14 13:36:19.662594 zram_generator::config[2898]: No configuration found. Jan 14 13:36:20.036904 systemd[1]: Reloading finished in 491 ms. Jan 14 13:36:20.071799 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 13:36:20.088142 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 13:36:20.088737 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:36:20.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:36:20.092428 kernel: kauditd_printk_skb: 56 callbacks suppressed Jan 14 13:36:20.092494 kernel: audit: type=1131 audit(1768397780.087:398): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:36:20.097132 systemd[1]: kubelet.service: Consumed 1.196s CPU time, 128.2M memory peak. Jan 14 13:36:20.101673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
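
The audit records throughout this log carry the invoking command line as a PROCTITLE field: a hex-encoded, NUL-separated argv (here the runc invocations made by containerd, and further down the iptables/ip6tables calls that register the KUBE-PROXY-CANARY chains). A minimal Python sketch for turning one of these payloads back into a readable command line; the sample hex is a truncated prefix of the runc proctitle values recorded above:

    # Decode an audit PROCTITLE payload (hex-encoded, NUL-separated argv)
    # into a readable command line. Sample is a truncated prefix of the
    # runc proctitle records above.
    hex_proctitle = (
        "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
    )
    argv = bytes.fromhex(hex_proctitle).split(b"\x00")
    print(" ".join(a.decode() for a in argv))
    # -> runc --root /run/containerd/runc/k8s.io
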
Jan 14 13:36:20.107360 kernel: audit: type=1334 audit(1768397780.102:399): prog-id=116 op=LOAD Jan 14 13:36:20.107449 kernel: audit: type=1334 audit(1768397780.102:400): prog-id=73 op=UNLOAD Jan 14 13:36:20.102000 audit: BPF prog-id=116 op=LOAD Jan 14 13:36:20.102000 audit: BPF prog-id=73 op=UNLOAD Jan 14 13:36:20.102000 audit: BPF prog-id=117 op=LOAD Jan 14 13:36:20.111613 kernel: audit: type=1334 audit(1768397780.102:401): prog-id=117 op=LOAD Jan 14 13:36:20.102000 audit: BPF prog-id=72 op=UNLOAD Jan 14 13:36:20.113589 kernel: audit: type=1334 audit(1768397780.102:402): prog-id=72 op=UNLOAD Jan 14 13:36:20.104000 audit: BPF prog-id=118 op=LOAD Jan 14 13:36:20.104000 audit: BPF prog-id=66 op=UNLOAD Jan 14 13:36:20.116978 kernel: audit: type=1334 audit(1768397780.104:403): prog-id=118 op=LOAD Jan 14 13:36:20.117047 kernel: audit: type=1334 audit(1768397780.104:404): prog-id=66 op=UNLOAD Jan 14 13:36:20.105000 audit: BPF prog-id=119 op=LOAD Jan 14 13:36:20.119222 kernel: audit: type=1334 audit(1768397780.105:405): prog-id=119 op=LOAD Jan 14 13:36:20.119292 kernel: audit: type=1334 audit(1768397780.105:406): prog-id=120 op=LOAD Jan 14 13:36:20.105000 audit: BPF prog-id=120 op=LOAD Jan 14 13:36:20.105000 audit: BPF prog-id=67 op=UNLOAD Jan 14 13:36:20.120991 kernel: audit: type=1334 audit(1768397780.105:407): prog-id=67 op=UNLOAD Jan 14 13:36:20.105000 audit: BPF prog-id=68 op=UNLOAD Jan 14 13:36:20.106000 audit: BPF prog-id=121 op=LOAD Jan 14 13:36:20.106000 audit: BPF prog-id=83 op=UNLOAD Jan 14 13:36:20.107000 audit: BPF prog-id=122 op=LOAD Jan 14 13:36:20.107000 audit: BPF prog-id=123 op=LOAD Jan 14 13:36:20.107000 audit: BPF prog-id=84 op=UNLOAD Jan 14 13:36:20.107000 audit: BPF prog-id=85 op=UNLOAD Jan 14 13:36:20.108000 audit: BPF prog-id=124 op=LOAD Jan 14 13:36:20.108000 audit: BPF prog-id=125 op=LOAD Jan 14 13:36:20.108000 audit: BPF prog-id=81 op=UNLOAD Jan 14 13:36:20.108000 audit: BPF prog-id=82 op=UNLOAD Jan 14 13:36:20.109000 audit: BPF prog-id=126 op=LOAD Jan 14 13:36:20.109000 audit: BPF prog-id=74 op=UNLOAD Jan 14 13:36:20.109000 audit: BPF prog-id=127 op=LOAD Jan 14 13:36:20.109000 audit: BPF prog-id=128 op=LOAD Jan 14 13:36:20.109000 audit: BPF prog-id=75 op=UNLOAD Jan 14 13:36:20.109000 audit: BPF prog-id=76 op=UNLOAD Jan 14 13:36:20.111000 audit: BPF prog-id=129 op=LOAD Jan 14 13:36:20.111000 audit: BPF prog-id=80 op=UNLOAD Jan 14 13:36:20.112000 audit: BPF prog-id=130 op=LOAD Jan 14 13:36:20.112000 audit: BPF prog-id=77 op=UNLOAD Jan 14 13:36:20.112000 audit: BPF prog-id=131 op=LOAD Jan 14 13:36:20.112000 audit: BPF prog-id=132 op=LOAD Jan 14 13:36:20.112000 audit: BPF prog-id=78 op=UNLOAD Jan 14 13:36:20.112000 audit: BPF prog-id=79 op=UNLOAD Jan 14 13:36:20.114000 audit: BPF prog-id=133 op=LOAD Jan 14 13:36:20.114000 audit: BPF prog-id=69 op=UNLOAD Jan 14 13:36:20.114000 audit: BPF prog-id=134 op=LOAD Jan 14 13:36:20.120000 audit: BPF prog-id=135 op=LOAD Jan 14 13:36:20.120000 audit: BPF prog-id=70 op=UNLOAD Jan 14 13:36:20.120000 audit: BPF prog-id=71 op=UNLOAD Jan 14 13:36:20.121000 audit: BPF prog-id=136 op=LOAD Jan 14 13:36:20.121000 audit: BPF prog-id=65 op=UNLOAD Jan 14 13:36:20.404278 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 13:36:20.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:36:20.415983 (kubelet)[2966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 13:36:20.508684 kubelet[2966]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:36:20.509740 kubelet[2966]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 13:36:20.509740 kubelet[2966]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 13:36:20.509740 kubelet[2966]: I0114 13:36:20.509325 2966 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 13:36:20.519427 kubelet[2966]: I0114 13:36:20.519390 2966 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 13:36:20.519610 kubelet[2966]: I0114 13:36:20.519590 2966 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 13:36:20.520057 kubelet[2966]: I0114 13:36:20.520035 2966 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 13:36:20.528693 kubelet[2966]: I0114 13:36:20.528660 2966 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 14 13:36:20.533620 kubelet[2966]: I0114 13:36:20.533573 2966 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 13:36:20.545309 kubelet[2966]: I0114 13:36:20.545031 2966 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 13:36:20.551746 kubelet[2966]: I0114 13:36:20.551715 2966 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 13:36:20.552583 kubelet[2966]: I0114 13:36:20.552120 2966 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 13:36:20.552583 kubelet[2966]: I0114 13:36:20.552158 2966 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-414dr.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 13:36:20.552836 kubelet[2966]: I0114 13:36:20.552614 2966 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 13:36:20.552836 kubelet[2966]: I0114 13:36:20.552690 2966 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 13:36:20.554080 kubelet[2966]: I0114 13:36:20.552844 2966 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:36:20.554080 kubelet[2966]: I0114 13:36:20.553211 2966 kubelet.go:446] "Attempting to sync node with API server" Jan 14 13:36:20.554080 kubelet[2966]: I0114 13:36:20.553254 2966 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 13:36:20.554080 kubelet[2966]: I0114 13:36:20.553299 2966 kubelet.go:352] "Adding apiserver pod source" Jan 14 13:36:20.554080 kubelet[2966]: I0114 13:36:20.553350 2966 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 13:36:20.562416 kubelet[2966]: I0114 13:36:20.562277 2966 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 13:36:20.563355 kubelet[2966]: I0114 13:36:20.563096 2966 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 13:36:20.568258 kubelet[2966]: I0114 13:36:20.567941 2966 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 13:36:20.568258 kubelet[2966]: I0114 13:36:20.568005 2966 server.go:1287] "Started kubelet" Jan 14 13:36:20.583123 kubelet[2966]: I0114 13:36:20.582309 2966 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 13:36:20.584664 kubelet[2966]: I0114 13:36:20.583554 2966 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 13:36:20.584664 kubelet[2966]: I0114 13:36:20.584218 2966 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 13:36:20.584664 kubelet[2966]: I0114 13:36:20.584395 2966 server.go:479] "Adding debug handlers to kubelet server" Jan 14 13:36:20.590123 kubelet[2966]: I0114 13:36:20.590056 2966 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 13:36:20.602440 kubelet[2966]: I0114 13:36:20.601994 2966 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 13:36:20.607189 kubelet[2966]: I0114 13:36:20.607159 2966 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 13:36:20.611839 kubelet[2966]: I0114 13:36:20.611795 2966 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 13:36:20.612100 kubelet[2966]: I0114 13:36:20.612081 2966 reconciler.go:26] "Reconciler: start to sync state" Jan 14 13:36:20.618073 kubelet[2966]: E0114 13:36:20.618041 2966 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 13:36:20.622779 kubelet[2966]: I0114 13:36:20.622721 2966 factory.go:221] Registration of the systemd container factory successfully Jan 14 13:36:20.628153 kubelet[2966]: I0114 13:36:20.627681 2966 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 13:36:20.630050 kubelet[2966]: I0114 13:36:20.625276 2966 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 13:36:20.637612 kubelet[2966]: I0114 13:36:20.636730 2966 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 13:36:20.637612 kubelet[2966]: I0114 13:36:20.636806 2966 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 13:36:20.637612 kubelet[2966]: I0114 13:36:20.637536 2966 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 14 13:36:20.637841 kubelet[2966]: I0114 13:36:20.637702 2966 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 13:36:20.638732 kubelet[2966]: E0114 13:36:20.638694 2966 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 13:36:20.644824 kubelet[2966]: I0114 13:36:20.644800 2966 factory.go:221] Registration of the containerd container factory successfully Jan 14 13:36:20.730481 kubelet[2966]: I0114 13:36:20.729667 2966 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 13:36:20.730790 kubelet[2966]: I0114 13:36:20.730722 2966 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 13:36:20.731655 kubelet[2966]: I0114 13:36:20.731610 2966 state_mem.go:36] "Initialized new in-memory state store" Jan 14 13:36:20.732309 kubelet[2966]: I0114 13:36:20.732254 2966 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 13:36:20.732415 kubelet[2966]: I0114 13:36:20.732281 2966 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 13:36:20.732623 kubelet[2966]: I0114 13:36:20.732497 2966 policy_none.go:49] "None policy: Start" Jan 14 13:36:20.732802 kubelet[2966]: I0114 13:36:20.732534 2966 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 13:36:20.732802 kubelet[2966]: I0114 13:36:20.732771 2966 state_mem.go:35] "Initializing new in-memory state store" Jan 14 13:36:20.733277 kubelet[2966]: I0114 13:36:20.733205 2966 state_mem.go:75] "Updated machine memory state" Jan 14 13:36:20.739663 kubelet[2966]: E0114 13:36:20.739629 2966 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 13:36:20.743119 kubelet[2966]: I0114 13:36:20.743094 2966 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 13:36:20.746456 kubelet[2966]: I0114 13:36:20.746434 2966 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 13:36:20.746981 kubelet[2966]: I0114 13:36:20.746909 2966 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 13:36:20.749732 kubelet[2966]: I0114 13:36:20.749072 2966 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 13:36:20.755405 kubelet[2966]: E0114 13:36:20.755374 2966 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 13:36:20.873240 kubelet[2966]: I0114 13:36:20.873155 2966 kubelet_node_status.go:75] "Attempting to register node" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:20.883598 kubelet[2966]: I0114 13:36:20.883235 2966 kubelet_node_status.go:124] "Node was previously registered" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:20.883882 kubelet[2966]: I0114 13:36:20.883864 2966 kubelet_node_status.go:78] "Successfully registered node" node="srv-414dr.gb1.brightbox.com" Jan 14 13:36:20.945595 kubelet[2966]: I0114 13:36:20.944704 2966 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-414dr.gb1.brightbox.com" Jan 14 13:36:20.947606 kubelet[2966]: I0114 13:36:20.947193 2966 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-414dr.gb1.brightbox.com" Jan 14 13:36:20.949912 kubelet[2966]: I0114 13:36:20.948016 2966 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-414dr.gb1.brightbox.com" Jan 14 13:36:20.976442 kubelet[2966]: W0114 13:36:20.976401 2966 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:36:20.981676 kubelet[2966]: W0114 13:36:20.981515 2966 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:36:20.985966 kubelet[2966]: W0114 13:36:20.985834 2966 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:36:20.985966 kubelet[2966]: E0114 13:36:20.985898 2966 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-414dr.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-414dr.gb1.brightbox.com" Jan 14 13:36:21.014345 kubelet[2966]: I0114 13:36:21.014253 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/337947e7d6022e4195bbfe1639fcc305-ca-certs\") pod \"kube-apiserver-srv-414dr.gb1.brightbox.com\" (UID: \"337947e7d6022e4195bbfe1639fcc305\") " pod="kube-system/kube-apiserver-srv-414dr.gb1.brightbox.com" Jan 14 13:36:21.114877 kubelet[2966]: I0114 13:36:21.114807 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55193cb343c8b2562b8435b678e1e435-ca-certs\") pod \"kube-controller-manager-srv-414dr.gb1.brightbox.com\" (UID: \"55193cb343c8b2562b8435b678e1e435\") " pod="kube-system/kube-controller-manager-srv-414dr.gb1.brightbox.com" Jan 14 13:36:21.114877 kubelet[2966]: I0114 13:36:21.114867 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55193cb343c8b2562b8435b678e1e435-k8s-certs\") pod \"kube-controller-manager-srv-414dr.gb1.brightbox.com\" (UID: \"55193cb343c8b2562b8435b678e1e435\") " pod="kube-system/kube-controller-manager-srv-414dr.gb1.brightbox.com" Jan 14 13:36:21.115109 kubelet[2966]: I0114 13:36:21.114902 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/55193cb343c8b2562b8435b678e1e435-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-414dr.gb1.brightbox.com\" (UID: \"55193cb343c8b2562b8435b678e1e435\") " pod="kube-system/kube-controller-manager-srv-414dr.gb1.brightbox.com" Jan 14 13:36:21.115109 kubelet[2966]: I0114 13:36:21.114934 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/947cbb413863a46c4597b86dc5b28a71-kubeconfig\") pod \"kube-scheduler-srv-414dr.gb1.brightbox.com\" (UID: \"947cbb413863a46c4597b86dc5b28a71\") " pod="kube-system/kube-scheduler-srv-414dr.gb1.brightbox.com" Jan 14 13:36:21.115109 kubelet[2966]: I0114 13:36:21.114959 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/337947e7d6022e4195bbfe1639fcc305-k8s-certs\") pod \"kube-apiserver-srv-414dr.gb1.brightbox.com\" (UID: \"337947e7d6022e4195bbfe1639fcc305\") " pod="kube-system/kube-apiserver-srv-414dr.gb1.brightbox.com" Jan 14 13:36:21.115109 kubelet[2966]: I0114 13:36:21.114987 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/337947e7d6022e4195bbfe1639fcc305-usr-share-ca-certificates\") pod \"kube-apiserver-srv-414dr.gb1.brightbox.com\" (UID: \"337947e7d6022e4195bbfe1639fcc305\") " pod="kube-system/kube-apiserver-srv-414dr.gb1.brightbox.com" Jan 14 13:36:21.115109 kubelet[2966]: I0114 13:36:21.115030 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55193cb343c8b2562b8435b678e1e435-flexvolume-dir\") pod \"kube-controller-manager-srv-414dr.gb1.brightbox.com\" (UID: \"55193cb343c8b2562b8435b678e1e435\") " pod="kube-system/kube-controller-manager-srv-414dr.gb1.brightbox.com" Jan 14 13:36:21.115336 kubelet[2966]: I0114 13:36:21.115060 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55193cb343c8b2562b8435b678e1e435-kubeconfig\") pod \"kube-controller-manager-srv-414dr.gb1.brightbox.com\" (UID: \"55193cb343c8b2562b8435b678e1e435\") " pod="kube-system/kube-controller-manager-srv-414dr.gb1.brightbox.com" Jan 14 13:36:21.559607 kubelet[2966]: I0114 13:36:21.559270 2966 apiserver.go:52] "Watching apiserver" Jan 14 13:36:21.613849 kubelet[2966]: I0114 13:36:21.613749 2966 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 13:36:21.696612 kubelet[2966]: I0114 13:36:21.695011 2966 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-414dr.gb1.brightbox.com" Jan 14 13:36:21.696612 kubelet[2966]: I0114 13:36:21.695604 2966 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-414dr.gb1.brightbox.com" Jan 14 13:36:21.698088 kubelet[2966]: I0114 13:36:21.698022 2966 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-414dr.gb1.brightbox.com" Jan 14 13:36:21.712647 kubelet[2966]: W0114 13:36:21.712601 2966 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:36:21.712956 kubelet[2966]: E0114 13:36:21.712675 2966 kubelet.go:3196] 
"Failed creating a mirror pod" err="pods \"kube-scheduler-srv-414dr.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-414dr.gb1.brightbox.com" Jan 14 13:36:21.712956 kubelet[2966]: W0114 13:36:21.712953 2966 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:36:21.713302 kubelet[2966]: E0114 13:36:21.712989 2966 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-414dr.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-414dr.gb1.brightbox.com" Jan 14 13:36:21.714111 kubelet[2966]: W0114 13:36:21.714073 2966 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 13:36:21.714431 kubelet[2966]: E0114 13:36:21.714123 2966 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-414dr.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-414dr.gb1.brightbox.com" Jan 14 13:36:21.765034 kubelet[2966]: I0114 13:36:21.764799 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-414dr.gb1.brightbox.com" podStartSLOduration=2.764696938 podStartE2EDuration="2.764696938s" podCreationTimestamp="2026-01-14 13:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 13:36:21.747137985 +0000 UTC m=+1.321211590" watchObservedRunningTime="2026-01-14 13:36:21.764696938 +0000 UTC m=+1.338770520" Jan 14 13:36:21.779805 kubelet[2966]: I0114 13:36:21.779286 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-414dr.gb1.brightbox.com" podStartSLOduration=1.779267161 podStartE2EDuration="1.779267161s" podCreationTimestamp="2026-01-14 13:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 13:36:21.766309688 +0000 UTC m=+1.340383296" watchObservedRunningTime="2026-01-14 13:36:21.779267161 +0000 UTC m=+1.353340756" Jan 14 13:36:21.799197 kubelet[2966]: I0114 13:36:21.799126 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-414dr.gb1.brightbox.com" podStartSLOduration=1.799105062 podStartE2EDuration="1.799105062s" podCreationTimestamp="2026-01-14 13:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 13:36:21.780709742 +0000 UTC m=+1.354783322" watchObservedRunningTime="2026-01-14 13:36:21.799105062 +0000 UTC m=+1.373178655" Jan 14 13:36:25.620365 kubelet[2966]: I0114 13:36:25.619842 2966 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 13:36:25.621037 containerd[1649]: time="2026-01-14T13:36:25.620237738Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 14 13:36:25.621403 kubelet[2966]: I0114 13:36:25.620534 2966 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 13:36:26.539182 systemd[1]: Created slice kubepods-besteffort-podb7d2f5f2_6606_4e6e_9530_844f7ba0f30d.slice - libcontainer container kubepods-besteffort-podb7d2f5f2_6606_4e6e_9530_844f7ba0f30d.slice. Jan 14 13:36:26.551304 kubelet[2966]: I0114 13:36:26.551258 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmcw4\" (UniqueName: \"kubernetes.io/projected/b7d2f5f2-6606-4e6e-9530-844f7ba0f30d-kube-api-access-wmcw4\") pod \"kube-proxy-qffnp\" (UID: \"b7d2f5f2-6606-4e6e-9530-844f7ba0f30d\") " pod="kube-system/kube-proxy-qffnp" Jan 14 13:36:26.552449 kubelet[2966]: I0114 13:36:26.551436 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b7d2f5f2-6606-4e6e-9530-844f7ba0f30d-kube-proxy\") pod \"kube-proxy-qffnp\" (UID: \"b7d2f5f2-6606-4e6e-9530-844f7ba0f30d\") " pod="kube-system/kube-proxy-qffnp" Jan 14 13:36:26.552449 kubelet[2966]: I0114 13:36:26.551505 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b7d2f5f2-6606-4e6e-9530-844f7ba0f30d-xtables-lock\") pod \"kube-proxy-qffnp\" (UID: \"b7d2f5f2-6606-4e6e-9530-844f7ba0f30d\") " pod="kube-system/kube-proxy-qffnp" Jan 14 13:36:26.552449 kubelet[2966]: I0114 13:36:26.551532 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b7d2f5f2-6606-4e6e-9530-844f7ba0f30d-lib-modules\") pod \"kube-proxy-qffnp\" (UID: \"b7d2f5f2-6606-4e6e-9530-844f7ba0f30d\") " pod="kube-system/kube-proxy-qffnp" Jan 14 13:36:26.717218 systemd[1]: Created slice kubepods-besteffort-pod33c2dc2d_86c5_464a_8833_eae612730c5a.slice - libcontainer container kubepods-besteffort-pod33c2dc2d_86c5_464a_8833_eae612730c5a.slice. 
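
The slice names in the systemd messages above follow the kubelet's naming scheme for the systemd cgroup driver: the QoS class plus the pod UID with dashes mapped to underscores. A small illustrative sketch of that mapping (the helper name is made up, not taken from the kubelet source):

    # Map a pod UID to the besteffort slice name seen in the systemd messages above.
    def besteffort_slice(pod_uid: str) -> str:
        return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

    print(besteffort_slice("b7d2f5f2-6606-4e6e-9530-844f7ba0f30d"))
    # -> kubepods-besteffort-podb7d2f5f2_6606_4e6e_9530_844f7ba0f30d.slice
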
Jan 14 13:36:26.752830 kubelet[2966]: I0114 13:36:26.752772 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrfpc\" (UniqueName: \"kubernetes.io/projected/33c2dc2d-86c5-464a-8833-eae612730c5a-kube-api-access-wrfpc\") pod \"tigera-operator-7dcd859c48-jk98s\" (UID: \"33c2dc2d-86c5-464a-8833-eae612730c5a\") " pod="tigera-operator/tigera-operator-7dcd859c48-jk98s" Jan 14 13:36:26.753511 kubelet[2966]: I0114 13:36:26.753430 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/33c2dc2d-86c5-464a-8833-eae612730c5a-var-lib-calico\") pod \"tigera-operator-7dcd859c48-jk98s\" (UID: \"33c2dc2d-86c5-464a-8833-eae612730c5a\") " pod="tigera-operator/tigera-operator-7dcd859c48-jk98s" Jan 14 13:36:26.849138 containerd[1649]: time="2026-01-14T13:36:26.848804297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qffnp,Uid:b7d2f5f2-6606-4e6e-9530-844f7ba0f30d,Namespace:kube-system,Attempt:0,}" Jan 14 13:36:26.890899 containerd[1649]: time="2026-01-14T13:36:26.890736907Z" level=info msg="connecting to shim 372e37b1d1b99bc289a171aa71e6c5b47a2172e7d207a10961943b63544b8176" address="unix:///run/containerd/s/fa7896ed053c9616aa80c5501574cc6576c88f7049476e6ee6f5c05ca0ba19db" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:36:26.936843 systemd[1]: Started cri-containerd-372e37b1d1b99bc289a171aa71e6c5b47a2172e7d207a10961943b63544b8176.scope - libcontainer container 372e37b1d1b99bc289a171aa71e6c5b47a2172e7d207a10961943b63544b8176. Jan 14 13:36:26.956000 audit: BPF prog-id=137 op=LOAD Jan 14 13:36:26.958059 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 14 13:36:26.958140 kernel: audit: type=1334 audit(1768397786.956:442): prog-id=137 op=LOAD Jan 14 13:36:26.960000 audit: BPF prog-id=138 op=LOAD Jan 14 13:36:26.960000 audit[3033]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3022 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:26.964010 kernel: audit: type=1334 audit(1768397786.960:443): prog-id=138 op=LOAD Jan 14 13:36:26.964072 kernel: audit: type=1300 audit(1768397786.960:443): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3022 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:26.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337326533376231643162393962633238396131373161613731653663 Jan 14 13:36:26.969048 kernel: audit: type=1327 audit(1768397786.960:443): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337326533376231643162393962633238396131373161613731653663 Jan 14 13:36:26.960000 audit: BPF prog-id=138 op=UNLOAD Jan 14 13:36:26.972690 kernel: audit: type=1334 audit(1768397786.960:444): prog-id=138 op=UNLOAD Jan 14 13:36:26.960000 audit[3033]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:26.975392 kernel: audit: type=1300 audit(1768397786.960:444): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:26.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337326533376231643162393962633238396131373161613731653663 Jan 14 13:36:26.960000 audit: BPF prog-id=139 op=LOAD Jan 14 13:36:26.985028 kernel: audit: type=1327 audit(1768397786.960:444): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337326533376231643162393962633238396131373161613731653663 Jan 14 13:36:26.985084 kernel: audit: type=1334 audit(1768397786.960:445): prog-id=139 op=LOAD Jan 14 13:36:26.960000 audit[3033]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3022 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:26.992597 kernel: audit: type=1300 audit(1768397786.960:445): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3022 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:26.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337326533376231643162393962633238396131373161613731653663 Jan 14 13:36:27.001608 kernel: audit: type=1327 audit(1768397786.960:445): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337326533376231643162393962633238396131373161613731653663 Jan 14 13:36:26.960000 audit: BPF prog-id=140 op=LOAD Jan 14 13:36:26.960000 audit[3033]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3022 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:26.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337326533376231643162393962633238396131373161613731653663 Jan 14 13:36:26.960000 audit: BPF prog-id=140 op=UNLOAD Jan 14 13:36:26.960000 audit[3033]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 
a3=0 items=0 ppid=3022 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:26.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337326533376231643162393962633238396131373161613731653663 Jan 14 13:36:26.960000 audit: BPF prog-id=139 op=UNLOAD Jan 14 13:36:26.960000 audit[3033]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:26.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337326533376231643162393962633238396131373161613731653663 Jan 14 13:36:26.960000 audit: BPF prog-id=141 op=LOAD Jan 14 13:36:26.960000 audit[3033]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3022 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:26.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337326533376231643162393962633238396131373161613731653663 Jan 14 13:36:27.011529 containerd[1649]: time="2026-01-14T13:36:27.011405158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qffnp,Uid:b7d2f5f2-6606-4e6e-9530-844f7ba0f30d,Namespace:kube-system,Attempt:0,} returns sandbox id \"372e37b1d1b99bc289a171aa71e6c5b47a2172e7d207a10961943b63544b8176\"" Jan 14 13:36:27.016940 containerd[1649]: time="2026-01-14T13:36:27.016906166Z" level=info msg="CreateContainer within sandbox \"372e37b1d1b99bc289a171aa71e6c5b47a2172e7d207a10961943b63544b8176\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 13:36:27.023687 containerd[1649]: time="2026-01-14T13:36:27.023654279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jk98s,Uid:33c2dc2d-86c5-464a-8833-eae612730c5a,Namespace:tigera-operator,Attempt:0,}" Jan 14 13:36:27.034836 containerd[1649]: time="2026-01-14T13:36:27.033504094Z" level=info msg="Container 3cd51a87aa1638b48805bd61e7cd78a4b890e7891051ab93ef7177dddc70e425: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:36:27.047697 containerd[1649]: time="2026-01-14T13:36:27.047654634Z" level=info msg="CreateContainer within sandbox \"372e37b1d1b99bc289a171aa71e6c5b47a2172e7d207a10961943b63544b8176\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3cd51a87aa1638b48805bd61e7cd78a4b890e7891051ab93ef7177dddc70e425\"" Jan 14 13:36:27.049853 containerd[1649]: time="2026-01-14T13:36:27.049810209Z" level=info msg="StartContainer for \"3cd51a87aa1638b48805bd61e7cd78a4b890e7891051ab93ef7177dddc70e425\"" Jan 14 13:36:27.052156 containerd[1649]: time="2026-01-14T13:36:27.052113986Z" level=info msg="connecting to shim 
3cd51a87aa1638b48805bd61e7cd78a4b890e7891051ab93ef7177dddc70e425" address="unix:///run/containerd/s/fa7896ed053c9616aa80c5501574cc6576c88f7049476e6ee6f5c05ca0ba19db" protocol=ttrpc version=3 Jan 14 13:36:27.064638 containerd[1649]: time="2026-01-14T13:36:27.064473040Z" level=info msg="connecting to shim b4603f60b5d3930e65b16490072b79ab54d18586e6219b28f08e37451a52b653" address="unix:///run/containerd/s/b74ff31e5e2118dbec2b394bdc4f74fe7153ee478fc28b6296c76ca6a5ec9764" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:36:27.086819 systemd[1]: Started cri-containerd-3cd51a87aa1638b48805bd61e7cd78a4b890e7891051ab93ef7177dddc70e425.scope - libcontainer container 3cd51a87aa1638b48805bd61e7cd78a4b890e7891051ab93ef7177dddc70e425. Jan 14 13:36:27.123824 systemd[1]: Started cri-containerd-b4603f60b5d3930e65b16490072b79ab54d18586e6219b28f08e37451a52b653.scope - libcontainer container b4603f60b5d3930e65b16490072b79ab54d18586e6219b28f08e37451a52b653. Jan 14 13:36:27.147000 audit: BPF prog-id=142 op=LOAD Jan 14 13:36:27.148000 audit: BPF prog-id=143 op=LOAD Jan 14 13:36:27.148000 audit[3092]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=3074 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234363033663630623564333933306536356231363439303037326237 Jan 14 13:36:27.148000 audit: BPF prog-id=143 op=UNLOAD Jan 14 13:36:27.148000 audit[3092]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3074 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234363033663630623564333933306536356231363439303037326237 Jan 14 13:36:27.149000 audit: BPF prog-id=144 op=LOAD Jan 14 13:36:27.149000 audit[3092]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=3074 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234363033663630623564333933306536356231363439303037326237 Jan 14 13:36:27.149000 audit: BPF prog-id=145 op=LOAD Jan 14 13:36:27.149000 audit[3092]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=3074 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.149000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234363033663630623564333933306536356231363439303037326237 Jan 14 13:36:27.149000 audit: BPF prog-id=145 op=UNLOAD Jan 14 13:36:27.149000 audit[3092]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3074 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234363033663630623564333933306536356231363439303037326237 Jan 14 13:36:27.149000 audit: BPF prog-id=144 op=UNLOAD Jan 14 13:36:27.149000 audit[3092]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3074 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.149000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234363033663630623564333933306536356231363439303037326237 Jan 14 13:36:27.150000 audit: BPF prog-id=146 op=LOAD Jan 14 13:36:27.150000 audit[3092]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=3074 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.150000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234363033663630623564333933306536356231363439303037326237 Jan 14 13:36:27.153000 audit: BPF prog-id=147 op=LOAD Jan 14 13:36:27.153000 audit[3060]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3022 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363643531613837616131363338623438383035626436316537636437 Jan 14 13:36:27.153000 audit: BPF prog-id=148 op=LOAD Jan 14 13:36:27.153000 audit[3060]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3022 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.153000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363643531613837616131363338623438383035626436316537636437 Jan 14 13:36:27.154000 audit: BPF prog-id=148 op=UNLOAD Jan 14 13:36:27.154000 audit[3060]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363643531613837616131363338623438383035626436316537636437 Jan 14 13:36:27.154000 audit: BPF prog-id=147 op=UNLOAD Jan 14 13:36:27.154000 audit[3060]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363643531613837616131363338623438383035626436316537636437 Jan 14 13:36:27.154000 audit: BPF prog-id=149 op=LOAD Jan 14 13:36:27.154000 audit[3060]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3022 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.154000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363643531613837616131363338623438383035626436316537636437 Jan 14 13:36:27.197252 containerd[1649]: time="2026-01-14T13:36:27.197203817Z" level=info msg="StartContainer for \"3cd51a87aa1638b48805bd61e7cd78a4b890e7891051ab93ef7177dddc70e425\" returns successfully" Jan 14 13:36:27.226162 containerd[1649]: time="2026-01-14T13:36:27.226022817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jk98s,Uid:33c2dc2d-86c5-464a-8833-eae612730c5a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b4603f60b5d3930e65b16490072b79ab54d18586e6219b28f08e37451a52b653\"" Jan 14 13:36:27.229299 containerd[1649]: time="2026-01-14T13:36:27.229160723Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 13:36:27.684000 audit[3169]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3169 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.684000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff393aab60 a2=0 a3=7fff393aab4c items=0 ppid=3099 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.684000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 13:36:27.692185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4190591763.mount: Deactivated successfully. Jan 14 13:36:27.699000 audit[3171]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3171 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.699000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff7d3ab50 a2=0 a3=7ffff7d3ab3c items=0 ppid=3099 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.699000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 13:36:27.700000 audit[3172]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3172 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:27.700000 audit[3172]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffd6770960 a2=0 a3=7fffd677094c items=0 ppid=3099 pid=3172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.700000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 13:36:27.703000 audit[3173]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.703000 audit[3173]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdbae46050 a2=0 a3=7ffdbae4603c items=0 ppid=3099 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.703000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 13:36:27.711000 audit[3174]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:27.711000 audit[3174]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc2d031c30 a2=0 a3=7ffc2d031c1c items=0 ppid=3099 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.711000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 13:36:27.726000 audit[3175]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:27.726000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd3b0acb40 a2=0 a3=7ffd3b0acb2c items=0 ppid=3099 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.726000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 13:36:27.741532 kubelet[2966]: I0114 13:36:27.741428 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qffnp" podStartSLOduration=1.7414077639999999 podStartE2EDuration="1.741407764s" podCreationTimestamp="2026-01-14 13:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 13:36:27.737527412 +0000 UTC m=+7.311601007" watchObservedRunningTime="2026-01-14 13:36:27.741407764 +0000 UTC m=+7.315481345" Jan 14 13:36:27.810000 audit[3176]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3176 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.810000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd02ba87a0 a2=0 a3=7ffd02ba878c items=0 ppid=3099 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.810000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 13:36:27.816000 audit[3178]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3178 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.816000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffeaa210040 a2=0 a3=7ffeaa21002c items=0 ppid=3099 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.816000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 13:36:27.822000 audit[3181]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.822000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc9ca9d7b0 a2=0 a3=7ffc9ca9d79c items=0 ppid=3099 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.822000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 13:36:27.824000 audit[3182]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.824000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff13b7fbb0 a2=0 a3=7fff13b7fb9c items=0 ppid=3099 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
13:36:27.824000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 13:36:27.828000 audit[3184]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.828000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcf24e98d0 a2=0 a3=7ffcf24e98bc items=0 ppid=3099 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.828000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 13:36:27.829000 audit[3185]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3185 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.829000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8e948900 a2=0 a3=7ffe8e9488ec items=0 ppid=3099 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.829000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 13:36:27.836000 audit[3187]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3187 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.836000 audit[3187]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd4e806310 a2=0 a3=7ffd4e8062fc items=0 ppid=3099 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.836000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 13:36:27.844000 audit[3190]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3190 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.844000 audit[3190]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcb7d36ea0 a2=0 a3=7ffcb7d36e8c items=0 ppid=3099 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.844000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 13:36:27.846000 audit[3191]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.846000 audit[3191]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe01c24170 
a2=0 a3=7ffe01c2415c items=0 ppid=3099 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.846000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 13:36:27.851000 audit[3193]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.851000 audit[3193]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd07b26560 a2=0 a3=7ffd07b2654c items=0 ppid=3099 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.851000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 13:36:27.853000 audit[3194]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3194 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.853000 audit[3194]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcb342b0f0 a2=0 a3=7ffcb342b0dc items=0 ppid=3099 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.853000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 13:36:27.858000 audit[3196]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3196 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.858000 audit[3196]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdd18993c0 a2=0 a3=7ffdd18993ac items=0 ppid=3099 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.858000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 13:36:27.868000 audit[3199]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3199 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.868000 audit[3199]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc16283500 a2=0 a3=7ffc162834ec items=0 ppid=3099 pid=3199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.868000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 13:36:27.876000 audit[3202]: 
NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3202 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.876000 audit[3202]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff056d0380 a2=0 a3=7fff056d036c items=0 ppid=3099 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.876000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 13:36:27.878000 audit[3203]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3203 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.878000 audit[3203]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd475df6c0 a2=0 a3=7ffd475df6ac items=0 ppid=3099 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.878000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 13:36:27.883000 audit[3205]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3205 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.883000 audit[3205]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff0e468970 a2=0 a3=7fff0e46895c items=0 ppid=3099 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.883000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 13:36:27.890000 audit[3208]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3208 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.890000 audit[3208]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff9e47e5d0 a2=0 a3=7fff9e47e5bc items=0 ppid=3099 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.890000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 13:36:27.892000 audit[3209]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.892000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf5bdc350 a2=0 a3=7ffcf5bdc33c items=0 ppid=3099 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.892000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 13:36:27.896000 audit[3211]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3211 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 13:36:27.896000 audit[3211]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffe6a0f5e20 a2=0 a3=7ffe6a0f5e0c items=0 ppid=3099 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.896000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 13:36:27.927000 audit[3217]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3217 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:27.927000 audit[3217]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff5d0a9a60 a2=0 a3=7fff5d0a9a4c items=0 ppid=3099 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.927000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:27.940000 audit[3217]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3217 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:27.940000 audit[3217]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff5d0a9a60 a2=0 a3=7fff5d0a9a4c items=0 ppid=3099 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.940000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:27.945000 audit[3222]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3222 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:27.945000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff74896160 a2=0 a3=7fff7489614c items=0 ppid=3099 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.945000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 13:36:27.949000 audit[3224]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3224 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:27.949000 audit[3224]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffb796a200 a2=0 a3=7fffb796a1ec items=0 ppid=3099 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.949000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 13:36:27.956000 audit[3227]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:27.956000 audit[3227]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc844b3a80 a2=0 a3=7ffc844b3a6c items=0 ppid=3099 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.956000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 13:36:27.958000 audit[3228]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3228 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:27.958000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffec4e8290 a2=0 a3=7fffec4e827c items=0 ppid=3099 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.958000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 13:36:27.962000 audit[3230]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3230 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:27.962000 audit[3230]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc96b855e0 a2=0 a3=7ffc96b855cc items=0 ppid=3099 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.962000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 13:36:27.965000 audit[3231]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3231 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:27.965000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc28fd40d0 a2=0 a3=7ffc28fd40bc items=0 ppid=3099 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.965000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 13:36:27.969000 audit[3233]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3233 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:27.969000 audit[3233]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc7e73a030 
a2=0 a3=7ffc7e73a01c items=0 ppid=3099 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.969000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 13:36:27.975000 audit[3236]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3236 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:27.975000 audit[3236]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fffe580d220 a2=0 a3=7fffe580d20c items=0 ppid=3099 pid=3236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.975000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 13:36:27.977000 audit[3237]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3237 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:27.977000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff45179d00 a2=0 a3=7fff45179cec items=0 ppid=3099 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.977000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 13:36:27.981000 audit[3239]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3239 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:27.981000 audit[3239]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffcadd17c0 a2=0 a3=7fffcadd17ac items=0 ppid=3099 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.981000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 13:36:27.983000 audit[3240]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3240 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:27.983000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffddb82ae50 a2=0 a3=7ffddb82ae3c items=0 ppid=3099 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.983000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 
13:36:27.987000 audit[3242]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3242 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:27.987000 audit[3242]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffed0a179a0 a2=0 a3=7ffed0a1798c items=0 ppid=3099 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.987000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 13:36:27.994000 audit[3245]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3245 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:27.994000 audit[3245]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd36ddd2c0 a2=0 a3=7ffd36ddd2ac items=0 ppid=3099 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:27.994000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 13:36:28.001000 audit[3248]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3248 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:28.001000 audit[3248]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcf126ded0 a2=0 a3=7ffcf126debc items=0 ppid=3099 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:28.001000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 13:36:28.003000 audit[3249]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3249 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:28.003000 audit[3249]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc9c769d50 a2=0 a3=7ffc9c769d3c items=0 ppid=3099 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:28.003000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 13:36:28.007000 audit[3251]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:28.007000 audit[3251]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffded7b3190 a2=0 a3=7ffded7b317c items=0 ppid=3099 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:28.007000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 13:36:28.013000 audit[3254]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3254 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:28.013000 audit[3254]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff29c76670 a2=0 a3=7fff29c7665c items=0 ppid=3099 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:28.013000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 13:36:28.015000 audit[3255]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3255 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:28.015000 audit[3255]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffda903ab80 a2=0 a3=7ffda903ab6c items=0 ppid=3099 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:28.015000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 13:36:28.019000 audit[3257]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3257 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:28.019000 audit[3257]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe06ffd9c0 a2=0 a3=7ffe06ffd9ac items=0 ppid=3099 pid=3257 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:28.019000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 13:36:28.021000 audit[3258]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3258 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:28.021000 audit[3258]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd331fec90 a2=0 a3=7ffd331fec7c items=0 ppid=3099 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:28.021000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 13:36:28.024000 audit[3260]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3260 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:28.024000 audit[3260]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=228 a0=3 a1=7ffc52699750 a2=0 a3=7ffc5269973c items=0 ppid=3099 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:28.024000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 13:36:28.030000 audit[3263]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3263 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 13:36:28.030000 audit[3263]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff35562420 a2=0 a3=7fff3556240c items=0 ppid=3099 pid=3263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:28.030000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 13:36:28.035000 audit[3265]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 13:36:28.035000 audit[3265]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7fffd48b3d20 a2=0 a3=7fffd48b3d0c items=0 ppid=3099 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:28.035000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:28.036000 audit[3265]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 13:36:28.036000 audit[3265]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fffd48b3d20 a2=0 a3=7fffd48b3d0c items=0 ppid=3099 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:28.036000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:29.430223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount268396337.mount: Deactivated successfully. 
Jan 14 13:36:31.424403 containerd[1649]: time="2026-01-14T13:36:31.424326331Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:31.429533 containerd[1649]: time="2026-01-14T13:36:31.429157452Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 14 13:36:31.430429 containerd[1649]: time="2026-01-14T13:36:31.430359628Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:31.433139 containerd[1649]: time="2026-01-14T13:36:31.433110967Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:31.434486 containerd[1649]: time="2026-01-14T13:36:31.434445923Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.205181791s" Jan 14 13:36:31.434636 containerd[1649]: time="2026-01-14T13:36:31.434611039Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 13:36:31.439006 containerd[1649]: time="2026-01-14T13:36:31.438866788Z" level=info msg="CreateContainer within sandbox \"b4603f60b5d3930e65b16490072b79ab54d18586e6219b28f08e37451a52b653\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 13:36:31.458622 containerd[1649]: time="2026-01-14T13:36:31.456097929Z" level=info msg="Container 30a56312f4c77e41adbad48be87224a7aaa86d53f0128bb9945b68681fa2248d: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:36:31.464944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1735555662.mount: Deactivated successfully. Jan 14 13:36:31.485557 containerd[1649]: time="2026-01-14T13:36:31.485465644Z" level=info msg="CreateContainer within sandbox \"b4603f60b5d3930e65b16490072b79ab54d18586e6219b28f08e37451a52b653\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"30a56312f4c77e41adbad48be87224a7aaa86d53f0128bb9945b68681fa2248d\"" Jan 14 13:36:31.486955 containerd[1649]: time="2026-01-14T13:36:31.486462617Z" level=info msg="StartContainer for \"30a56312f4c77e41adbad48be87224a7aaa86d53f0128bb9945b68681fa2248d\"" Jan 14 13:36:31.488297 containerd[1649]: time="2026-01-14T13:36:31.488266815Z" level=info msg="connecting to shim 30a56312f4c77e41adbad48be87224a7aaa86d53f0128bb9945b68681fa2248d" address="unix:///run/containerd/s/b74ff31e5e2118dbec2b394bdc4f74fe7153ee478fc28b6296c76ca6a5ec9764" protocol=ttrpc version=3 Jan 14 13:36:31.523869 systemd[1]: Started cri-containerd-30a56312f4c77e41adbad48be87224a7aaa86d53f0128bb9945b68681fa2248d.scope - libcontainer container 30a56312f4c77e41adbad48be87224a7aaa86d53f0128bb9945b68681fa2248d. 
Jan 14 13:36:31.543000 audit: BPF prog-id=150 op=LOAD Jan 14 13:36:31.543000 audit: BPF prog-id=151 op=LOAD Jan 14 13:36:31.543000 audit[3276]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3074 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:31.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613536333132663463373765343161646261643438626538373232 Jan 14 13:36:31.543000 audit: BPF prog-id=151 op=UNLOAD Jan 14 13:36:31.543000 audit[3276]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3074 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:31.543000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613536333132663463373765343161646261643438626538373232 Jan 14 13:36:31.544000 audit: BPF prog-id=152 op=LOAD Jan 14 13:36:31.544000 audit[3276]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3074 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:31.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613536333132663463373765343161646261643438626538373232 Jan 14 13:36:31.544000 audit: BPF prog-id=153 op=LOAD Jan 14 13:36:31.544000 audit[3276]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3074 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:31.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613536333132663463373765343161646261643438626538373232 Jan 14 13:36:31.544000 audit: BPF prog-id=153 op=UNLOAD Jan 14 13:36:31.544000 audit[3276]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3074 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:31.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613536333132663463373765343161646261643438626538373232 Jan 14 13:36:31.544000 audit: BPF prog-id=152 op=UNLOAD Jan 14 13:36:31.544000 audit[3276]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3074 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:31.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613536333132663463373765343161646261643438626538373232 Jan 14 13:36:31.544000 audit: BPF prog-id=154 op=LOAD Jan 14 13:36:31.544000 audit[3276]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3074 pid=3276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:31.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3330613536333132663463373765343161646261643438626538373232 Jan 14 13:36:31.574235 containerd[1649]: time="2026-01-14T13:36:31.574132175Z" level=info msg="StartContainer for \"30a56312f4c77e41adbad48be87224a7aaa86d53f0128bb9945b68681fa2248d\" returns successfully" Jan 14 13:36:37.494830 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 14 13:36:37.495063 kernel: audit: type=1325 audit(1768397797.482:522): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:37.482000 audit[3337]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:37.482000 audit[3337]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc94bc9900 a2=0 a3=7ffc94bc98ec items=0 ppid=3099 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:37.504519 kernel: audit: type=1300 audit(1768397797.482:522): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc94bc9900 a2=0 a3=7ffc94bc98ec items=0 ppid=3099 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:37.482000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:37.508945 kernel: audit: type=1327 audit(1768397797.482:522): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:37.517717 kernel: audit: type=1325 audit(1768397797.502:523): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:37.502000 audit[3337]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3337 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:37.502000 audit[3337]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc94bc9900 a2=0 a3=0 items=0 ppid=3099 pid=3337 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:37.527763 kernel: audit: type=1300 audit(1768397797.502:523): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc94bc9900 a2=0 a3=0 items=0 ppid=3099 pid=3337 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:37.502000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:37.530650 kernel: audit: type=1327 audit(1768397797.502:523): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:37.565000 audit[3339]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3339 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:37.570770 kernel: audit: type=1325 audit(1768397797.565:524): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3339 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:37.565000 audit[3339]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc20dc4ed0 a2=0 a3=7ffc20dc4ebc items=0 ppid=3099 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:37.578587 kernel: audit: type=1300 audit(1768397797.565:524): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc20dc4ed0 a2=0 a3=7ffc20dc4ebc items=0 ppid=3099 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:37.565000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:37.588634 kernel: audit: type=1327 audit(1768397797.565:524): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:37.594000 audit[3339]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3339 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:37.599683 kernel: audit: type=1325 audit(1768397797.594:525): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3339 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:37.594000 audit[3339]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc20dc4ed0 a2=0 a3=0 items=0 ppid=3099 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:37.594000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:38.918459 sudo[1952]: pam_unix(sudo:session): session closed for user root Jan 14 13:36:38.919000 audit[1952]: USER_END pid=1952 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" 
exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:36:38.919000 audit[1952]: CRED_DISP pid=1952 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 13:36:39.018595 sshd[1951]: Connection closed by 68.220.241.50 port 35950 Jan 14 13:36:39.019506 sshd-session[1947]: pam_unix(sshd:session): session closed for user core Jan 14 13:36:39.023000 audit[1947]: USER_END pid=1947 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:36:39.024000 audit[1947]: CRED_DISP pid=1947 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:36:39.029351 systemd-logind[1615]: Session 12 logged out. Waiting for processes to exit. Jan 14 13:36:39.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.49.6:22-68.220.241.50:35950 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:36:39.029581 systemd[1]: sshd@8-10.230.49.6:22-68.220.241.50:35950.service: Deactivated successfully. Jan 14 13:36:39.035891 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 13:36:39.037910 systemd[1]: session-12.scope: Consumed 6.655s CPU time, 151.7M memory peak. Jan 14 13:36:39.047139 systemd-logind[1615]: Removed session 12. 
Jan 14 13:36:43.067008 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 13:36:43.067242 kernel: audit: type=1325 audit(1768397803.060:531): table=filter:109 family=2 entries=16 op=nft_register_rule pid=3361 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:43.060000 audit[3361]: NETFILTER_CFG table=filter:109 family=2 entries=16 op=nft_register_rule pid=3361 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:43.075162 kernel: audit: type=1300 audit(1768397803.060:531): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffeb2690440 a2=0 a3=7ffeb269042c items=0 ppid=3099 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:43.060000 audit[3361]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffeb2690440 a2=0 a3=7ffeb269042c items=0 ppid=3099 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:43.060000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:43.081639 kernel: audit: type=1327 audit(1768397803.060:531): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:43.087000 audit[3361]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3361 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:43.094779 kernel: audit: type=1325 audit(1768397803.087:532): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3361 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:43.087000 audit[3361]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb2690440 a2=0 a3=0 items=0 ppid=3099 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:43.101605 kernel: audit: type=1300 audit(1768397803.087:532): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb2690440 a2=0 a3=0 items=0 ppid=3099 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:43.087000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:43.105612 kernel: audit: type=1327 audit(1768397803.087:532): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:43.119000 audit[3363]: NETFILTER_CFG table=filter:111 family=2 entries=17 op=nft_register_rule pid=3363 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:43.123606 kernel: audit: type=1325 audit(1768397803.119:533): table=filter:111 family=2 entries=17 op=nft_register_rule pid=3363 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:43.119000 audit[3363]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe4b7e6250 a2=0 a3=7ffe4b7e623c items=0 ppid=3099 pid=3363 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:43.131603 kernel: audit: type=1300 audit(1768397803.119:533): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe4b7e6250 a2=0 a3=7ffe4b7e623c items=0 ppid=3099 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:43.119000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:43.132000 audit[3363]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3363 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:43.135875 kernel: audit: type=1327 audit(1768397803.119:533): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:43.135986 kernel: audit: type=1325 audit(1768397803.132:534): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3363 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:43.132000 audit[3363]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe4b7e6250 a2=0 a3=0 items=0 ppid=3099 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:43.132000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:44.161000 audit[3365]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3365 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:44.161000 audit[3365]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe94801bd0 a2=0 a3=7ffe94801bbc items=0 ppid=3099 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:44.161000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:44.164000 audit[3365]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3365 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:44.164000 audit[3365]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe94801bd0 a2=0 a3=0 items=0 ppid=3099 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:44.164000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:45.004531 kubelet[2966]: I0114 13:36:45.002885 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-jk98s" podStartSLOduration=14.794297121 podStartE2EDuration="19.002852602s" podCreationTimestamp="2026-01-14 13:36:26 +0000 UTC" firstStartedPulling="2026-01-14 13:36:27.227541938 +0000 UTC m=+6.801615512" 
lastFinishedPulling="2026-01-14 13:36:31.436097419 +0000 UTC m=+11.010170993" observedRunningTime="2026-01-14 13:36:31.742898461 +0000 UTC m=+11.316972059" watchObservedRunningTime="2026-01-14 13:36:45.002852602 +0000 UTC m=+24.576926197" Jan 14 13:36:45.013727 kubelet[2966]: I0114 13:36:45.013501 2966 status_manager.go:890] "Failed to get status for pod" podUID="1c64e33b-9783-407b-a343-6cdb99d96d98" pod="calico-system/calico-typha-64479c4d78-q6rk7" err="pods \"calico-typha-64479c4d78-q6rk7\" is forbidden: User \"system:node:srv-414dr.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-414dr.gb1.brightbox.com' and this object" Jan 14 13:36:45.013727 kubelet[2966]: W0114 13:36:45.013637 2966 reflector.go:569] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:srv-414dr.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'srv-414dr.gb1.brightbox.com' and this object Jan 14 13:36:45.013727 kubelet[2966]: E0114 13:36:45.013685 2966 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:srv-414dr.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-414dr.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 14 13:36:45.022074 systemd[1]: Created slice kubepods-besteffort-pod1c64e33b_9783_407b_a343_6cdb99d96d98.slice - libcontainer container kubepods-besteffort-pod1c64e33b_9783_407b_a343_6cdb99d96d98.slice. Jan 14 13:36:45.154530 systemd[1]: Created slice kubepods-besteffort-podc403b253_22d2_4d22_bd17_95f0faaf7b51.slice - libcontainer container kubepods-besteffort-podc403b253_22d2_4d22_bd17_95f0faaf7b51.slice. 
Jan 14 13:36:45.164929 kubelet[2966]: I0114 13:36:45.164736 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c64e33b-9783-407b-a343-6cdb99d96d98-tigera-ca-bundle\") pod \"calico-typha-64479c4d78-q6rk7\" (UID: \"1c64e33b-9783-407b-a343-6cdb99d96d98\") " pod="calico-system/calico-typha-64479c4d78-q6rk7" Jan 14 13:36:45.164929 kubelet[2966]: I0114 13:36:45.164791 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1c64e33b-9783-407b-a343-6cdb99d96d98-typha-certs\") pod \"calico-typha-64479c4d78-q6rk7\" (UID: \"1c64e33b-9783-407b-a343-6cdb99d96d98\") " pod="calico-system/calico-typha-64479c4d78-q6rk7" Jan 14 13:36:45.164929 kubelet[2966]: I0114 13:36:45.164824 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cth7g\" (UniqueName: \"kubernetes.io/projected/1c64e33b-9783-407b-a343-6cdb99d96d98-kube-api-access-cth7g\") pod \"calico-typha-64479c4d78-q6rk7\" (UID: \"1c64e33b-9783-407b-a343-6cdb99d96d98\") " pod="calico-system/calico-typha-64479c4d78-q6rk7" Jan 14 13:36:45.180000 audit[3368]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:45.180000 audit[3368]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffce18d10d0 a2=0 a3=7ffce18d10bc items=0 ppid=3099 pid=3368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:45.180000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:45.184000 audit[3368]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:45.184000 audit[3368]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffce18d10d0 a2=0 a3=0 items=0 ppid=3099 pid=3368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:45.184000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:45.261991 kubelet[2966]: E0114 13:36:45.261823 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:36:45.266472 kubelet[2966]: I0114 13:36:45.265843 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c403b253-22d2-4d22-bd17-95f0faaf7b51-cni-bin-dir\") pod \"calico-node-q9mmc\" (UID: \"c403b253-22d2-4d22-bd17-95f0faaf7b51\") " pod="calico-system/calico-node-q9mmc" Jan 14 13:36:45.266843 kubelet[2966]: I0114 13:36:45.266702 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" 
(UniqueName: \"kubernetes.io/host-path/c403b253-22d2-4d22-bd17-95f0faaf7b51-policysync\") pod \"calico-node-q9mmc\" (UID: \"c403b253-22d2-4d22-bd17-95f0faaf7b51\") " pod="calico-system/calico-node-q9mmc" Jan 14 13:36:45.266843 kubelet[2966]: I0114 13:36:45.266741 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c403b253-22d2-4d22-bd17-95f0faaf7b51-tigera-ca-bundle\") pod \"calico-node-q9mmc\" (UID: \"c403b253-22d2-4d22-bd17-95f0faaf7b51\") " pod="calico-system/calico-node-q9mmc" Jan 14 13:36:45.266843 kubelet[2966]: I0114 13:36:45.266776 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c403b253-22d2-4d22-bd17-95f0faaf7b51-cni-log-dir\") pod \"calico-node-q9mmc\" (UID: \"c403b253-22d2-4d22-bd17-95f0faaf7b51\") " pod="calico-system/calico-node-q9mmc" Jan 14 13:36:45.266843 kubelet[2966]: I0114 13:36:45.266807 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c403b253-22d2-4d22-bd17-95f0faaf7b51-flexvol-driver-host\") pod \"calico-node-q9mmc\" (UID: \"c403b253-22d2-4d22-bd17-95f0faaf7b51\") " pod="calico-system/calico-node-q9mmc" Jan 14 13:36:45.267056 kubelet[2966]: I0114 13:36:45.266868 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdccb\" (UniqueName: \"kubernetes.io/projected/c403b253-22d2-4d22-bd17-95f0faaf7b51-kube-api-access-bdccb\") pod \"calico-node-q9mmc\" (UID: \"c403b253-22d2-4d22-bd17-95f0faaf7b51\") " pod="calico-system/calico-node-q9mmc" Jan 14 13:36:45.267056 kubelet[2966]: I0114 13:36:45.266960 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c403b253-22d2-4d22-bd17-95f0faaf7b51-lib-modules\") pod \"calico-node-q9mmc\" (UID: \"c403b253-22d2-4d22-bd17-95f0faaf7b51\") " pod="calico-system/calico-node-q9mmc" Jan 14 13:36:45.267137 kubelet[2966]: I0114 13:36:45.267120 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c403b253-22d2-4d22-bd17-95f0faaf7b51-var-run-calico\") pod \"calico-node-q9mmc\" (UID: \"c403b253-22d2-4d22-bd17-95f0faaf7b51\") " pod="calico-system/calico-node-q9mmc" Jan 14 13:36:45.268594 kubelet[2966]: I0114 13:36:45.267260 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c403b253-22d2-4d22-bd17-95f0faaf7b51-var-lib-calico\") pod \"calico-node-q9mmc\" (UID: \"c403b253-22d2-4d22-bd17-95f0faaf7b51\") " pod="calico-system/calico-node-q9mmc" Jan 14 13:36:45.268594 kubelet[2966]: I0114 13:36:45.267309 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c403b253-22d2-4d22-bd17-95f0faaf7b51-node-certs\") pod \"calico-node-q9mmc\" (UID: \"c403b253-22d2-4d22-bd17-95f0faaf7b51\") " pod="calico-system/calico-node-q9mmc" Jan 14 13:36:45.268594 kubelet[2966]: I0114 13:36:45.267362 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/c403b253-22d2-4d22-bd17-95f0faaf7b51-cni-net-dir\") pod \"calico-node-q9mmc\" (UID: \"c403b253-22d2-4d22-bd17-95f0faaf7b51\") " pod="calico-system/calico-node-q9mmc" Jan 14 13:36:45.268594 kubelet[2966]: I0114 13:36:45.267408 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c403b253-22d2-4d22-bd17-95f0faaf7b51-xtables-lock\") pod \"calico-node-q9mmc\" (UID: \"c403b253-22d2-4d22-bd17-95f0faaf7b51\") " pod="calico-system/calico-node-q9mmc" Jan 14 13:36:45.368643 kubelet[2966]: I0114 13:36:45.368183 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1584f8ba-fd2c-4903-be8f-c6577809742f-socket-dir\") pod \"csi-node-driver-7v2t7\" (UID: \"1584f8ba-fd2c-4903-be8f-c6577809742f\") " pod="calico-system/csi-node-driver-7v2t7" Jan 14 13:36:45.368643 kubelet[2966]: I0114 13:36:45.368362 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kg9g\" (UniqueName: \"kubernetes.io/projected/1584f8ba-fd2c-4903-be8f-c6577809742f-kube-api-access-7kg9g\") pod \"csi-node-driver-7v2t7\" (UID: \"1584f8ba-fd2c-4903-be8f-c6577809742f\") " pod="calico-system/csi-node-driver-7v2t7" Jan 14 13:36:45.368643 kubelet[2966]: I0114 13:36:45.368461 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1584f8ba-fd2c-4903-be8f-c6577809742f-registration-dir\") pod \"csi-node-driver-7v2t7\" (UID: \"1584f8ba-fd2c-4903-be8f-c6577809742f\") " pod="calico-system/csi-node-driver-7v2t7" Jan 14 13:36:45.368643 kubelet[2966]: I0114 13:36:45.368491 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1584f8ba-fd2c-4903-be8f-c6577809742f-kubelet-dir\") pod \"csi-node-driver-7v2t7\" (UID: \"1584f8ba-fd2c-4903-be8f-c6577809742f\") " pod="calico-system/csi-node-driver-7v2t7" Jan 14 13:36:45.369082 kubelet[2966]: I0114 13:36:45.368536 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1584f8ba-fd2c-4903-be8f-c6577809742f-varrun\") pod \"csi-node-driver-7v2t7\" (UID: \"1584f8ba-fd2c-4903-be8f-c6577809742f\") " pod="calico-system/csi-node-driver-7v2t7" Jan 14 13:36:45.370684 kubelet[2966]: E0114 13:36:45.370653 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.370684 kubelet[2966]: W0114 13:36:45.370680 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.371758 kubelet[2966]: E0114 13:36:45.371421 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:36:45.371758 kubelet[2966]: E0114 13:36:45.371675 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.371758 kubelet[2966]: W0114 13:36:45.371688 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.371758 kubelet[2966]: E0114 13:36:45.371706 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.372418 kubelet[2966]: E0114 13:36:45.372392 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.372418 kubelet[2966]: W0114 13:36:45.372412 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.372418 kubelet[2966]: E0114 13:36:45.372427 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.373396 kubelet[2966]: E0114 13:36:45.373341 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.373396 kubelet[2966]: W0114 13:36:45.373364 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.373396 kubelet[2966]: E0114 13:36:45.373382 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.374093 kubelet[2966]: E0114 13:36:45.374066 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.374093 kubelet[2966]: W0114 13:36:45.374080 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.374205 kubelet[2966]: E0114 13:36:45.374094 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.374681 kubelet[2966]: E0114 13:36:45.374631 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.374681 kubelet[2966]: W0114 13:36:45.374652 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.374681 kubelet[2966]: E0114 13:36:45.374668 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:36:45.376776 kubelet[2966]: E0114 13:36:45.375703 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.376776 kubelet[2966]: W0114 13:36:45.375750 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.376776 kubelet[2966]: E0114 13:36:45.375781 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.377768 kubelet[2966]: E0114 13:36:45.377747 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.379915 kubelet[2966]: W0114 13:36:45.379809 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.380189 kubelet[2966]: E0114 13:36:45.380163 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.381997 kubelet[2966]: E0114 13:36:45.381834 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.382360 kubelet[2966]: W0114 13:36:45.382182 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.382965 kubelet[2966]: E0114 13:36:45.382633 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:36:45.384702 kubelet[2966]: E0114 13:36:45.383630 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.384702 kubelet[2966]: W0114 13:36:45.383648 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.386804 kubelet[2966]: E0114 13:36:45.386636 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.386804 kubelet[2966]: W0114 13:36:45.386656 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.387250 kubelet[2966]: E0114 13:36:45.387064 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.387250 kubelet[2966]: W0114 13:36:45.387110 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.387250 kubelet[2966]: E0114 13:36:45.387132 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.387250 kubelet[2966]: E0114 13:36:45.387148 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.388605 kubelet[2966]: E0114 13:36:45.388105 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.388605 kubelet[2966]: W0114 13:36:45.388124 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.388605 kubelet[2966]: E0114 13:36:45.388140 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.388891 kubelet[2966]: E0114 13:36:45.388871 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.389211 kubelet[2966]: W0114 13:36:45.389187 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.389814 kubelet[2966]: E0114 13:36:45.389510 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:36:45.390051 kubelet[2966]: E0114 13:36:45.390032 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.390160 kubelet[2966]: W0114 13:36:45.390139 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.390297 kubelet[2966]: E0114 13:36:45.390277 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.390582 kubelet[2966]: E0114 13:36:45.390532 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.394232 kubelet[2966]: E0114 13:36:45.394032 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.394232 kubelet[2966]: W0114 13:36:45.394059 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.394232 kubelet[2966]: E0114 13:36:45.394078 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.463764 containerd[1649]: time="2026-01-14T13:36:45.463432685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q9mmc,Uid:c403b253-22d2-4d22-bd17-95f0faaf7b51,Namespace:calico-system,Attempt:0,}" Jan 14 13:36:45.470341 kubelet[2966]: E0114 13:36:45.470167 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.470341 kubelet[2966]: W0114 13:36:45.470197 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.470341 kubelet[2966]: E0114 13:36:45.470223 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.471068 kubelet[2966]: E0114 13:36:45.470844 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.471068 kubelet[2966]: W0114 13:36:45.470862 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.471068 kubelet[2966]: E0114 13:36:45.470886 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:36:45.471650 kubelet[2966]: E0114 13:36:45.471295 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.471650 kubelet[2966]: W0114 13:36:45.471312 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.471650 kubelet[2966]: E0114 13:36:45.471337 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.472025 kubelet[2966]: E0114 13:36:45.471756 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.472025 kubelet[2966]: W0114 13:36:45.471771 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.472025 kubelet[2966]: E0114 13:36:45.471790 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.472347 kubelet[2966]: E0114 13:36:45.472308 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.472347 kubelet[2966]: W0114 13:36:45.472327 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.472472 kubelet[2966]: E0114 13:36:45.472364 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.472642 kubelet[2966]: E0114 13:36:45.472623 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.472642 kubelet[2966]: W0114 13:36:45.472641 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.472883 kubelet[2966]: E0114 13:36:45.472673 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.473091 kubelet[2966]: E0114 13:36:45.473067 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.473091 kubelet[2966]: W0114 13:36:45.473087 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.473338 kubelet[2966]: E0114 13:36:45.473111 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:36:45.473881 kubelet[2966]: E0114 13:36:45.473756 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.473881 kubelet[2966]: W0114 13:36:45.473774 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.473881 kubelet[2966]: E0114 13:36:45.473865 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.474127 kubelet[2966]: E0114 13:36:45.474105 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.474127 kubelet[2966]: W0114 13:36:45.474124 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.474319 kubelet[2966]: E0114 13:36:45.474300 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.474510 kubelet[2966]: E0114 13:36:45.474383 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.474510 kubelet[2966]: W0114 13:36:45.474395 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.474510 kubelet[2966]: E0114 13:36:45.474461 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.474712 kubelet[2966]: E0114 13:36:45.474689 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.474712 kubelet[2966]: W0114 13:36:45.474702 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.474864 kubelet[2966]: E0114 13:36:45.474793 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.475009 kubelet[2966]: E0114 13:36:45.474986 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.475009 kubelet[2966]: W0114 13:36:45.475005 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.475323 kubelet[2966]: E0114 13:36:45.475215 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:36:45.475518 kubelet[2966]: E0114 13:36:45.475335 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.475518 kubelet[2966]: W0114 13:36:45.475348 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.475518 kubelet[2966]: E0114 13:36:45.475381 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.475786 kubelet[2966]: E0114 13:36:45.475603 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.475786 kubelet[2966]: W0114 13:36:45.475615 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.475786 kubelet[2966]: E0114 13:36:45.475657 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.475977 kubelet[2966]: E0114 13:36:45.475872 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.475977 kubelet[2966]: W0114 13:36:45.475884 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.476214 kubelet[2966]: E0114 13:36:45.476109 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.476214 kubelet[2966]: W0114 13:36:45.476126 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.476214 kubelet[2966]: E0114 13:36:45.476153 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.476214 kubelet[2966]: E0114 13:36:45.476153 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.477773 kubelet[2966]: E0114 13:36:45.476400 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.477773 kubelet[2966]: W0114 13:36:45.476415 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.477773 kubelet[2966]: E0114 13:36:45.476435 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:36:45.477773 kubelet[2966]: E0114 13:36:45.476731 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.477773 kubelet[2966]: W0114 13:36:45.476745 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.477773 kubelet[2966]: E0114 13:36:45.476773 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.477773 kubelet[2966]: E0114 13:36:45.477773 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.478146 kubelet[2966]: W0114 13:36:45.477787 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.480149 kubelet[2966]: E0114 13:36:45.480121 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.480327 kubelet[2966]: E0114 13:36:45.480289 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.480327 kubelet[2966]: W0114 13:36:45.480315 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.480327 kubelet[2966]: E0114 13:36:45.480338 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.481753 kubelet[2966]: E0114 13:36:45.481728 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.481753 kubelet[2966]: W0114 13:36:45.481748 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.481973 kubelet[2966]: E0114 13:36:45.481784 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.483493 kubelet[2966]: E0114 13:36:45.483365 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.483493 kubelet[2966]: W0114 13:36:45.483400 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.483493 kubelet[2966]: E0114 13:36:45.483449 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:36:45.486395 kubelet[2966]: E0114 13:36:45.486341 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.486395 kubelet[2966]: W0114 13:36:45.486361 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.486395 kubelet[2966]: E0114 13:36:45.486383 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.490065 kubelet[2966]: E0114 13:36:45.488692 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.490065 kubelet[2966]: W0114 13:36:45.488713 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.490065 kubelet[2966]: E0114 13:36:45.488730 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.490366 kubelet[2966]: E0114 13:36:45.490305 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.490792 kubelet[2966]: W0114 13:36:45.490769 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.490975 kubelet[2966]: E0114 13:36:45.490952 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.498320 containerd[1649]: time="2026-01-14T13:36:45.498197023Z" level=info msg="connecting to shim 0bc022cd08bc941522c23e510990bfb1357a2cf03586ab99a1f5a2453a6b32c4" address="unix:///run/containerd/s/caaa8c7b08ce88e535bdb6a6b0bca171157400af0d0de636c06d7a431e617e1d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:36:45.505666 kubelet[2966]: E0114 13:36:45.505642 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:45.505916 kubelet[2966]: W0114 13:36:45.505787 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:45.505916 kubelet[2966]: E0114 13:36:45.505838 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:45.544000 systemd[1]: Started cri-containerd-0bc022cd08bc941522c23e510990bfb1357a2cf03586ab99a1f5a2453a6b32c4.scope - libcontainer container 0bc022cd08bc941522c23e510990bfb1357a2cf03586ab99a1f5a2453a6b32c4. 
Jan 14 13:36:45.566000 audit: BPF prog-id=155 op=LOAD Jan 14 13:36:45.567000 audit: BPF prog-id=156 op=LOAD Jan 14 13:36:45.567000 audit[3435]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=3422 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:45.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062633032326364303862633934313532326332336535313039393062 Jan 14 13:36:45.567000 audit: BPF prog-id=156 op=UNLOAD Jan 14 13:36:45.567000 audit[3435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:45.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062633032326364303862633934313532326332336535313039393062 Jan 14 13:36:45.567000 audit: BPF prog-id=157 op=LOAD Jan 14 13:36:45.567000 audit[3435]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3422 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:45.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062633032326364303862633934313532326332336535313039393062 Jan 14 13:36:45.568000 audit: BPF prog-id=158 op=LOAD Jan 14 13:36:45.568000 audit[3435]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3422 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:45.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062633032326364303862633934313532326332336535313039393062 Jan 14 13:36:45.568000 audit: BPF prog-id=158 op=UNLOAD Jan 14 13:36:45.568000 audit[3435]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:45.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062633032326364303862633934313532326332336535313039393062 Jan 14 13:36:45.568000 audit: BPF prog-id=157 op=UNLOAD Jan 14 13:36:45.568000 audit[3435]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:45.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062633032326364303862633934313532326332336535313039393062 Jan 14 13:36:45.568000 audit: BPF prog-id=159 op=LOAD Jan 14 13:36:45.568000 audit[3435]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3422 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:45.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062633032326364303862633934313532326332336535313039393062 Jan 14 13:36:45.593294 containerd[1649]: time="2026-01-14T13:36:45.593249134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q9mmc,Uid:c403b253-22d2-4d22-bd17-95f0faaf7b51,Namespace:calico-system,Attempt:0,} returns sandbox id \"0bc022cd08bc941522c23e510990bfb1357a2cf03586ab99a1f5a2453a6b32c4\"" Jan 14 13:36:45.595682 containerd[1649]: time="2026-01-14T13:36:45.595618944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 13:36:46.206000 audit[3460]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3460 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:46.206000 audit[3460]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc488c4840 a2=0 a3=7ffc488c482c items=0 ppid=3099 pid=3460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:46.206000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:46.211000 audit[3460]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3460 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:46.211000 audit[3460]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc488c4840 a2=0 a3=0 items=0 ppid=3099 pid=3460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:46.211000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:46.269979 kubelet[2966]: E0114 13:36:46.269892 2966 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jan 14 13:36:46.270481 kubelet[2966]: E0114 13:36:46.270039 2966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c64e33b-9783-407b-a343-6cdb99d96d98-typha-certs podName:1c64e33b-9783-407b-a343-6cdb99d96d98 nodeName:}" 
failed. No retries permitted until 2026-01-14 13:36:46.76999542 +0000 UTC m=+26.344068999 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/1c64e33b-9783-407b-a343-6cdb99d96d98-typha-certs") pod "calico-typha-64479c4d78-q6rk7" (UID: "1c64e33b-9783-407b-a343-6cdb99d96d98") : failed to sync secret cache: timed out waiting for the condition Jan 14 13:36:46.279637 kubelet[2966]: E0114 13:36:46.279607 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:46.279637 kubelet[2966]: W0114 13:36:46.279632 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:46.279799 kubelet[2966]: E0114 13:36:46.279722 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:46.381073 kubelet[2966]: E0114 13:36:46.381023 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:46.381073 kubelet[2966]: W0114 13:36:46.381061 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:46.381312 kubelet[2966]: E0114 13:36:46.381087 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:46.482517 kubelet[2966]: E0114 13:36:46.482401 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:46.482517 kubelet[2966]: W0114 13:36:46.482428 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:46.482517 kubelet[2966]: E0114 13:36:46.482452 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:46.583608 kubelet[2966]: E0114 13:36:46.583534 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:46.583608 kubelet[2966]: W0114 13:36:46.583590 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:46.583608 kubelet[2966]: E0114 13:36:46.583616 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:36:46.685019 kubelet[2966]: E0114 13:36:46.684958 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:46.685318 kubelet[2966]: W0114 13:36:46.684989 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:46.685318 kubelet[2966]: E0114 13:36:46.685234 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:46.786438 kubelet[2966]: E0114 13:36:46.786195 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:46.786438 kubelet[2966]: W0114 13:36:46.786238 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:46.786438 kubelet[2966]: E0114 13:36:46.786265 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:46.787259 kubelet[2966]: E0114 13:36:46.787216 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:46.787458 kubelet[2966]: W0114 13:36:46.787374 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:46.787458 kubelet[2966]: E0114 13:36:46.787393 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:46.787948 kubelet[2966]: E0114 13:36:46.787930 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:46.788138 kubelet[2966]: W0114 13:36:46.788057 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:46.788138 kubelet[2966]: E0114 13:36:46.788089 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:46.788686 kubelet[2966]: E0114 13:36:46.788643 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:46.788686 kubelet[2966]: W0114 13:36:46.788660 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:46.788881 kubelet[2966]: E0114 13:36:46.788813 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 13:36:46.789605 kubelet[2966]: E0114 13:36:46.789279 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:46.789605 kubelet[2966]: W0114 13:36:46.789295 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:46.789605 kubelet[2966]: E0114 13:36:46.789320 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:46.798849 kubelet[2966]: E0114 13:36:46.798828 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 13:36:46.798949 kubelet[2966]: W0114 13:36:46.798848 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 13:36:46.798949 kubelet[2966]: E0114 13:36:46.798931 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 13:36:46.828460 containerd[1649]: time="2026-01-14T13:36:46.828410335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64479c4d78-q6rk7,Uid:1c64e33b-9783-407b-a343-6cdb99d96d98,Namespace:calico-system,Attempt:0,}" Jan 14 13:36:46.859096 containerd[1649]: time="2026-01-14T13:36:46.858917078Z" level=info msg="connecting to shim 54275ece110bbefe07fdde075514e3e61ee7ef17ccb10035adb4b15e19bd7d8b" address="unix:///run/containerd/s/eae2c2bf0007dce26a0b357168c217081ae03a1c91c6c2fae6ab7b75d7ab7128" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:36:46.900850 systemd[1]: Started cri-containerd-54275ece110bbefe07fdde075514e3e61ee7ef17ccb10035adb4b15e19bd7d8b.scope - libcontainer container 54275ece110bbefe07fdde075514e3e61ee7ef17ccb10035adb4b15e19bd7d8b. 
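The block of driver-call.go / plugins.go errors above is the kubelet probing its FlexVolume plugin directory: for every entry under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ it executes the driver binary with the argument "init" and expects a JSON status object on stdout, so a missing binary produces the "executable file not found" warning and empty output produces "unexpected end of JSON input". The nodeagent~uds/uds binary is what the pod2daemon-flexvol image pulled above installs, so these probes are expected to keep failing only until that init container has run. A minimal Python sketch of the same probe; the path is taken from the log, and the kubelet's exact behaviour is only approximated here:

    import json
    import subprocess

    # Path taken from the kubelet errors above; the binary is normally installed
    # by Calico's pod2daemon-flexvol init container and does not exist yet here.
    DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    def probe_flexvolume(driver: str = DRIVER):
        # Rough equivalent of the kubelet's driver call: run "<driver> init" and
        # parse the JSON status object the plugin prints on stdout.
        try:
            out = subprocess.run([driver, "init"], capture_output=True, text=True, timeout=10)
        except FileNotFoundError:
            # Corresponds to: 'executable file not found in $PATH, output: ""'
            return None
        try:
            return json.loads(out.stdout)  # empty output -> "unexpected end of JSON input" in Go
        except json.JSONDecodeError:
            return None

    if __name__ == "__main__":
        print(probe_flexvolume())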
Jan 14 13:36:46.920000 audit: BPF prog-id=160 op=LOAD Jan 14 13:36:46.921000 audit: BPF prog-id=161 op=LOAD Jan 14 13:36:46.921000 audit[3493]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3482 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:46.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534323735656365313130626265666530376664646530373535313465 Jan 14 13:36:46.921000 audit: BPF prog-id=161 op=UNLOAD Jan 14 13:36:46.921000 audit[3493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:46.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534323735656365313130626265666530376664646530373535313465 Jan 14 13:36:46.921000 audit: BPF prog-id=162 op=LOAD Jan 14 13:36:46.921000 audit[3493]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3482 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:46.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534323735656365313130626265666530376664646530373535313465 Jan 14 13:36:46.921000 audit: BPF prog-id=163 op=LOAD Jan 14 13:36:46.921000 audit[3493]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3482 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:46.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534323735656365313130626265666530376664646530373535313465 Jan 14 13:36:46.921000 audit: BPF prog-id=163 op=UNLOAD Jan 14 13:36:46.921000 audit[3493]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:46.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534323735656365313130626265666530376664646530373535313465 Jan 14 13:36:46.921000 audit: BPF prog-id=162 op=UNLOAD Jan 14 13:36:46.921000 audit[3493]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:46.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534323735656365313130626265666530376664646530373535313465 Jan 14 13:36:46.921000 audit: BPF prog-id=164 op=LOAD Jan 14 13:36:46.921000 audit[3493]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3482 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:46.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534323735656365313130626265666530376664646530373535313465 Jan 14 13:36:46.977475 containerd[1649]: time="2026-01-14T13:36:46.977189124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64479c4d78-q6rk7,Uid:1c64e33b-9783-407b-a343-6cdb99d96d98,Namespace:calico-system,Attempt:0,} returns sandbox id \"54275ece110bbefe07fdde075514e3e61ee7ef17ccb10035adb4b15e19bd7d8b\"" Jan 14 13:36:47.294976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1969964762.mount: Deactivated successfully. Jan 14 13:36:47.484602 containerd[1649]: time="2026-01-14T13:36:47.484452997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:47.486950 containerd[1649]: time="2026-01-14T13:36:47.486900374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Jan 14 13:36:47.488224 containerd[1649]: time="2026-01-14T13:36:47.487225907Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:47.491655 containerd[1649]: time="2026-01-14T13:36:47.490773912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:47.492067 containerd[1649]: time="2026-01-14T13:36:47.492033971Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.895011774s" Jan 14 13:36:47.492211 containerd[1649]: time="2026-01-14T13:36:47.492185324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 13:36:47.496052 containerd[1649]: time="2026-01-14T13:36:47.496018939Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 13:36:47.496595 containerd[1649]: time="2026-01-14T13:36:47.496517024Z" level=info msg="CreateContainer within sandbox \"0bc022cd08bc941522c23e510990bfb1357a2cf03586ab99a1f5a2453a6b32c4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 13:36:47.538308 containerd[1649]: time="2026-01-14T13:36:47.537859413Z" level=info msg="Container 1ad1b68c70342d1491c76c95bf1142aa7b5c9dbcc1711b659bd8591444c4141a: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:36:47.545094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3209193914.mount: Deactivated successfully. Jan 14 13:36:47.560055 containerd[1649]: time="2026-01-14T13:36:47.559523954Z" level=info msg="CreateContainer within sandbox \"0bc022cd08bc941522c23e510990bfb1357a2cf03586ab99a1f5a2453a6b32c4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1ad1b68c70342d1491c76c95bf1142aa7b5c9dbcc1711b659bd8591444c4141a\"" Jan 14 13:36:47.561773 containerd[1649]: time="2026-01-14T13:36:47.561725738Z" level=info msg="StartContainer for \"1ad1b68c70342d1491c76c95bf1142aa7b5c9dbcc1711b659bd8591444c4141a\"" Jan 14 13:36:47.564884 containerd[1649]: time="2026-01-14T13:36:47.564775391Z" level=info msg="connecting to shim 1ad1b68c70342d1491c76c95bf1142aa7b5c9dbcc1711b659bd8591444c4141a" address="unix:///run/containerd/s/caaa8c7b08ce88e535bdb6a6b0bca171157400af0d0de636c06d7a431e617e1d" protocol=ttrpc version=3 Jan 14 13:36:47.596968 systemd[1]: Started cri-containerd-1ad1b68c70342d1491c76c95bf1142aa7b5c9dbcc1711b659bd8591444c4141a.scope - libcontainer container 1ad1b68c70342d1491c76c95bf1142aa7b5c9dbcc1711b659bd8591444c4141a. Jan 14 13:36:47.639468 kubelet[2966]: E0114 13:36:47.639153 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:36:47.669000 audit: BPF prog-id=165 op=LOAD Jan 14 13:36:47.669000 audit[3525]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3422 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:47.669000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161643162363863373033343264313439316337366339356266313134 Jan 14 13:36:47.670000 audit: BPF prog-id=166 op=LOAD Jan 14 13:36:47.670000 audit[3525]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3422 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:47.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161643162363863373033343264313439316337366339356266313134 Jan 14 13:36:47.670000 audit: BPF prog-id=166 op=UNLOAD Jan 14 13:36:47.670000 audit[3525]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:47.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161643162363863373033343264313439316337366339356266313134 Jan 14 13:36:47.670000 audit: BPF prog-id=165 op=UNLOAD Jan 14 13:36:47.670000 audit[3525]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:47.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161643162363863373033343264313439316337366339356266313134 Jan 14 13:36:47.670000 audit: BPF prog-id=167 op=LOAD Jan 14 13:36:47.670000 audit[3525]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3422 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:47.670000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161643162363863373033343264313439316337366339356266313134 Jan 14 13:36:47.709964 containerd[1649]: time="2026-01-14T13:36:47.709906262Z" level=info msg="StartContainer for \"1ad1b68c70342d1491c76c95bf1142aa7b5c9dbcc1711b659bd8591444c4141a\" returns successfully" Jan 14 13:36:47.747499 systemd[1]: cri-containerd-1ad1b68c70342d1491c76c95bf1142aa7b5c9dbcc1711b659bd8591444c4141a.scope: Deactivated successfully. Jan 14 13:36:47.748772 systemd[1]: cri-containerd-1ad1b68c70342d1491c76c95bf1142aa7b5c9dbcc1711b659bd8591444c4141a.scope: Consumed 62ms CPU time, 6M memory peak, 4.6M written to disk. Jan 14 13:36:47.753000 audit: BPF prog-id=167 op=UNLOAD Jan 14 13:36:47.794996 containerd[1649]: time="2026-01-14T13:36:47.794856706Z" level=info msg="received container exit event container_id:\"1ad1b68c70342d1491c76c95bf1142aa7b5c9dbcc1711b659bd8591444c4141a\" id:\"1ad1b68c70342d1491c76c95bf1142aa7b5c9dbcc1711b659bd8591444c4141a\" pid:3538 exited_at:{seconds:1768397807 nanos:755485961}" Jan 14 13:36:47.842910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ad1b68c70342d1491c76c95bf1142aa7b5c9dbcc1711b659bd8591444c4141a-rootfs.mount: Deactivated successfully. 
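The proctitle= fields in the audit records are the audited process's command line, hex-encoded with NUL bytes separating the arguments; decoding them turns the long hex strings above back into the underlying runc and iptables-restore invocations. A small Python helper, using the iptables-restore record logged at 13:36:46 as the sample input:

    def decode_proctitle(hex_str: str) -> str:
        # auditd hex-encodes the command line; NUL bytes separate the argv entries.
        return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode("utf-8", errors="replace")

    if __name__ == "__main__":
        sample = ("69707461626C65732D726573746F7265002D770035002D5700"
                  "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
        print(decode_proctitle(sample))  # iptables-restore -w 5 -W 100000 --noflush --counters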
Jan 14 13:36:49.638530 kubelet[2966]: E0114 13:36:49.638358 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:36:50.956799 containerd[1649]: time="2026-01-14T13:36:50.956723555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:50.958539 containerd[1649]: time="2026-01-14T13:36:50.958479978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 14 13:36:50.959216 containerd[1649]: time="2026-01-14T13:36:50.959140663Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:50.962610 containerd[1649]: time="2026-01-14T13:36:50.961993418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:50.964038 containerd[1649]: time="2026-01-14T13:36:50.964001442Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.467941335s" Jan 14 13:36:50.964119 containerd[1649]: time="2026-01-14T13:36:50.964047022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 13:36:50.971707 containerd[1649]: time="2026-01-14T13:36:50.971668652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 13:36:50.989905 containerd[1649]: time="2026-01-14T13:36:50.989770878Z" level=info msg="CreateContainer within sandbox \"54275ece110bbefe07fdde075514e3e61ee7ef17ccb10035adb4b15e19bd7d8b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 13:36:51.000593 containerd[1649]: time="2026-01-14T13:36:50.998907383Z" level=info msg="Container b65b2acb014e4232f7dc1dbb76c886f56a09d7cd822c7c04ed098b924ad1a8e0: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:36:51.006289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3281952192.mount: Deactivated successfully. 
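The var-lib-containerd-tmpmounts-... units that systemd reports as deactivated are transient mount units for containerd's temporary mounts; systemd encodes the mount point in the unit name, turning "/" into "-" and escaping a literal "-" (and other special bytes) as \xNN. A small sketch of that decoding, under those assumptions, using the unit name from the log:

    import re

    def mount_unit_to_path(unit: str) -> str:
        # Reverse systemd's unit-name encoding: strip ".mount", map "-" back to "/",
        # and expand \xNN escapes (e.g. \x2d for a literal dash in a path component).
        name = unit[:-len(".mount")] if unit.endswith(".mount") else unit

        def unescape(part: str) -> str:
            return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), part)

        return "/" + "/".join(unescape(p) for p in name.split("-"))

    print(mount_unit_to_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount3281952192.mount"))
    # -> /var/lib/containerd/tmpmounts/containerd-mount3281952192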
Jan 14 13:36:51.019899 containerd[1649]: time="2026-01-14T13:36:51.019787810Z" level=info msg="CreateContainer within sandbox \"54275ece110bbefe07fdde075514e3e61ee7ef17ccb10035adb4b15e19bd7d8b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b65b2acb014e4232f7dc1dbb76c886f56a09d7cd822c7c04ed098b924ad1a8e0\"" Jan 14 13:36:51.020804 containerd[1649]: time="2026-01-14T13:36:51.020730045Z" level=info msg="StartContainer for \"b65b2acb014e4232f7dc1dbb76c886f56a09d7cd822c7c04ed098b924ad1a8e0\"" Jan 14 13:36:51.036156 containerd[1649]: time="2026-01-14T13:36:51.036060003Z" level=info msg="connecting to shim b65b2acb014e4232f7dc1dbb76c886f56a09d7cd822c7c04ed098b924ad1a8e0" address="unix:///run/containerd/s/eae2c2bf0007dce26a0b357168c217081ae03a1c91c6c2fae6ab7b75d7ab7128" protocol=ttrpc version=3 Jan 14 13:36:51.088898 systemd[1]: Started cri-containerd-b65b2acb014e4232f7dc1dbb76c886f56a09d7cd822c7c04ed098b924ad1a8e0.scope - libcontainer container b65b2acb014e4232f7dc1dbb76c886f56a09d7cd822c7c04ed098b924ad1a8e0. Jan 14 13:36:51.129546 kernel: kauditd_printk_skb: 80 callbacks suppressed Jan 14 13:36:51.129823 kernel: audit: type=1334 audit(1768397811.121:563): prog-id=168 op=LOAD Jan 14 13:36:51.121000 audit: BPF prog-id=168 op=LOAD Jan 14 13:36:51.130000 audit: BPF prog-id=169 op=LOAD Jan 14 13:36:51.134588 kernel: audit: type=1334 audit(1768397811.130:564): prog-id=169 op=LOAD Jan 14 13:36:51.130000 audit[3585]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3482 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:51.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236356232616362303134653432333266376463316462623736633838 Jan 14 13:36:51.142129 kernel: audit: type=1300 audit(1768397811.130:564): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3482 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:51.142367 kernel: audit: type=1327 audit(1768397811.130:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236356232616362303134653432333266376463316462623736633838 Jan 14 13:36:51.131000 audit: BPF prog-id=169 op=UNLOAD Jan 14 13:36:51.131000 audit[3585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:51.149012 kernel: audit: type=1334 audit(1768397811.131:565): prog-id=169 op=UNLOAD Jan 14 13:36:51.149088 kernel: audit: type=1300 audit(1768397811.131:565): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 14 13:36:51.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236356232616362303134653432333266376463316462623736633838 Jan 14 13:36:51.154155 kernel: audit: type=1327 audit(1768397811.131:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236356232616362303134653432333266376463316462623736633838 Jan 14 13:36:51.132000 audit: BPF prog-id=170 op=LOAD Jan 14 13:36:51.158101 kernel: audit: type=1334 audit(1768397811.132:566): prog-id=170 op=LOAD Jan 14 13:36:51.132000 audit[3585]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3482 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:51.160656 kernel: audit: type=1300 audit(1768397811.132:566): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3482 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:51.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236356232616362303134653432333266376463316462623736633838 Jan 14 13:36:51.165821 kernel: audit: type=1327 audit(1768397811.132:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236356232616362303134653432333266376463316462623736633838 Jan 14 13:36:51.132000 audit: BPF prog-id=171 op=LOAD Jan 14 13:36:51.132000 audit[3585]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3482 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:51.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236356232616362303134653432333266376463316462623736633838 Jan 14 13:36:51.132000 audit: BPF prog-id=171 op=UNLOAD Jan 14 13:36:51.132000 audit[3585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:51.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236356232616362303134653432333266376463316462623736633838 Jan 14 13:36:51.132000 audit: BPF prog-id=170 op=UNLOAD Jan 14 
13:36:51.132000 audit[3585]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:51.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236356232616362303134653432333266376463316462623736633838 Jan 14 13:36:51.132000 audit: BPF prog-id=172 op=LOAD Jan 14 13:36:51.132000 audit[3585]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3482 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:51.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236356232616362303134653432333266376463316462623736633838 Jan 14 13:36:51.217814 containerd[1649]: time="2026-01-14T13:36:51.217425647Z" level=info msg="StartContainer for \"b65b2acb014e4232f7dc1dbb76c886f56a09d7cd822c7c04ed098b924ad1a8e0\" returns successfully" Jan 14 13:36:51.638651 kubelet[2966]: E0114 13:36:51.638374 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:36:51.819603 kubelet[2966]: I0114 13:36:51.818855 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64479c4d78-q6rk7" podStartSLOduration=3.831783482 podStartE2EDuration="7.818773324s" podCreationTimestamp="2026-01-14 13:36:44 +0000 UTC" firstStartedPulling="2026-01-14 13:36:46.979592458 +0000 UTC m=+26.553666035" lastFinishedPulling="2026-01-14 13:36:50.96658229 +0000 UTC m=+30.540655877" observedRunningTime="2026-01-14 13:36:51.818405129 +0000 UTC m=+31.392478741" watchObservedRunningTime="2026-01-14 13:36:51.818773324 +0000 UTC m=+31.392846916" Jan 14 13:36:51.863000 audit[3620]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:51.863000 audit[3620]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcd9bdb590 a2=0 a3=7ffcd9bdb57c items=0 ppid=3099 pid=3620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:51.863000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:51.870000 audit[3620]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:36:51.870000 audit[3620]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcd9bdb590 a2=0 a3=7ffcd9bdb57c items=0 ppid=3099 pid=3620 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:51.870000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:36:53.642679 kubelet[2966]: E0114 13:36:53.642038 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:36:55.648318 kubelet[2966]: E0114 13:36:55.648176 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:36:56.863352 containerd[1649]: time="2026-01-14T13:36:56.863293921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:56.864595 containerd[1649]: time="2026-01-14T13:36:56.864441659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 14 13:36:56.865644 containerd[1649]: time="2026-01-14T13:36:56.865608200Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:56.868616 containerd[1649]: time="2026-01-14T13:36:56.868537206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:36:56.869684 containerd[1649]: time="2026-01-14T13:36:56.869500711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.89777759s" Jan 14 13:36:56.869684 containerd[1649]: time="2026-01-14T13:36:56.869539186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 13:36:56.876221 containerd[1649]: time="2026-01-14T13:36:56.875583564Z" level=info msg="CreateContainer within sandbox \"0bc022cd08bc941522c23e510990bfb1357a2cf03586ab99a1f5a2453a6b32c4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 13:36:56.891780 containerd[1649]: time="2026-01-14T13:36:56.891737716Z" level=info msg="Container d99fc05b0f0c38fa1b3b50d85236776dec79a3fd0917728cd7453bfdf613e6e5: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:36:56.912749 containerd[1649]: time="2026-01-14T13:36:56.912697346Z" level=info msg="CreateContainer within sandbox \"0bc022cd08bc941522c23e510990bfb1357a2cf03586ab99a1f5a2453a6b32c4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d99fc05b0f0c38fa1b3b50d85236776dec79a3fd0917728cd7453bfdf613e6e5\"" Jan 14 13:36:56.913858 
containerd[1649]: time="2026-01-14T13:36:56.913827000Z" level=info msg="StartContainer for \"d99fc05b0f0c38fa1b3b50d85236776dec79a3fd0917728cd7453bfdf613e6e5\"" Jan 14 13:36:56.917636 containerd[1649]: time="2026-01-14T13:36:56.917548206Z" level=info msg="connecting to shim d99fc05b0f0c38fa1b3b50d85236776dec79a3fd0917728cd7453bfdf613e6e5" address="unix:///run/containerd/s/caaa8c7b08ce88e535bdb6a6b0bca171157400af0d0de636c06d7a431e617e1d" protocol=ttrpc version=3 Jan 14 13:36:56.953829 systemd[1]: Started cri-containerd-d99fc05b0f0c38fa1b3b50d85236776dec79a3fd0917728cd7453bfdf613e6e5.scope - libcontainer container d99fc05b0f0c38fa1b3b50d85236776dec79a3fd0917728cd7453bfdf613e6e5. Jan 14 13:36:57.038629 kernel: kauditd_printk_skb: 18 callbacks suppressed Jan 14 13:36:57.039049 kernel: audit: type=1334 audit(1768397817.032:573): prog-id=173 op=LOAD Jan 14 13:36:57.032000 audit: BPF prog-id=173 op=LOAD Jan 14 13:36:57.032000 audit[3630]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3422 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:57.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439396663303562306630633338666131623362353064383532333637 Jan 14 13:36:57.047162 kernel: audit: type=1300 audit(1768397817.032:573): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3422 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:57.047295 kernel: audit: type=1327 audit(1768397817.032:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439396663303562306630633338666131623362353064383532333637 Jan 14 13:36:57.038000 audit: BPF prog-id=174 op=LOAD Jan 14 13:36:57.051068 kernel: audit: type=1334 audit(1768397817.038:574): prog-id=174 op=LOAD Jan 14 13:36:57.038000 audit[3630]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3422 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:57.053812 kernel: audit: type=1300 audit(1768397817.038:574): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3422 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:57.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439396663303562306630633338666131623362353064383532333637 Jan 14 13:36:57.059092 kernel: audit: type=1327 audit(1768397817.038:574): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439396663303562306630633338666131623362353064383532333637 Jan 14 13:36:57.039000 audit: BPF prog-id=174 op=UNLOAD Jan 14 13:36:57.062987 kernel: audit: type=1334 audit(1768397817.039:575): prog-id=174 op=UNLOAD Jan 14 13:36:57.039000 audit[3630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:57.070593 kernel: audit: type=1300 audit(1768397817.039:575): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:57.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439396663303562306630633338666131623362353064383532333637 Jan 14 13:36:57.076632 kernel: audit: type=1327 audit(1768397817.039:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439396663303562306630633338666131623362353064383532333637 Jan 14 13:36:57.039000 audit: BPF prog-id=173 op=UNLOAD Jan 14 13:36:57.039000 audit[3630]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:57.078625 kernel: audit: type=1334 audit(1768397817.039:576): prog-id=173 op=UNLOAD Jan 14 13:36:57.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439396663303562306630633338666131623362353064383532333637 Jan 14 13:36:57.039000 audit: BPF prog-id=175 op=LOAD Jan 14 13:36:57.039000 audit[3630]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3422 pid=3630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:36:57.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6439396663303562306630633338666131623362353064383532333637 Jan 14 13:36:57.122406 containerd[1649]: time="2026-01-14T13:36:57.122340438Z" level=info msg="StartContainer for \"d99fc05b0f0c38fa1b3b50d85236776dec79a3fd0917728cd7453bfdf613e6e5\" returns successfully" Jan 14 13:36:57.639369 kubelet[2966]: E0114 13:36:57.638834 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:36:58.131640 systemd[1]: cri-containerd-d99fc05b0f0c38fa1b3b50d85236776dec79a3fd0917728cd7453bfdf613e6e5.scope: Deactivated successfully. Jan 14 13:36:58.134000 audit: BPF prog-id=175 op=UNLOAD Jan 14 13:36:58.132838 systemd[1]: cri-containerd-d99fc05b0f0c38fa1b3b50d85236776dec79a3fd0917728cd7453bfdf613e6e5.scope: Consumed 755ms CPU time, 166.3M memory peak, 8.8M read from disk, 171.3M written to disk. Jan 14 13:36:58.165551 containerd[1649]: time="2026-01-14T13:36:58.165369065Z" level=info msg="received container exit event container_id:\"d99fc05b0f0c38fa1b3b50d85236776dec79a3fd0917728cd7453bfdf613e6e5\" id:\"d99fc05b0f0c38fa1b3b50d85236776dec79a3fd0917728cd7453bfdf613e6e5\" pid:3644 exited_at:{seconds:1768397818 nanos:153274523}" Jan 14 13:36:58.191025 kubelet[2966]: I0114 13:36:58.189206 2966 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 13:36:58.264108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d99fc05b0f0c38fa1b3b50d85236776dec79a3fd0917728cd7453bfdf613e6e5-rootfs.mount: Deactivated successfully. Jan 14 13:36:58.267935 systemd[1]: Created slice kubepods-burstable-podf6b96f88_9628_4cd4_a240_d0aa195ee125.slice - libcontainer container kubepods-burstable-podf6b96f88_9628_4cd4_a240_d0aa195ee125.slice. Jan 14 13:36:58.287065 systemd[1]: Created slice kubepods-besteffort-pod09be4418_7a52_4e16_b65e_453c324deb2d.slice - libcontainer container kubepods-besteffort-pod09be4418_7a52_4e16_b65e_453c324deb2d.slice. Jan 14 13:36:58.337371 systemd[1]: Created slice kubepods-besteffort-pod9af9fcdf_2905_4c21_b8a3_70ab543f6a40.slice - libcontainer container kubepods-besteffort-pod9af9fcdf_2905_4c21_b8a3_70ab543f6a40.slice. Jan 14 13:36:58.351881 systemd[1]: Created slice kubepods-burstable-pod7031cc44_2bc3_4894_9b05_65dab6c01c28.slice - libcontainer container kubepods-burstable-pod7031cc44_2bc3_4894_9b05_65dab6c01c28.slice. Jan 14 13:36:58.371199 systemd[1]: Created slice kubepods-besteffort-pod23681fa6_27ca_4d7d_86fa_c674a7318b4d.slice - libcontainer container kubepods-besteffort-pod23681fa6_27ca_4d7d_86fa_c674a7318b4d.slice. Jan 14 13:36:58.386155 systemd[1]: Created slice kubepods-besteffort-pod55d7c70a_6e6b_4527_8616_3cbcdf2d3394.slice - libcontainer container kubepods-besteffort-pod55d7c70a_6e6b_4527_8616_3cbcdf2d3394.slice. 
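The kubepods-*.slice units created here follow the kubelet's systemd cgroup-driver naming: one transient slice per pod, with the QoS class (burstable or besteffort in this boot; guaranteed pods sit directly under kubepods.slice) folded into the prefix and the dashes of the pod UID replaced by underscores. A short sketch of that mapping, checked against the coredns pod UID from the log:

    def pod_slice_name(pod_uid: str, qos_class: str) -> str:
        # systemd cgroup driver naming: QoS class in the prefix, pod UID with "-" -> "_".
        prefix = "kubepods" if qos_class == "guaranteed" else f"kubepods-{qos_class}"
        return f"{prefix}-pod{pod_uid.replace('-', '_')}.slice"

    # Matches the "Created slice" entries above:
    assert pod_slice_name("f6b96f88-9628-4cd4-a240-d0aa195ee125", "burstable") == \
        "kubepods-burstable-podf6b96f88_9628_4cd4_a240_d0aa195ee125.slice"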
Jan 14 13:36:58.391540 kubelet[2966]: I0114 13:36:58.391008 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7031cc44-2bc3-4894-9b05-65dab6c01c28-config-volume\") pod \"coredns-668d6bf9bc-6s6kg\" (UID: \"7031cc44-2bc3-4894-9b05-65dab6c01c28\") " pod="kube-system/coredns-668d6bf9bc-6s6kg" Jan 14 13:36:58.394100 kubelet[2966]: I0114 13:36:58.393755 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6v8v\" (UniqueName: \"kubernetes.io/projected/23681fa6-27ca-4d7d-86fa-c674a7318b4d-kube-api-access-x6v8v\") pod \"calico-apiserver-556cb8cff8-42gh4\" (UID: \"23681fa6-27ca-4d7d-86fa-c674a7318b4d\") " pod="calico-apiserver/calico-apiserver-556cb8cff8-42gh4" Jan 14 13:36:58.399916 kubelet[2966]: I0114 13:36:58.396628 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfm86\" (UniqueName: \"kubernetes.io/projected/55d7c70a-6e6b-4527-8616-3cbcdf2d3394-kube-api-access-bfm86\") pod \"goldmane-666569f655-wtc4j\" (UID: \"55d7c70a-6e6b-4527-8616-3cbcdf2d3394\") " pod="calico-system/goldmane-666569f655-wtc4j" Jan 14 13:36:58.399916 kubelet[2966]: I0114 13:36:58.396670 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6b96f88-9628-4cd4-a240-d0aa195ee125-config-volume\") pod \"coredns-668d6bf9bc-tndpb\" (UID: \"f6b96f88-9628-4cd4-a240-d0aa195ee125\") " pod="kube-system/coredns-668d6bf9bc-tndpb" Jan 14 13:36:58.399916 kubelet[2966]: I0114 13:36:58.396722 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/55d7c70a-6e6b-4527-8616-3cbcdf2d3394-config\") pod \"goldmane-666569f655-wtc4j\" (UID: \"55d7c70a-6e6b-4527-8616-3cbcdf2d3394\") " pod="calico-system/goldmane-666569f655-wtc4j" Jan 14 13:36:58.399916 kubelet[2966]: I0114 13:36:58.396784 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/63b80aee-dca1-4c6e-95c1-ebc4781e5795-whisker-backend-key-pair\") pod \"whisker-568d698658-t9psg\" (UID: \"63b80aee-dca1-4c6e-95c1-ebc4781e5795\") " pod="calico-system/whisker-568d698658-t9psg" Jan 14 13:36:58.399916 kubelet[2966]: I0114 13:36:58.396830 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4mw6\" (UniqueName: \"kubernetes.io/projected/f6b96f88-9628-4cd4-a240-d0aa195ee125-kube-api-access-w4mw6\") pod \"coredns-668d6bf9bc-tndpb\" (UID: \"f6b96f88-9628-4cd4-a240-d0aa195ee125\") " pod="kube-system/coredns-668d6bf9bc-tndpb" Jan 14 13:36:58.398208 systemd[1]: Created slice kubepods-besteffort-pod63b80aee_dca1_4c6e_95c1_ebc4781e5795.slice - libcontainer container kubepods-besteffort-pod63b80aee_dca1_4c6e_95c1_ebc4781e5795.slice. 
Jan 14 13:36:58.400521 kubelet[2966]: I0114 13:36:58.396877 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63b80aee-dca1-4c6e-95c1-ebc4781e5795-whisker-ca-bundle\") pod \"whisker-568d698658-t9psg\" (UID: \"63b80aee-dca1-4c6e-95c1-ebc4781e5795\") " pod="calico-system/whisker-568d698658-t9psg" Jan 14 13:36:58.400521 kubelet[2966]: I0114 13:36:58.396924 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ffp4\" (UniqueName: \"kubernetes.io/projected/7031cc44-2bc3-4894-9b05-65dab6c01c28-kube-api-access-4ffp4\") pod \"coredns-668d6bf9bc-6s6kg\" (UID: \"7031cc44-2bc3-4894-9b05-65dab6c01c28\") " pod="kube-system/coredns-668d6bf9bc-6s6kg" Jan 14 13:36:58.400521 kubelet[2966]: I0114 13:36:58.396952 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwfqx\" (UniqueName: \"kubernetes.io/projected/63b80aee-dca1-4c6e-95c1-ebc4781e5795-kube-api-access-wwfqx\") pod \"whisker-568d698658-t9psg\" (UID: \"63b80aee-dca1-4c6e-95c1-ebc4781e5795\") " pod="calico-system/whisker-568d698658-t9psg" Jan 14 13:36:58.400521 kubelet[2966]: I0114 13:36:58.396990 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/23681fa6-27ca-4d7d-86fa-c674a7318b4d-calico-apiserver-certs\") pod \"calico-apiserver-556cb8cff8-42gh4\" (UID: \"23681fa6-27ca-4d7d-86fa-c674a7318b4d\") " pod="calico-apiserver/calico-apiserver-556cb8cff8-42gh4" Jan 14 13:36:58.400521 kubelet[2966]: I0114 13:36:58.397023 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/55d7c70a-6e6b-4527-8616-3cbcdf2d3394-goldmane-key-pair\") pod \"goldmane-666569f655-wtc4j\" (UID: \"55d7c70a-6e6b-4527-8616-3cbcdf2d3394\") " pod="calico-system/goldmane-666569f655-wtc4j" Jan 14 13:36:58.400864 kubelet[2966]: I0114 13:36:58.397064 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/09be4418-7a52-4e16-b65e-453c324deb2d-calico-apiserver-certs\") pod \"calico-apiserver-556cb8cff8-69s5c\" (UID: \"09be4418-7a52-4e16-b65e-453c324deb2d\") " pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" Jan 14 13:36:58.400864 kubelet[2966]: I0114 13:36:58.397096 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9af9fcdf-2905-4c21-b8a3-70ab543f6a40-tigera-ca-bundle\") pod \"calico-kube-controllers-b65bb7cd-xwm2s\" (UID: \"9af9fcdf-2905-4c21-b8a3-70ab543f6a40\") " pod="calico-system/calico-kube-controllers-b65bb7cd-xwm2s" Jan 14 13:36:58.400864 kubelet[2966]: I0114 13:36:58.397122 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fmsb\" (UniqueName: \"kubernetes.io/projected/9af9fcdf-2905-4c21-b8a3-70ab543f6a40-kube-api-access-5fmsb\") pod \"calico-kube-controllers-b65bb7cd-xwm2s\" (UID: \"9af9fcdf-2905-4c21-b8a3-70ab543f6a40\") " pod="calico-system/calico-kube-controllers-b65bb7cd-xwm2s" Jan 14 13:36:58.400864 kubelet[2966]: I0114 13:36:58.397162 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qhl2p\" (UniqueName: \"kubernetes.io/projected/09be4418-7a52-4e16-b65e-453c324deb2d-kube-api-access-qhl2p\") pod \"calico-apiserver-556cb8cff8-69s5c\" (UID: \"09be4418-7a52-4e16-b65e-453c324deb2d\") " pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" Jan 14 13:36:58.400864 kubelet[2966]: I0114 13:36:58.397207 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55d7c70a-6e6b-4527-8616-3cbcdf2d3394-goldmane-ca-bundle\") pod \"goldmane-666569f655-wtc4j\" (UID: \"55d7c70a-6e6b-4527-8616-3cbcdf2d3394\") " pod="calico-system/goldmane-666569f655-wtc4j" Jan 14 13:36:58.591694 containerd[1649]: time="2026-01-14T13:36:58.591376337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tndpb,Uid:f6b96f88-9628-4cd4-a240-d0aa195ee125,Namespace:kube-system,Attempt:0,}" Jan 14 13:36:58.614119 containerd[1649]: time="2026-01-14T13:36:58.613657527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556cb8cff8-69s5c,Uid:09be4418-7a52-4e16-b65e-453c324deb2d,Namespace:calico-apiserver,Attempt:0,}" Jan 14 13:36:58.658591 containerd[1649]: time="2026-01-14T13:36:58.657594398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b65bb7cd-xwm2s,Uid:9af9fcdf-2905-4c21-b8a3-70ab543f6a40,Namespace:calico-system,Attempt:0,}" Jan 14 13:36:58.680605 containerd[1649]: time="2026-01-14T13:36:58.679760163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556cb8cff8-42gh4,Uid:23681fa6-27ca-4d7d-86fa-c674a7318b4d,Namespace:calico-apiserver,Attempt:0,}" Jan 14 13:36:58.682686 containerd[1649]: time="2026-01-14T13:36:58.682636249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6s6kg,Uid:7031cc44-2bc3-4894-9b05-65dab6c01c28,Namespace:kube-system,Attempt:0,}" Jan 14 13:36:58.699051 containerd[1649]: time="2026-01-14T13:36:58.699005774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wtc4j,Uid:55d7c70a-6e6b-4527-8616-3cbcdf2d3394,Namespace:calico-system,Attempt:0,}" Jan 14 13:36:58.707145 containerd[1649]: time="2026-01-14T13:36:58.707107244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-568d698658-t9psg,Uid:63b80aee-dca1-4c6e-95c1-ebc4781e5795,Namespace:calico-system,Attempt:0,}" Jan 14 13:36:58.900623 containerd[1649]: time="2026-01-14T13:36:58.897684419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 13:36:59.044992 containerd[1649]: time="2026-01-14T13:36:59.044411165Z" level=error msg="Failed to destroy network for sandbox \"a309a3baef77b44e295c6b060f627baef5cb8cf687f7c36dc40e64825b2046e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.048839 containerd[1649]: time="2026-01-14T13:36:59.048752944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556cb8cff8-42gh4,Uid:23681fa6-27ca-4d7d-86fa-c674a7318b4d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a309a3baef77b44e295c6b060f627baef5cb8cf687f7c36dc40e64825b2046e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 
13:36:59.051682 kubelet[2966]: E0114 13:36:59.051493 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a309a3baef77b44e295c6b060f627baef5cb8cf687f7c36dc40e64825b2046e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.053194 kubelet[2966]: E0114 13:36:59.052319 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a309a3baef77b44e295c6b060f627baef5cb8cf687f7c36dc40e64825b2046e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556cb8cff8-42gh4" Jan 14 13:36:59.053194 kubelet[2966]: E0114 13:36:59.052630 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a309a3baef77b44e295c6b060f627baef5cb8cf687f7c36dc40e64825b2046e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556cb8cff8-42gh4" Jan 14 13:36:59.053194 kubelet[2966]: E0114 13:36:59.052739 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-556cb8cff8-42gh4_calico-apiserver(23681fa6-27ca-4d7d-86fa-c674a7318b4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-556cb8cff8-42gh4_calico-apiserver(23681fa6-27ca-4d7d-86fa-c674a7318b4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a309a3baef77b44e295c6b060f627baef5cb8cf687f7c36dc40e64825b2046e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-42gh4" podUID="23681fa6-27ca-4d7d-86fa-c674a7318b4d" Jan 14 13:36:59.053420 containerd[1649]: time="2026-01-14T13:36:59.052126608Z" level=error msg="Failed to destroy network for sandbox \"2f76bc150f370cf6283cb2ce13d2ea2c83b3444888947ae3c270a5991abc7460\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.058133 containerd[1649]: time="2026-01-14T13:36:59.057994958Z" level=error msg="Failed to destroy network for sandbox \"c572fd52844e876a80f385927b4d700c601b6a3ec90130402777a429dad5c5cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.059475 containerd[1649]: time="2026-01-14T13:36:59.059294224Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tndpb,Uid:f6b96f88-9628-4cd4-a240-d0aa195ee125,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f76bc150f370cf6283cb2ce13d2ea2c83b3444888947ae3c270a5991abc7460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.070705 kubelet[2966]: E0114 13:36:59.070127 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f76bc150f370cf6283cb2ce13d2ea2c83b3444888947ae3c270a5991abc7460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.070705 kubelet[2966]: E0114 13:36:59.070207 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f76bc150f370cf6283cb2ce13d2ea2c83b3444888947ae3c270a5991abc7460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tndpb" Jan 14 13:36:59.070705 kubelet[2966]: E0114 13:36:59.070245 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f76bc150f370cf6283cb2ce13d2ea2c83b3444888947ae3c270a5991abc7460\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tndpb" Jan 14 13:36:59.070928 kubelet[2966]: E0114 13:36:59.070328 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tndpb_kube-system(f6b96f88-9628-4cd4-a240-d0aa195ee125)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tndpb_kube-system(f6b96f88-9628-4cd4-a240-d0aa195ee125)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f76bc150f370cf6283cb2ce13d2ea2c83b3444888947ae3c270a5991abc7460\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tndpb" podUID="f6b96f88-9628-4cd4-a240-d0aa195ee125" Jan 14 13:36:59.073261 containerd[1649]: time="2026-01-14T13:36:59.073211568Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556cb8cff8-69s5c,Uid:09be4418-7a52-4e16-b65e-453c324deb2d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c572fd52844e876a80f385927b4d700c601b6a3ec90130402777a429dad5c5cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.073469 kubelet[2966]: E0114 13:36:59.073408 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c572fd52844e876a80f385927b4d700c601b6a3ec90130402777a429dad5c5cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.073847 kubelet[2966]: E0114 13:36:59.073478 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c572fd52844e876a80f385927b4d700c601b6a3ec90130402777a429dad5c5cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" Jan 14 13:36:59.073847 kubelet[2966]: E0114 13:36:59.073505 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c572fd52844e876a80f385927b4d700c601b6a3ec90130402777a429dad5c5cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" Jan 14 13:36:59.073847 kubelet[2966]: E0114 13:36:59.073592 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-556cb8cff8-69s5c_calico-apiserver(09be4418-7a52-4e16-b65e-453c324deb2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-556cb8cff8-69s5c_calico-apiserver(09be4418-7a52-4e16-b65e-453c324deb2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c572fd52844e876a80f385927b4d700c601b6a3ec90130402777a429dad5c5cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" podUID="09be4418-7a52-4e16-b65e-453c324deb2d" Jan 14 13:36:59.079359 containerd[1649]: time="2026-01-14T13:36:59.079305767Z" level=error msg="Failed to destroy network for sandbox \"2834a3c6d33ebb2236152c9358e0abd8a15e5635838e9bee47965c455e5c9b0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.083325 containerd[1649]: time="2026-01-14T13:36:59.083260503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-568d698658-t9psg,Uid:63b80aee-dca1-4c6e-95c1-ebc4781e5795,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2834a3c6d33ebb2236152c9358e0abd8a15e5635838e9bee47965c455e5c9b0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.084620 kubelet[2966]: E0114 13:36:59.083510 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2834a3c6d33ebb2236152c9358e0abd8a15e5635838e9bee47965c455e5c9b0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.085151 kubelet[2966]: E0114 13:36:59.084749 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2834a3c6d33ebb2236152c9358e0abd8a15e5635838e9bee47965c455e5c9b0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-568d698658-t9psg" 
Jan 14 13:36:59.085151 kubelet[2966]: E0114 13:36:59.084790 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2834a3c6d33ebb2236152c9358e0abd8a15e5635838e9bee47965c455e5c9b0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-568d698658-t9psg" Jan 14 13:36:59.085151 kubelet[2966]: E0114 13:36:59.084860 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-568d698658-t9psg_calico-system(63b80aee-dca1-4c6e-95c1-ebc4781e5795)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-568d698658-t9psg_calico-system(63b80aee-dca1-4c6e-95c1-ebc4781e5795)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2834a3c6d33ebb2236152c9358e0abd8a15e5635838e9bee47965c455e5c9b0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-568d698658-t9psg" podUID="63b80aee-dca1-4c6e-95c1-ebc4781e5795" Jan 14 13:36:59.085509 containerd[1649]: time="2026-01-14T13:36:59.085475267Z" level=error msg="Failed to destroy network for sandbox \"7722e9be01960b07bffc048c51d676946320a288e4a1ac8646e389d9f6b8bf57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.089028 containerd[1649]: time="2026-01-14T13:36:59.088981564Z" level=error msg="Failed to destroy network for sandbox \"796e3270f339ba005080b536c5bf4dc103d762bf45ce6e37676ebe222271627f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.089862 containerd[1649]: time="2026-01-14T13:36:59.089794269Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6s6kg,Uid:7031cc44-2bc3-4894-9b05-65dab6c01c28,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7722e9be01960b07bffc048c51d676946320a288e4a1ac8646e389d9f6b8bf57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.090833 kubelet[2966]: E0114 13:36:59.090787 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7722e9be01960b07bffc048c51d676946320a288e4a1ac8646e389d9f6b8bf57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.090929 kubelet[2966]: E0114 13:36:59.090867 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7722e9be01960b07bffc048c51d676946320a288e4a1ac8646e389d9f6b8bf57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-6s6kg" Jan 14 13:36:59.090929 kubelet[2966]: E0114 13:36:59.090898 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7722e9be01960b07bffc048c51d676946320a288e4a1ac8646e389d9f6b8bf57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6s6kg" Jan 14 13:36:59.091387 kubelet[2966]: E0114 13:36:59.091071 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6s6kg_kube-system(7031cc44-2bc3-4894-9b05-65dab6c01c28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6s6kg_kube-system(7031cc44-2bc3-4894-9b05-65dab6c01c28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7722e9be01960b07bffc048c51d676946320a288e4a1ac8646e389d9f6b8bf57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6s6kg" podUID="7031cc44-2bc3-4894-9b05-65dab6c01c28" Jan 14 13:36:59.093028 containerd[1649]: time="2026-01-14T13:36:59.092724818Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b65bb7cd-xwm2s,Uid:9af9fcdf-2905-4c21-b8a3-70ab543f6a40,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"796e3270f339ba005080b536c5bf4dc103d762bf45ce6e37676ebe222271627f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.095930 containerd[1649]: time="2026-01-14T13:36:59.095803989Z" level=error msg="Failed to destroy network for sandbox \"a886f3f5d55bcfd9e4df48a43d08c95abd279b8a15c0254b4903e9c91aff7abd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.097709 kubelet[2966]: E0114 13:36:59.097631 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796e3270f339ba005080b536c5bf4dc103d762bf45ce6e37676ebe222271627f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.097709 kubelet[2966]: E0114 13:36:59.097681 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796e3270f339ba005080b536c5bf4dc103d762bf45ce6e37676ebe222271627f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b65bb7cd-xwm2s" Jan 14 13:36:59.097934 kubelet[2966]: E0114 13:36:59.097707 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796e3270f339ba005080b536c5bf4dc103d762bf45ce6e37676ebe222271627f\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b65bb7cd-xwm2s" Jan 14 13:36:59.097934 kubelet[2966]: E0114 13:36:59.097797 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b65bb7cd-xwm2s_calico-system(9af9fcdf-2905-4c21-b8a3-70ab543f6a40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b65bb7cd-xwm2s_calico-system(9af9fcdf-2905-4c21-b8a3-70ab543f6a40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"796e3270f339ba005080b536c5bf4dc103d762bf45ce6e37676ebe222271627f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b65bb7cd-xwm2s" podUID="9af9fcdf-2905-4c21-b8a3-70ab543f6a40" Jan 14 13:36:59.098337 containerd[1649]: time="2026-01-14T13:36:59.097799206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wtc4j,Uid:55d7c70a-6e6b-4527-8616-3cbcdf2d3394,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a886f3f5d55bcfd9e4df48a43d08c95abd279b8a15c0254b4903e9c91aff7abd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.098521 kubelet[2966]: E0114 13:36:59.098387 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a886f3f5d55bcfd9e4df48a43d08c95abd279b8a15c0254b4903e9c91aff7abd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.098521 kubelet[2966]: E0114 13:36:59.098466 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a886f3f5d55bcfd9e4df48a43d08c95abd279b8a15c0254b4903e9c91aff7abd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wtc4j" Jan 14 13:36:59.098521 kubelet[2966]: E0114 13:36:59.098495 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a886f3f5d55bcfd9e4df48a43d08c95abd279b8a15c0254b4903e9c91aff7abd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wtc4j" Jan 14 13:36:59.099871 kubelet[2966]: E0114 13:36:59.098937 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-wtc4j_calico-system(55d7c70a-6e6b-4527-8616-3cbcdf2d3394)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-wtc4j_calico-system(55d7c70a-6e6b-4527-8616-3cbcdf2d3394)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a886f3f5d55bcfd9e4df48a43d08c95abd279b8a15c0254b4903e9c91aff7abd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-wtc4j" podUID="55d7c70a-6e6b-4527-8616-3cbcdf2d3394" Jan 14 13:36:59.647814 systemd[1]: Created slice kubepods-besteffort-pod1584f8ba_fd2c_4903_be8f_c6577809742f.slice - libcontainer container kubepods-besteffort-pod1584f8ba_fd2c_4903_be8f_c6577809742f.slice. Jan 14 13:36:59.652685 containerd[1649]: time="2026-01-14T13:36:59.652634802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7v2t7,Uid:1584f8ba-fd2c-4903-be8f-c6577809742f,Namespace:calico-system,Attempt:0,}" Jan 14 13:36:59.731797 containerd[1649]: time="2026-01-14T13:36:59.731738940Z" level=error msg="Failed to destroy network for sandbox \"e7ec50d1a6a3343a637d70e9e2d008bf591ad3311261de563309d036bfe86658\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.735234 systemd[1]: run-netns-cni\x2d3afc6259\x2d108f\x2d3d5b\x2ddc42\x2dbfdc04365503.mount: Deactivated successfully. Jan 14 13:36:59.736071 kubelet[2966]: E0114 13:36:59.735620 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7ec50d1a6a3343a637d70e9e2d008bf591ad3311261de563309d036bfe86658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.736071 kubelet[2966]: E0114 13:36:59.735678 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7ec50d1a6a3343a637d70e9e2d008bf591ad3311261de563309d036bfe86658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7v2t7" Jan 14 13:36:59.736071 kubelet[2966]: E0114 13:36:59.735799 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7ec50d1a6a3343a637d70e9e2d008bf591ad3311261de563309d036bfe86658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7v2t7" Jan 14 13:36:59.736236 containerd[1649]: time="2026-01-14T13:36:59.735287554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7v2t7,Uid:1584f8ba-fd2c-4903-be8f-c6577809742f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7ec50d1a6a3343a637d70e9e2d008bf591ad3311261de563309d036bfe86658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:36:59.736655 kubelet[2966]: E0114 13:36:59.735873 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7v2t7_calico-system(1584f8ba-fd2c-4903-be8f-c6577809742f)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7v2t7_calico-system(1584f8ba-fd2c-4903-be8f-c6577809742f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7ec50d1a6a3343a637d70e9e2d008bf591ad3311261de563309d036bfe86658\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:37:09.665378 containerd[1649]: time="2026-01-14T13:37:09.665231946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6s6kg,Uid:7031cc44-2bc3-4894-9b05-65dab6c01c28,Namespace:kube-system,Attempt:0,}" Jan 14 13:37:10.006508 containerd[1649]: time="2026-01-14T13:37:10.005880340Z" level=error msg="Failed to destroy network for sandbox \"f22ddabd36c0cac0aa6c9833d798a4ba1ba5ca2a6c7e8be332de70cf1615305b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:37:10.009736 containerd[1649]: time="2026-01-14T13:37:10.008391156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6s6kg,Uid:7031cc44-2bc3-4894-9b05-65dab6c01c28,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f22ddabd36c0cac0aa6c9833d798a4ba1ba5ca2a6c7e8be332de70cf1615305b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:37:10.011300 systemd[1]: run-netns-cni\x2d15eb5daf\x2da286\x2dd0bf\x2dd2a2\x2df57136bfabdf.mount: Deactivated successfully. 
Jan 14 13:37:10.013123 kubelet[2966]: E0114 13:37:10.012775 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f22ddabd36c0cac0aa6c9833d798a4ba1ba5ca2a6c7e8be332de70cf1615305b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:37:10.013123 kubelet[2966]: E0114 13:37:10.012921 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f22ddabd36c0cac0aa6c9833d798a4ba1ba5ca2a6c7e8be332de70cf1615305b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6s6kg" Jan 14 13:37:10.013123 kubelet[2966]: E0114 13:37:10.012992 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f22ddabd36c0cac0aa6c9833d798a4ba1ba5ca2a6c7e8be332de70cf1615305b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6s6kg" Jan 14 13:37:10.017129 kubelet[2966]: E0114 13:37:10.013115 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6s6kg_kube-system(7031cc44-2bc3-4894-9b05-65dab6c01c28)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6s6kg_kube-system(7031cc44-2bc3-4894-9b05-65dab6c01c28)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f22ddabd36c0cac0aa6c9833d798a4ba1ba5ca2a6c7e8be332de70cf1615305b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6s6kg" podUID="7031cc44-2bc3-4894-9b05-65dab6c01c28" Jan 14 13:37:10.662693 containerd[1649]: time="2026-01-14T13:37:10.641384467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556cb8cff8-69s5c,Uid:09be4418-7a52-4e16-b65e-453c324deb2d,Namespace:calico-apiserver,Attempt:0,}" Jan 14 13:37:10.834949 containerd[1649]: time="2026-01-14T13:37:10.834868072Z" level=error msg="Failed to destroy network for sandbox \"fac8c45d4b1cfe2ff2a72c4a78f5f4a6b09bd834ec6930016f04b16d2092dc35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:37:10.838875 systemd[1]: run-netns-cni\x2daf209b50\x2dfb6f\x2dd7bb\x2d136b\x2d65e884ab6eaf.mount: Deactivated successfully. 
Jan 14 13:37:10.841487 containerd[1649]: time="2026-01-14T13:37:10.841433060Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556cb8cff8-69s5c,Uid:09be4418-7a52-4e16-b65e-453c324deb2d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fac8c45d4b1cfe2ff2a72c4a78f5f4a6b09bd834ec6930016f04b16d2092dc35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:37:10.841942 kubelet[2966]: E0114 13:37:10.841725 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fac8c45d4b1cfe2ff2a72c4a78f5f4a6b09bd834ec6930016f04b16d2092dc35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:37:10.842074 kubelet[2966]: E0114 13:37:10.841975 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fac8c45d4b1cfe2ff2a72c4a78f5f4a6b09bd834ec6930016f04b16d2092dc35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" Jan 14 13:37:10.842263 kubelet[2966]: E0114 13:37:10.842109 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fac8c45d4b1cfe2ff2a72c4a78f5f4a6b09bd834ec6930016f04b16d2092dc35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" Jan 14 13:37:10.842587 kubelet[2966]: E0114 13:37:10.842429 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-556cb8cff8-69s5c_calico-apiserver(09be4418-7a52-4e16-b65e-453c324deb2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-556cb8cff8-69s5c_calico-apiserver(09be4418-7a52-4e16-b65e-453c324deb2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fac8c45d4b1cfe2ff2a72c4a78f5f4a6b09bd834ec6930016f04b16d2092dc35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" podUID="09be4418-7a52-4e16-b65e-453c324deb2d" Jan 14 13:37:11.189463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3137757738.mount: Deactivated successfully. 
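The retried sandbox creations above keep failing on the same precondition: the Calico CNI plugin cannot stat /var/lib/calico/nodename until the calico-node container is running and has mounted /var/lib/calico/, exactly as the error text says. A minimal Go sketch of that check, for illustration only (this is not the plugin's actual source, just the stat/read the message describes):

package main

import (
	"fmt"
	"os"
)

// nodenameFile is the path the Calico CNI plugin consults to learn which
// calico/node instance owns this host; the errors above come from it being
// absent before calico-node has started.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	if _, err := os.Stat(nodenameFile); err != nil {
		// Same failure mode as the log entries above.
		fmt.Printf("stat %s: %v: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenameFile, err)
		os.Exit(1)
	}
	name, err := os.ReadFile(nodenameFile)
	if err != nil {
		fmt.Println("read failed:", err)
		os.Exit(1)
	}
	fmt.Println("calico nodename:", string(name))
}

Once calico-node starts (see the PullImage and StartContainer entries that follow), the file appears and sandbox creation begins to succeed.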
Jan 14 13:37:11.236340 containerd[1649]: time="2026-01-14T13:37:11.235649525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:37:11.240325 containerd[1649]: time="2026-01-14T13:37:11.239178050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 14 13:37:11.246730 containerd[1649]: time="2026-01-14T13:37:11.246698491Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:37:11.250164 containerd[1649]: time="2026-01-14T13:37:11.250110039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 13:37:11.251183 containerd[1649]: time="2026-01-14T13:37:11.251145808Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 12.3473408s" Jan 14 13:37:11.251340 containerd[1649]: time="2026-01-14T13:37:11.251312489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 13:37:11.291880 containerd[1649]: time="2026-01-14T13:37:11.291817229Z" level=info msg="CreateContainer within sandbox \"0bc022cd08bc941522c23e510990bfb1357a2cf03586ab99a1f5a2453a6b32c4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 13:37:11.363488 containerd[1649]: time="2026-01-14T13:37:11.358516365Z" level=info msg="Container f587f042a4829a3083650c92332bb1cbfc534da0b79945549a6381cb89bbbdb5: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:37:11.365083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount74521320.mount: Deactivated successfully. Jan 14 13:37:11.380642 containerd[1649]: time="2026-01-14T13:37:11.380498341Z" level=info msg="CreateContainer within sandbox \"0bc022cd08bc941522c23e510990bfb1357a2cf03586ab99a1f5a2453a6b32c4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f587f042a4829a3083650c92332bb1cbfc534da0b79945549a6381cb89bbbdb5\"" Jan 14 13:37:11.383025 containerd[1649]: time="2026-01-14T13:37:11.382992355Z" level=info msg="StartContainer for \"f587f042a4829a3083650c92332bb1cbfc534da0b79945549a6381cb89bbbdb5\"" Jan 14 13:37:11.400949 containerd[1649]: time="2026-01-14T13:37:11.400665041Z" level=info msg="connecting to shim f587f042a4829a3083650c92332bb1cbfc534da0b79945549a6381cb89bbbdb5" address="unix:///run/containerd/s/caaa8c7b08ce88e535bdb6a6b0bca171157400af0d0de636c06d7a431e617e1d" protocol=ttrpc version=3 Jan 14 13:37:11.583540 systemd[1]: Started cri-containerd-f587f042a4829a3083650c92332bb1cbfc534da0b79945549a6381cb89bbbdb5.scope - libcontainer container f587f042a4829a3083650c92332bb1cbfc534da0b79945549a6381cb89bbbdb5. 
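The entries above show the calico/node image pull completing and the calico-node container being started; the "connecting to shim ... protocol=ttrpc" line is containerd dialing the per-container shim over the unix socket printed in the message. For context, an equivalent pull can be driven from the containerd Go client; this is a hedged sketch assuming the containerd 1.x client module path and the "k8s.io" CRI namespace seen in the log:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance the kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the same calico/node image reported in the log.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", image.Name())
}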
Jan 14 13:37:11.639777 containerd[1649]: time="2026-01-14T13:37:11.639707627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wtc4j,Uid:55d7c70a-6e6b-4527-8616-3cbcdf2d3394,Namespace:calico-system,Attempt:0,}" Jan 14 13:37:11.740000 audit: BPF prog-id=176 op=LOAD Jan 14 13:37:11.748759 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 13:37:11.748874 kernel: audit: type=1334 audit(1768397831.740:579): prog-id=176 op=LOAD Jan 14 13:37:11.740000 audit[3954]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019c488 a2=98 a3=0 items=0 ppid=3422 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:11.760603 kernel: audit: type=1300 audit(1768397831.740:579): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019c488 a2=98 a3=0 items=0 ppid=3422 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:11.764387 containerd[1649]: time="2026-01-14T13:37:11.764239229Z" level=error msg="Failed to destroy network for sandbox \"d3fdec3a159633249db7e613ac706218b7d145f4914f09ff7129abb36cfe3f3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:37:11.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635383766303432613438323961333038333635306339323333326262 Jan 14 13:37:11.767077 containerd[1649]: time="2026-01-14T13:37:11.766972816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wtc4j,Uid:55d7c70a-6e6b-4527-8616-3cbcdf2d3394,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3fdec3a159633249db7e613ac706218b7d145f4914f09ff7129abb36cfe3f3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:37:11.768051 kubelet[2966]: E0114 13:37:11.767898 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3fdec3a159633249db7e613ac706218b7d145f4914f09ff7129abb36cfe3f3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 13:37:11.768858 kubelet[2966]: E0114 13:37:11.768784 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3fdec3a159633249db7e613ac706218b7d145f4914f09ff7129abb36cfe3f3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wtc4j" Jan 14 13:37:11.769075 kubelet[2966]: E0114 13:37:11.769027 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"d3fdec3a159633249db7e613ac706218b7d145f4914f09ff7129abb36cfe3f3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wtc4j" Jan 14 13:37:11.769437 kubelet[2966]: E0114 13:37:11.769347 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-wtc4j_calico-system(55d7c70a-6e6b-4527-8616-3cbcdf2d3394)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-wtc4j_calico-system(55d7c70a-6e6b-4527-8616-3cbcdf2d3394)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3fdec3a159633249db7e613ac706218b7d145f4914f09ff7129abb36cfe3f3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-wtc4j" podUID="55d7c70a-6e6b-4527-8616-3cbcdf2d3394" Jan 14 13:37:11.769764 kernel: audit: type=1327 audit(1768397831.740:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635383766303432613438323961333038333635306339323333326262 Jan 14 13:37:11.749000 audit: BPF prog-id=177 op=LOAD Jan 14 13:37:11.749000 audit[3954]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00019c218 a2=98 a3=0 items=0 ppid=3422 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:11.775438 kernel: audit: type=1334 audit(1768397831.749:580): prog-id=177 op=LOAD Jan 14 13:37:11.775517 kernel: audit: type=1300 audit(1768397831.749:580): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00019c218 a2=98 a3=0 items=0 ppid=3422 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:11.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635383766303432613438323961333038333635306339323333326262 Jan 14 13:37:11.788589 kernel: audit: type=1327 audit(1768397831.749:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635383766303432613438323961333038333635306339323333326262 Jan 14 13:37:11.788708 kernel: audit: type=1334 audit(1768397831.749:581): prog-id=177 op=UNLOAD Jan 14 13:37:11.788765 kernel: audit: type=1300 audit(1768397831.749:581): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:11.749000 audit: BPF prog-id=177 op=UNLOAD Jan 14 13:37:11.749000 audit[3954]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3954 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:11.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635383766303432613438323961333038333635306339323333326262 Jan 14 13:37:11.804313 kernel: audit: type=1327 audit(1768397831.749:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635383766303432613438323961333038333635306339323333326262 Jan 14 13:37:11.751000 audit: BPF prog-id=176 op=UNLOAD Jan 14 13:37:11.812360 kernel: audit: type=1334 audit(1768397831.751:582): prog-id=176 op=UNLOAD Jan 14 13:37:11.751000 audit[3954]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3422 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:11.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635383766303432613438323961333038333635306339323333326262 Jan 14 13:37:11.751000 audit: BPF prog-id=178 op=LOAD Jan 14 13:37:11.751000 audit[3954]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00019c6e8 a2=98 a3=0 items=0 ppid=3422 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:11.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635383766303432613438323961333038333635306339323333326262 Jan 14 13:37:11.860020 containerd[1649]: time="2026-01-14T13:37:11.859528755Z" level=info msg="StartContainer for \"f587f042a4829a3083650c92332bb1cbfc534da0b79945549a6381cb89bbbdb5\" returns successfully" Jan 14 13:37:12.044475 kubelet[2966]: I0114 13:37:12.044020 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-q9mmc" podStartSLOduration=1.386041247 podStartE2EDuration="27.043974035s" podCreationTimestamp="2026-01-14 13:36:45 +0000 UTC" firstStartedPulling="2026-01-14 13:36:45.594950213 +0000 UTC m=+25.169023787" lastFinishedPulling="2026-01-14 13:37:11.252883 +0000 UTC m=+50.826956575" observedRunningTime="2026-01-14 13:37:12.035082141 +0000 UTC m=+51.609155734" watchObservedRunningTime="2026-01-14 13:37:12.043974035 +0000 UTC m=+51.618047649" Jan 14 13:37:12.297488 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 13:37:12.298286 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
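The pod_startup_latency_tracker entry above is internally consistent: the image-pull window (lastFinishedPulling 13:37:11.252883 minus firstStartedPulling 13:36:45.594950) is about 25.658s, and subtracting it from podStartE2EDuration (about 27.044s) leaves about 1.386s, matching the reported podStartSLOduration, i.e. the SLO figure excludes time spent pulling images.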
Jan 14 13:37:12.613678 kubelet[2966]: I0114 13:37:12.613089 2966 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63b80aee-dca1-4c6e-95c1-ebc4781e5795-whisker-ca-bundle\") pod \"63b80aee-dca1-4c6e-95c1-ebc4781e5795\" (UID: \"63b80aee-dca1-4c6e-95c1-ebc4781e5795\") " Jan 14 13:37:12.614615 kubelet[2966]: I0114 13:37:12.614133 2966 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwfqx\" (UniqueName: \"kubernetes.io/projected/63b80aee-dca1-4c6e-95c1-ebc4781e5795-kube-api-access-wwfqx\") pod \"63b80aee-dca1-4c6e-95c1-ebc4781e5795\" (UID: \"63b80aee-dca1-4c6e-95c1-ebc4781e5795\") " Jan 14 13:37:12.614615 kubelet[2966]: I0114 13:37:12.614204 2966 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/63b80aee-dca1-4c6e-95c1-ebc4781e5795-whisker-backend-key-pair\") pod \"63b80aee-dca1-4c6e-95c1-ebc4781e5795\" (UID: \"63b80aee-dca1-4c6e-95c1-ebc4781e5795\") " Jan 14 13:37:12.614615 kubelet[2966]: I0114 13:37:12.614941 2966 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63b80aee-dca1-4c6e-95c1-ebc4781e5795-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "63b80aee-dca1-4c6e-95c1-ebc4781e5795" (UID: "63b80aee-dca1-4c6e-95c1-ebc4781e5795"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 13:37:12.636854 kubelet[2966]: I0114 13:37:12.636786 2966 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63b80aee-dca1-4c6e-95c1-ebc4781e5795-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "63b80aee-dca1-4c6e-95c1-ebc4781e5795" (UID: "63b80aee-dca1-4c6e-95c1-ebc4781e5795"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 13:37:12.638754 systemd[1]: var-lib-kubelet-pods-63b80aee\x2ddca1\x2d4c6e\x2d95c1\x2debc4781e5795-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 13:37:12.644588 kubelet[2966]: I0114 13:37:12.643208 2966 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63b80aee-dca1-4c6e-95c1-ebc4781e5795-kube-api-access-wwfqx" (OuterVolumeSpecName: "kube-api-access-wwfqx") pod "63b80aee-dca1-4c6e-95c1-ebc4781e5795" (UID: "63b80aee-dca1-4c6e-95c1-ebc4781e5795"). InnerVolumeSpecName "kube-api-access-wwfqx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 13:37:12.646408 systemd[1]: var-lib-kubelet-pods-63b80aee\x2ddca1\x2d4c6e\x2d95c1\x2debc4781e5795-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwwfqx.mount: Deactivated successfully. Jan 14 13:37:12.649926 containerd[1649]: time="2026-01-14T13:37:12.649863082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7v2t7,Uid:1584f8ba-fd2c-4903-be8f-c6577809742f,Namespace:calico-system,Attempt:0,}" Jan 14 13:37:12.683339 systemd[1]: Removed slice kubepods-besteffort-pod63b80aee_dca1_4c6e_95c1_ebc4781e5795.slice - libcontainer container kubepods-besteffort-pod63b80aee_dca1_4c6e_95c1_ebc4781e5795.slice. 
Jan 14 13:37:12.720613 kubelet[2966]: I0114 13:37:12.718139 2966 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/63b80aee-dca1-4c6e-95c1-ebc4781e5795-whisker-backend-key-pair\") on node \"srv-414dr.gb1.brightbox.com\" DevicePath \"\"" Jan 14 13:37:12.720613 kubelet[2966]: I0114 13:37:12.720616 2966 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63b80aee-dca1-4c6e-95c1-ebc4781e5795-whisker-ca-bundle\") on node \"srv-414dr.gb1.brightbox.com\" DevicePath \"\"" Jan 14 13:37:12.720986 kubelet[2966]: I0114 13:37:12.720635 2966 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wwfqx\" (UniqueName: \"kubernetes.io/projected/63b80aee-dca1-4c6e-95c1-ebc4781e5795-kube-api-access-wwfqx\") on node \"srv-414dr.gb1.brightbox.com\" DevicePath \"\"" Jan 14 13:37:13.186895 systemd[1]: Created slice kubepods-besteffort-podcd503a8a_4839_474d_9ad1_19916832d0a7.slice - libcontainer container kubepods-besteffort-podcd503a8a_4839_474d_9ad1_19916832d0a7.slice. Jan 14 13:37:13.225150 kubelet[2966]: I0114 13:37:13.225095 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnqp5\" (UniqueName: \"kubernetes.io/projected/cd503a8a-4839-474d-9ad1-19916832d0a7-kube-api-access-qnqp5\") pod \"whisker-79b478485b-99m4n\" (UID: \"cd503a8a-4839-474d-9ad1-19916832d0a7\") " pod="calico-system/whisker-79b478485b-99m4n" Jan 14 13:37:13.228071 kubelet[2966]: I0114 13:37:13.226069 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd503a8a-4839-474d-9ad1-19916832d0a7-whisker-ca-bundle\") pod \"whisker-79b478485b-99m4n\" (UID: \"cd503a8a-4839-474d-9ad1-19916832d0a7\") " pod="calico-system/whisker-79b478485b-99m4n" Jan 14 13:37:13.228071 kubelet[2966]: I0114 13:37:13.227956 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cd503a8a-4839-474d-9ad1-19916832d0a7-whisker-backend-key-pair\") pod \"whisker-79b478485b-99m4n\" (UID: \"cd503a8a-4839-474d-9ad1-19916832d0a7\") " pod="calico-system/whisker-79b478485b-99m4n" Jan 14 13:37:13.266924 systemd-networkd[1555]: caliea5123a2690: Link UP Jan 14 13:37:13.270045 systemd-networkd[1555]: caliea5123a2690: Gained carrier Jan 14 13:37:13.318293 containerd[1649]: 2026-01-14 13:37:12.736 [INFO][4076] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:37:13.318293 containerd[1649]: 2026-01-14 13:37:12.800 [INFO][4076] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--414dr.gb1.brightbox.com-k8s-csi--node--driver--7v2t7-eth0 csi-node-driver- calico-system 1584f8ba-fd2c-4903-be8f-c6577809742f 693 0 2026-01-14 13:36:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-414dr.gb1.brightbox.com csi-node-driver-7v2t7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliea5123a2690 [] [] }} ContainerID="c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" Namespace="calico-system" 
Pod="csi-node-driver-7v2t7" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-csi--node--driver--7v2t7-" Jan 14 13:37:13.318293 containerd[1649]: 2026-01-14 13:37:12.801 [INFO][4076] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" Namespace="calico-system" Pod="csi-node-driver-7v2t7" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-csi--node--driver--7v2t7-eth0" Jan 14 13:37:13.318293 containerd[1649]: 2026-01-14 13:37:13.048 [INFO][4087] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" HandleID="k8s-pod-network.c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" Workload="srv--414dr.gb1.brightbox.com-k8s-csi--node--driver--7v2t7-eth0" Jan 14 13:37:13.320203 containerd[1649]: 2026-01-14 13:37:13.052 [INFO][4087] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" HandleID="k8s-pod-network.c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" Workload="srv--414dr.gb1.brightbox.com-k8s-csi--node--driver--7v2t7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103ac0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-414dr.gb1.brightbox.com", "pod":"csi-node-driver-7v2t7", "timestamp":"2026-01-14 13:37:13.048640624 +0000 UTC"}, Hostname:"srv-414dr.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:37:13.320203 containerd[1649]: 2026-01-14 13:37:13.052 [INFO][4087] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:37:13.320203 containerd[1649]: 2026-01-14 13:37:13.053 [INFO][4087] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 13:37:13.320203 containerd[1649]: 2026-01-14 13:37:13.054 [INFO][4087] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-414dr.gb1.brightbox.com' Jan 14 13:37:13.320203 containerd[1649]: 2026-01-14 13:37:13.124 [INFO][4087] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.320203 containerd[1649]: 2026-01-14 13:37:13.140 [INFO][4087] ipam/ipam.go 394: Looking up existing affinities for host host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.320203 containerd[1649]: 2026-01-14 13:37:13.154 [INFO][4087] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.320203 containerd[1649]: 2026-01-14 13:37:13.157 [INFO][4087] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.320203 containerd[1649]: 2026-01-14 13:37:13.163 [INFO][4087] ipam/ipam.go 163: The referenced block doesn't exist, trying to create it cidr=192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.320656 containerd[1649]: 2026-01-14 13:37:13.176 [INFO][4087] ipam/ipam.go 170: Wrote affinity as pending cidr=192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.320656 containerd[1649]: 2026-01-14 13:37:13.193 [INFO][4087] ipam/ipam.go 179: Attempting to claim the block cidr=192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.320656 containerd[1649]: 2026-01-14 13:37:13.193 [INFO][4087] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="srv-414dr.gb1.brightbox.com" subnet=192.168.127.128/26 Jan 14 13:37:13.320656 containerd[1649]: 2026-01-14 13:37:13.205 [INFO][4087] ipam/ipam_block_reader_writer.go 267: Successfully created block Jan 14 13:37:13.320656 containerd[1649]: 2026-01-14 13:37:13.205 [INFO][4087] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="srv-414dr.gb1.brightbox.com" subnet=192.168.127.128/26 Jan 14 13:37:13.320656 containerd[1649]: 2026-01-14 13:37:13.209 [INFO][4087] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="srv-414dr.gb1.brightbox.com" subnet=192.168.127.128/26 Jan 14 13:37:13.320656 containerd[1649]: 2026-01-14 13:37:13.209 [INFO][4087] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.320656 containerd[1649]: 2026-01-14 13:37:13.211 [INFO][4087] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc Jan 14 13:37:13.320656 containerd[1649]: 2026-01-14 13:37:13.219 [INFO][4087] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.320656 containerd[1649]: 2026-01-14 13:37:13.227 [INFO][4087] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.128/26] block=192.168.127.128/26 handle="k8s-pod-network.c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.320656 containerd[1649]: 2026-01-14 13:37:13.227 [INFO][4087] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.128/26] 
handle="k8s-pod-network.c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.321125 containerd[1649]: 2026-01-14 13:37:13.227 [INFO][4087] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 13:37:13.321125 containerd[1649]: 2026-01-14 13:37:13.227 [INFO][4087] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.128/26] IPv6=[] ContainerID="c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" HandleID="k8s-pod-network.c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" Workload="srv--414dr.gb1.brightbox.com-k8s-csi--node--driver--7v2t7-eth0" Jan 14 13:37:13.321237 containerd[1649]: 2026-01-14 13:37:13.233 [INFO][4076] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" Namespace="calico-system" Pod="csi-node-driver-7v2t7" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-csi--node--driver--7v2t7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--414dr.gb1.brightbox.com-k8s-csi--node--driver--7v2t7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1584f8ba-fd2c-4903-be8f-c6577809742f", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 36, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-414dr.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-7v2t7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliea5123a2690", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:37:13.321237 containerd[1649]: 2026-01-14 13:37:13.233 [INFO][4076] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.128/32] ContainerID="c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" Namespace="calico-system" Pod="csi-node-driver-7v2t7" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-csi--node--driver--7v2t7-eth0" Jan 14 13:37:13.321237 containerd[1649]: 2026-01-14 13:37:13.233 [INFO][4076] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea5123a2690 ContainerID="c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" Namespace="calico-system" Pod="csi-node-driver-7v2t7" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-csi--node--driver--7v2t7-eth0" Jan 14 13:37:13.321237 containerd[1649]: 2026-01-14 13:37:13.272 [INFO][4076] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" Namespace="calico-system" Pod="csi-node-driver-7v2t7" 
WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-csi--node--driver--7v2t7-eth0" Jan 14 13:37:13.321237 containerd[1649]: 2026-01-14 13:37:13.275 [INFO][4076] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" Namespace="calico-system" Pod="csi-node-driver-7v2t7" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-csi--node--driver--7v2t7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--414dr.gb1.brightbox.com-k8s-csi--node--driver--7v2t7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1584f8ba-fd2c-4903-be8f-c6577809742f", ResourceVersion:"693", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 36, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-414dr.gb1.brightbox.com", ContainerID:"c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc", Pod:"csi-node-driver-7v2t7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.127.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliea5123a2690", MAC:"16:b0:c7:9b:bd:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:37:13.321237 containerd[1649]: 2026-01-14 13:37:13.313 [INFO][4076] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" Namespace="calico-system" Pod="csi-node-driver-7v2t7" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-csi--node--driver--7v2t7-eth0" Jan 14 13:37:13.496796 containerd[1649]: time="2026-01-14T13:37:13.496542732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79b478485b-99m4n,Uid:cd503a8a-4839-474d-9ad1-19916832d0a7,Namespace:calico-system,Attempt:0,}" Jan 14 13:37:13.531998 containerd[1649]: time="2026-01-14T13:37:13.531940352Z" level=info msg="connecting to shim c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc" address="unix:///run/containerd/s/3bc05f16a989a4465b5f67dd2c717d3ab3b601484766b9b6d012c0df0a2382f3" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:37:13.591849 systemd[1]: Started cri-containerd-c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc.scope - libcontainer container c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc. 
Jan 14 13:37:13.617000 audit: BPF prog-id=179 op=LOAD Jan 14 13:37:13.618000 audit: BPF prog-id=180 op=LOAD Jan 14 13:37:13.618000 audit[4176]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4165 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:13.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338366464336639653437343730333038636261623936373363613362 Jan 14 13:37:13.618000 audit: BPF prog-id=180 op=UNLOAD Jan 14 13:37:13.618000 audit[4176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4165 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:13.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338366464336639653437343730333038636261623936373363613362 Jan 14 13:37:13.618000 audit: BPF prog-id=181 op=LOAD Jan 14 13:37:13.618000 audit[4176]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4165 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:13.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338366464336639653437343730333038636261623936373363613362 Jan 14 13:37:13.618000 audit: BPF prog-id=182 op=LOAD Jan 14 13:37:13.618000 audit[4176]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4165 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:13.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338366464336639653437343730333038636261623936373363613362 Jan 14 13:37:13.618000 audit: BPF prog-id=182 op=UNLOAD Jan 14 13:37:13.618000 audit[4176]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4165 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:13.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338366464336639653437343730333038636261623936373363613362 Jan 14 13:37:13.618000 audit: BPF prog-id=181 op=UNLOAD Jan 14 13:37:13.618000 audit[4176]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4165 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:13.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338366464336639653437343730333038636261623936373363613362 Jan 14 13:37:13.619000 audit: BPF prog-id=183 op=LOAD Jan 14 13:37:13.619000 audit[4176]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4165 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:13.619000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6338366464336639653437343730333038636261623936373363613362 Jan 14 13:37:13.639975 containerd[1649]: time="2026-01-14T13:37:13.639910590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556cb8cff8-42gh4,Uid:23681fa6-27ca-4d7d-86fa-c674a7318b4d,Namespace:calico-apiserver,Attempt:0,}" Jan 14 13:37:13.641202 containerd[1649]: time="2026-01-14T13:37:13.641128115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tndpb,Uid:f6b96f88-9628-4cd4-a240-d0aa195ee125,Namespace:kube-system,Attempt:0,}" Jan 14 13:37:13.641371 containerd[1649]: time="2026-01-14T13:37:13.640475354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b65bb7cd-xwm2s,Uid:9af9fcdf-2905-4c21-b8a3-70ab543f6a40,Namespace:calico-system,Attempt:0,}" Jan 14 13:37:13.675788 containerd[1649]: time="2026-01-14T13:37:13.675741747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7v2t7,Uid:1584f8ba-fd2c-4903-be8f-c6577809742f,Namespace:calico-system,Attempt:0,} returns sandbox id \"c86dd3f9e47470308cbab9673ca3b49d6e4ddc028fba4fb41179ff6dcb67b2bc\"" Jan 14 13:37:13.679728 containerd[1649]: time="2026-01-14T13:37:13.679698812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 13:37:13.767647 systemd-networkd[1555]: cali8adbf2b151c: Link UP Jan 14 13:37:13.772383 systemd-networkd[1555]: cali8adbf2b151c: Gained carrier Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.560 [INFO][4159] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.585 [INFO][4159] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--414dr.gb1.brightbox.com-k8s-whisker--79b478485b--99m4n-eth0 whisker-79b478485b- calico-system cd503a8a-4839-474d-9ad1-19916832d0a7 895 0 2026-01-14 13:37:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79b478485b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-414dr.gb1.brightbox.com whisker-79b478485b-99m4n eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8adbf2b151c [] [] }} 
ContainerID="49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" Namespace="calico-system" Pod="whisker-79b478485b-99m4n" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-whisker--79b478485b--99m4n-" Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.586 [INFO][4159] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" Namespace="calico-system" Pod="whisker-79b478485b-99m4n" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-whisker--79b478485b--99m4n-eth0" Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.645 [INFO][4191] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" HandleID="k8s-pod-network.49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" Workload="srv--414dr.gb1.brightbox.com-k8s-whisker--79b478485b--99m4n-eth0" Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.645 [INFO][4191] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" HandleID="k8s-pod-network.49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" Workload="srv--414dr.gb1.brightbox.com-k8s-whisker--79b478485b--99m4n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-414dr.gb1.brightbox.com", "pod":"whisker-79b478485b-99m4n", "timestamp":"2026-01-14 13:37:13.645517344 +0000 UTC"}, Hostname:"srv-414dr.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.645 [INFO][4191] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.645 [INFO][4191] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.645 [INFO][4191] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-414dr.gb1.brightbox.com' Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.667 [INFO][4191] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.687 [INFO][4191] ipam/ipam.go 394: Looking up existing affinities for host host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.703 [INFO][4191] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.706 [INFO][4191] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.711 [INFO][4191] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.711 [INFO][4191] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.713 [INFO][4191] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00 Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.731 [INFO][4191] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.746 [INFO][4191] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.130/26] block=192.168.127.128/26 handle="k8s-pod-network.49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.747 [INFO][4191] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.130/26] handle="k8s-pod-network.49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.747 [INFO][4191] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 13:37:13.824928 containerd[1649]: 2026-01-14 13:37:13.747 [INFO][4191] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.130/26] IPv6=[] ContainerID="49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" HandleID="k8s-pod-network.49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" Workload="srv--414dr.gb1.brightbox.com-k8s-whisker--79b478485b--99m4n-eth0" Jan 14 13:37:13.826051 containerd[1649]: 2026-01-14 13:37:13.758 [INFO][4159] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" Namespace="calico-system" Pod="whisker-79b478485b-99m4n" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-whisker--79b478485b--99m4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--414dr.gb1.brightbox.com-k8s-whisker--79b478485b--99m4n-eth0", GenerateName:"whisker-79b478485b-", Namespace:"calico-system", SelfLink:"", UID:"cd503a8a-4839-474d-9ad1-19916832d0a7", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 37, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79b478485b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-414dr.gb1.brightbox.com", ContainerID:"", Pod:"whisker-79b478485b-99m4n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.127.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8adbf2b151c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:37:13.826051 containerd[1649]: 2026-01-14 13:37:13.759 [INFO][4159] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.130/32] ContainerID="49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" Namespace="calico-system" Pod="whisker-79b478485b-99m4n" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-whisker--79b478485b--99m4n-eth0" Jan 14 13:37:13.826051 containerd[1649]: 2026-01-14 13:37:13.759 [INFO][4159] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8adbf2b151c ContainerID="49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" Namespace="calico-system" Pod="whisker-79b478485b-99m4n" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-whisker--79b478485b--99m4n-eth0" Jan 14 13:37:13.826051 containerd[1649]: 2026-01-14 13:37:13.776 [INFO][4159] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" Namespace="calico-system" Pod="whisker-79b478485b-99m4n" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-whisker--79b478485b--99m4n-eth0" Jan 14 13:37:13.826051 containerd[1649]: 2026-01-14 13:37:13.781 [INFO][4159] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" 
Namespace="calico-system" Pod="whisker-79b478485b-99m4n" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-whisker--79b478485b--99m4n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--414dr.gb1.brightbox.com-k8s-whisker--79b478485b--99m4n-eth0", GenerateName:"whisker-79b478485b-", Namespace:"calico-system", SelfLink:"", UID:"cd503a8a-4839-474d-9ad1-19916832d0a7", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 37, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79b478485b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-414dr.gb1.brightbox.com", ContainerID:"49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00", Pod:"whisker-79b478485b-99m4n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.127.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8adbf2b151c", MAC:"3e:f7:ca:8b:70:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:37:13.826051 containerd[1649]: 2026-01-14 13:37:13.815 [INFO][4159] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" Namespace="calico-system" Pod="whisker-79b478485b-99m4n" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-whisker--79b478485b--99m4n-eth0" Jan 14 13:37:13.922895 containerd[1649]: time="2026-01-14T13:37:13.922826641Z" level=info msg="connecting to shim 49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00" address="unix:///run/containerd/s/57739cda1da130d33a60a4d968c214afdb3d96f79f81c681c8590013a9242b33" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:37:13.997895 systemd[1]: Started cri-containerd-49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00.scope - libcontainer container 49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00. 
Jan 14 13:37:14.027914 systemd-networkd[1555]: cali9199088eb6b: Link UP Jan 14 13:37:14.031632 systemd-networkd[1555]: cali9199088eb6b: Gained carrier Jan 14 13:37:14.056000 audit: BPF prog-id=184 op=LOAD Jan 14 13:37:14.058000 audit: BPF prog-id=185 op=LOAD Jan 14 13:37:14.058000 audit[4295]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4283 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439616238313637616139346334346533303235323036653861653163 Jan 14 13:37:14.061000 audit: BPF prog-id=185 op=UNLOAD Jan 14 13:37:14.061000 audit[4295]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4283 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439616238313637616139346334346533303235323036653861653163 Jan 14 13:37:14.064000 audit: BPF prog-id=186 op=LOAD Jan 14 13:37:14.064000 audit[4295]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4283 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439616238313637616139346334346533303235323036653861653163 Jan 14 13:37:14.064000 audit: BPF prog-id=187 op=LOAD Jan 14 13:37:14.064000 audit[4295]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4283 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439616238313637616139346334346533303235323036653861653163 Jan 14 13:37:14.064000 audit: BPF prog-id=187 op=UNLOAD Jan 14 13:37:14.064000 audit[4295]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4283 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.064000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439616238313637616139346334346533303235323036653861653163 Jan 14 13:37:14.064000 audit: BPF prog-id=186 op=UNLOAD Jan 14 13:37:14.064000 audit[4295]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4283 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439616238313637616139346334346533303235323036653861653163 Jan 14 13:37:14.064000 audit: BPF prog-id=188 op=LOAD Jan 14 13:37:14.064000 audit[4295]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4283 pid=4295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.064000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439616238313637616139346334346533303235323036653861653163 Jan 14 13:37:14.109545 containerd[1649]: time="2026-01-14T13:37:14.109491379Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:14.119380 containerd[1649]: time="2026-01-14T13:37:14.119301334Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 13:37:14.119729 containerd[1649]: time="2026-01-14T13:37:14.119435041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:14.125707 kubelet[2966]: E0114 13:37:14.125659 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:37:14.126277 kubelet[2966]: E0114 13:37:14.125959 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:37:14.135079 kubelet[2966]: E0114 13:37:14.134617 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kg9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7v2t7_calico-system(1584f8ba-fd2c-4903-be8f-c6577809742f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:13.764 [INFO][4224] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:13.808 [INFO][4224] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--414dr.gb1.brightbox.com-k8s-calico--kube--controllers--b65bb7cd--xwm2s-eth0 calico-kube-controllers-b65bb7cd- calico-system 9af9fcdf-2905-4c21-b8a3-70ab543f6a40 814 0 2026-01-14 13:36:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b65bb7cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-414dr.gb1.brightbox.com calico-kube-controllers-b65bb7cd-xwm2s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9199088eb6b [] [] }} ContainerID="6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" Namespace="calico-system" Pod="calico-kube-controllers-b65bb7cd-xwm2s" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--kube--controllers--b65bb7cd--xwm2s-" Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:13.808 [INFO][4224] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" Namespace="calico-system" Pod="calico-kube-controllers-b65bb7cd-xwm2s" 
WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--kube--controllers--b65bb7cd--xwm2s-eth0" Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:13.942 [INFO][4258] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" HandleID="k8s-pod-network.6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" Workload="srv--414dr.gb1.brightbox.com-k8s-calico--kube--controllers--b65bb7cd--xwm2s-eth0" Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:13.943 [INFO][4258] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" HandleID="k8s-pod-network.6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" Workload="srv--414dr.gb1.brightbox.com-k8s-calico--kube--controllers--b65bb7cd--xwm2s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a7470), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-414dr.gb1.brightbox.com", "pod":"calico-kube-controllers-b65bb7cd-xwm2s", "timestamp":"2026-01-14 13:37:13.942857181 +0000 UTC"}, Hostname:"srv-414dr.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:13.943 [INFO][4258] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:13.943 [INFO][4258] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:13.943 [INFO][4258] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-414dr.gb1.brightbox.com' Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:13.953 [INFO][4258] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:13.961 [INFO][4258] ipam/ipam.go 394: Looking up existing affinities for host host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:13.968 [INFO][4258] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:13.976 [INFO][4258] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:13.981 [INFO][4258] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:13.981 [INFO][4258] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:13.985 [INFO][4258] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5 Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:13.993 [INFO][4258] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 
handle="k8s-pod-network.6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:14.006 [INFO][4258] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.131/26] block=192.168.127.128/26 handle="k8s-pod-network.6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:14.009 [INFO][4258] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.131/26] handle="k8s-pod-network.6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.150686 containerd[1649]: 2026-01-14 13:37:14.010 [INFO][4258] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 13:37:14.154523 containerd[1649]: 2026-01-14 13:37:14.010 [INFO][4258] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.131/26] IPv6=[] ContainerID="6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" HandleID="k8s-pod-network.6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" Workload="srv--414dr.gb1.brightbox.com-k8s-calico--kube--controllers--b65bb7cd--xwm2s-eth0" Jan 14 13:37:14.154523 containerd[1649]: 2026-01-14 13:37:14.020 [INFO][4224] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" Namespace="calico-system" Pod="calico-kube-controllers-b65bb7cd-xwm2s" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--kube--controllers--b65bb7cd--xwm2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--414dr.gb1.brightbox.com-k8s-calico--kube--controllers--b65bb7cd--xwm2s-eth0", GenerateName:"calico-kube-controllers-b65bb7cd-", Namespace:"calico-system", SelfLink:"", UID:"9af9fcdf-2905-4c21-b8a3-70ab543f6a40", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 36, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b65bb7cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-414dr.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-b65bb7cd-xwm2s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9199088eb6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:37:14.154523 containerd[1649]: 2026-01-14 13:37:14.020 [INFO][4224] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.131/32] ContainerID="6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" Namespace="calico-system" Pod="calico-kube-controllers-b65bb7cd-xwm2s" 
WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--kube--controllers--b65bb7cd--xwm2s-eth0" Jan 14 13:37:14.154523 containerd[1649]: 2026-01-14 13:37:14.020 [INFO][4224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9199088eb6b ContainerID="6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" Namespace="calico-system" Pod="calico-kube-controllers-b65bb7cd-xwm2s" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--kube--controllers--b65bb7cd--xwm2s-eth0" Jan 14 13:37:14.154523 containerd[1649]: 2026-01-14 13:37:14.038 [INFO][4224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" Namespace="calico-system" Pod="calico-kube-controllers-b65bb7cd-xwm2s" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--kube--controllers--b65bb7cd--xwm2s-eth0" Jan 14 13:37:14.155209 containerd[1649]: 2026-01-14 13:37:14.038 [INFO][4224] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" Namespace="calico-system" Pod="calico-kube-controllers-b65bb7cd-xwm2s" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--kube--controllers--b65bb7cd--xwm2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--414dr.gb1.brightbox.com-k8s-calico--kube--controllers--b65bb7cd--xwm2s-eth0", GenerateName:"calico-kube-controllers-b65bb7cd-", Namespace:"calico-system", SelfLink:"", UID:"9af9fcdf-2905-4c21-b8a3-70ab543f6a40", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 36, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b65bb7cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-414dr.gb1.brightbox.com", ContainerID:"6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5", Pod:"calico-kube-controllers-b65bb7cd-xwm2s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.127.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9199088eb6b", MAC:"a6:9d:e6:1b:86:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:37:14.155209 containerd[1649]: 2026-01-14 13:37:14.128 [INFO][4224] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" Namespace="calico-system" Pod="calico-kube-controllers-b65bb7cd-xwm2s" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--kube--controllers--b65bb7cd--xwm2s-eth0" Jan 14 13:37:14.164956 containerd[1649]: time="2026-01-14T13:37:14.164668021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 13:37:14.175361 systemd-networkd[1555]: 
cali1c8939eddbe: Link UP Jan 14 13:37:14.180806 systemd-networkd[1555]: cali1c8939eddbe: Gained carrier Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:13.793 [INFO][4217] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:13.839 [INFO][4217] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--tndpb-eth0 coredns-668d6bf9bc- kube-system f6b96f88-9628-4cd4-a240-d0aa195ee125 808 0 2026-01-14 13:36:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-414dr.gb1.brightbox.com coredns-668d6bf9bc-tndpb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1c8939eddbe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-tndpb" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--tndpb-" Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:13.839 [INFO][4217] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-tndpb" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--tndpb-eth0" Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:13.946 [INFO][4265] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" HandleID="k8s-pod-network.2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" Workload="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--tndpb-eth0" Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:13.947 [INFO][4265] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" HandleID="k8s-pod-network.2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" Workload="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--tndpb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003be4a0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-414dr.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-tndpb", "timestamp":"2026-01-14 13:37:13.946776408 +0000 UTC"}, Hostname:"srv-414dr.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:13.947 [INFO][4265] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:14.011 [INFO][4265] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
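A few entries above, the pull of ghcr.io/flatcar/calico/csi:v3.30.4 fails with a 404 from ghcr.io and kubelet reports ErrImagePull for the calico-csi container. A hedged sketch of checking such a tag directly against the registry (the standard OCI distribution token-then-manifest flow; the image reference is taken from the failing entries, while the anonymous token endpoint is an assumption about ghcr.io's setup):

    import json
    import urllib.error
    import urllib.request

    # Image reference copied from the failing PullImage entries above.
    registry, name, tag = "ghcr.io", "flatcar/calico/csi", "v3.30.4"

    def manifest_exists(registry: str, name: str, tag: str):
        # Anonymous pull token, then a HEAD on the manifest; a 404 here is the
        # same "not found" containerd reports above. The token URL is assumed.
        token_url = f"https://{registry}/token?scope=repository:{name}:pull"
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        req = urllib.request.Request(
            f"https://{registry}/v2/{name}/manifests/{tag}",
            headers={"Authorization": f"Bearer {token}",
                     "Accept": "application/vnd.oci.image.index.v1+json"},
            method="HEAD",
        )
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            return False if err.code == 404 else None

    print(manifest_exists(registry, name, tag))   # False while the tag is missing upstream
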
Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:14.013 [INFO][4265] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-414dr.gb1.brightbox.com' Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:14.057 [INFO][4265] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:14.076 [INFO][4265] ipam/ipam.go 394: Looking up existing affinities for host host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:14.086 [INFO][4265] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:14.088 [INFO][4265] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:14.094 [INFO][4265] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:14.094 [INFO][4265] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:14.097 [INFO][4265] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3 Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:14.110 [INFO][4265] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:14.139 [INFO][4265] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.132/26] block=192.168.127.128/26 handle="k8s-pod-network.2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:14.145 [INFO][4265] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.132/26] handle="k8s-pod-network.2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.231456 containerd[1649]: 2026-01-14 13:37:14.145 [INFO][4265] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
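The interleaving of the lock messages above is worth noting: request [4265] logs "About to acquire host-wide IPAM lock" at 13:37:13.947 but only "Acquired" at 13:37:14.011, immediately after [4258] logs "Released", so address allocations on the node are serialised by that lock. A generic illustration of such a host-wide critical section (not Calico's actual implementation) using an exclusive file lock:

    import fcntl
    import time

    # Generic sketch (not Calico's code): an exclusive lock file serialising
    # per-node allocations, which is why "Acquired" above only appears once the
    # previous request has logged "Released".
    def allocate_with_host_lock(lock_path="/tmp/ipam.lock"):
        with open(lock_path, "w") as lock_file:
            fcntl.flock(lock_file, fcntl.LOCK_EX)   # blocks until the current holder releases
            try:
                time.sleep(0.05)                    # stand-in for reading/writing the block
            finally:
                fcntl.flock(lock_file, fcntl.LOCK_UN)

    allocate_with_host_lock()
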
Jan 14 13:37:14.235407 containerd[1649]: 2026-01-14 13:37:14.145 [INFO][4265] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.132/26] IPv6=[] ContainerID="2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" HandleID="k8s-pod-network.2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" Workload="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--tndpb-eth0" Jan 14 13:37:14.235407 containerd[1649]: 2026-01-14 13:37:14.157 [INFO][4217] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-tndpb" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--tndpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--tndpb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f6b96f88-9628-4cd4-a240-d0aa195ee125", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 36, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-414dr.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-tndpb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1c8939eddbe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:37:14.235407 containerd[1649]: 2026-01-14 13:37:14.160 [INFO][4217] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.132/32] ContainerID="2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-tndpb" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--tndpb-eth0" Jan 14 13:37:14.235407 containerd[1649]: 2026-01-14 13:37:14.160 [INFO][4217] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c8939eddbe ContainerID="2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-tndpb" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--tndpb-eth0" Jan 14 13:37:14.235407 containerd[1649]: 2026-01-14 13:37:14.181 [INFO][4217] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-tndpb" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--tndpb-eth0" Jan 14 13:37:14.236194 containerd[1649]: 2026-01-14 13:37:14.187 [INFO][4217] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-tndpb" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--tndpb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--tndpb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f6b96f88-9628-4cd4-a240-d0aa195ee125", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 36, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-414dr.gb1.brightbox.com", ContainerID:"2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3", Pod:"coredns-668d6bf9bc-tndpb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1c8939eddbe", MAC:"da:05:76:2e:c5:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:37:14.236194 containerd[1649]: 2026-01-14 13:37:14.208 [INFO][4217] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" Namespace="kube-system" Pod="coredns-668d6bf9bc-tndpb" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--tndpb-eth0" Jan 14 13:37:14.278246 containerd[1649]: time="2026-01-14T13:37:14.277948535Z" level=info msg="connecting to shim 6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5" address="unix:///run/containerd/s/652e58d7a804a9b9cc6b39aad9add9254551d4a23265d8018b15201938568096" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:37:14.295163 containerd[1649]: time="2026-01-14T13:37:14.294995248Z" level=info msg="connecting to shim 2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3" address="unix:///run/containerd/s/31eed01f7246bfb7c33b4d5afb1c9aa4eeb28ab1f4ae008e5b44609cb431d7bf" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:37:14.300992 containerd[1649]: time="2026-01-14T13:37:14.300964407Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-79b478485b-99m4n,Uid:cd503a8a-4839-474d-9ad1-19916832d0a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"49ab8167aa94c44e3025206e8ae1c7bbcd028e6a41a8740ab3058c0fe8683d00\"" Jan 14 13:37:14.325105 systemd-networkd[1555]: cali822306054f5: Link UP Jan 14 13:37:14.328155 systemd-networkd[1555]: cali822306054f5: Gained carrier Jan 14 13:37:14.382475 systemd[1]: Started cri-containerd-6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5.scope - libcontainer container 6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5. Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:13.842 [INFO][4212] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:13.874 [INFO][4212] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--42gh4-eth0 calico-apiserver-556cb8cff8- calico-apiserver 23681fa6-27ca-4d7d-86fa-c674a7318b4d 816 0 2026-01-14 13:36:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:556cb8cff8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-414dr.gb1.brightbox.com calico-apiserver-556cb8cff8-42gh4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali822306054f5 [] [] }} ContainerID="cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" Namespace="calico-apiserver" Pod="calico-apiserver-556cb8cff8-42gh4" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--42gh4-" Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:13.874 [INFO][4212] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" Namespace="calico-apiserver" Pod="calico-apiserver-556cb8cff8-42gh4" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--42gh4-eth0" Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:13.996 [INFO][4274] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" HandleID="k8s-pod-network.cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" Workload="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--42gh4-eth0" Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:13.997 [INFO][4274] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" HandleID="k8s-pod-network.cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" Workload="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--42gh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043c360), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-414dr.gb1.brightbox.com", "pod":"calico-apiserver-556cb8cff8-42gh4", "timestamp":"2026-01-14 13:37:13.996262848 +0000 UTC"}, Hostname:"srv-414dr.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:13.997 [INFO][4274] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:14.145 [INFO][4274] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:14.146 [INFO][4274] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-414dr.gb1.brightbox.com' Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:14.188 [INFO][4274] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:14.221 [INFO][4274] ipam/ipam.go 394: Looking up existing affinities for host host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:14.238 [INFO][4274] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:14.244 [INFO][4274] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:14.250 [INFO][4274] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:14.250 [INFO][4274] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:14.254 [INFO][4274] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825 Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:14.265 [INFO][4274] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:14.294 [INFO][4274] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.133/26] block=192.168.127.128/26 handle="k8s-pod-network.cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:14.296 [INFO][4274] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.133/26] handle="k8s-pod-network.cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:14.397093 containerd[1649]: 2026-01-14 13:37:14.297 [INFO][4274] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
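[Editor's note: the IPAM entries above trace Calico's block-based assignment on this node — take the host-wide IPAM lock, confirm the node's affinity to the 192.168.127.128/26 block, load the block, hand out one free address (here 192.168.127.133), then write the block back to claim it and release the lock. The Go sketch below is a simplified, editor-added illustration of that sequence only; it is not Calico's actual code, and the pre-allocated ordinals and handle name are made up.]

    // Editor-added sketch, NOT Calico's implementation: mimics the lock ->
    // load affine block -> assign next free ordinal -> claim sequence shown
    // in the IPAM log entries above.
    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    type ipamBlock struct {
        cidr      *net.IPNet     // e.g. 192.168.127.128/26, as in the log
        allocated map[int]string // ordinal within the block -> handle ID
    }

    var hostWideLock sync.Mutex // stands in for the "host-wide IPAM lock"

    func assignFromBlock(b *ipamBlock, handle string) (net.IP, error) {
        hostWideLock.Lock()
        defer hostWideLock.Unlock()

        ones, bits := b.cidr.Mask.Size()
        size := 1 << (bits - ones) // a /26 holds 64 addresses
        base := b.cidr.IP.To4()

        for ord := 0; ord < size; ord++ {
            if _, taken := b.allocated[ord]; taken {
                continue
            }
            b.allocated[ord] = handle // claim the ordinal under the lock;
            // the caller would now write the block back to the datastore
            return net.IPv4(base[0], base[1], base[2], base[3]+byte(ord)), nil
        }
        return nil, fmt.Errorf("block %s is full", b.cidr)
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.127.128/26")
        // Hypothetical existing allocations; ordinals 4 and 5 mirror the
        // .132 (coredns) and .133 (apiserver) assignments seen in the log.
        b := &ipamBlock{cidr: cidr, allocated: map[int]string{0: "gateway", 4: "coredns", 5: "apiserver"}}
        ip, err := assignFromBlock(b, "example-handle")
        fmt.Println(ip, err) // next free ordinal is 1 -> 192.168.127.129
    }
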
Jan 14 13:37:14.402242 containerd[1649]: 2026-01-14 13:37:14.297 [INFO][4274] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.133/26] IPv6=[] ContainerID="cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" HandleID="k8s-pod-network.cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" Workload="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--42gh4-eth0" Jan 14 13:37:14.402242 containerd[1649]: 2026-01-14 13:37:14.305 [INFO][4212] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" Namespace="calico-apiserver" Pod="calico-apiserver-556cb8cff8-42gh4" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--42gh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--42gh4-eth0", GenerateName:"calico-apiserver-556cb8cff8-", Namespace:"calico-apiserver", SelfLink:"", UID:"23681fa6-27ca-4d7d-86fa-c674a7318b4d", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556cb8cff8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-414dr.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-556cb8cff8-42gh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali822306054f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:37:14.402242 containerd[1649]: 2026-01-14 13:37:14.306 [INFO][4212] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.133/32] ContainerID="cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" Namespace="calico-apiserver" Pod="calico-apiserver-556cb8cff8-42gh4" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--42gh4-eth0" Jan 14 13:37:14.402242 containerd[1649]: 2026-01-14 13:37:14.306 [INFO][4212] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali822306054f5 ContainerID="cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" Namespace="calico-apiserver" Pod="calico-apiserver-556cb8cff8-42gh4" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--42gh4-eth0" Jan 14 13:37:14.402242 containerd[1649]: 2026-01-14 13:37:14.343 [INFO][4212] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" Namespace="calico-apiserver" Pod="calico-apiserver-556cb8cff8-42gh4" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--42gh4-eth0" Jan 14 13:37:14.404517 containerd[1649]: 2026-01-14 
13:37:14.344 [INFO][4212] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" Namespace="calico-apiserver" Pod="calico-apiserver-556cb8cff8-42gh4" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--42gh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--42gh4-eth0", GenerateName:"calico-apiserver-556cb8cff8-", Namespace:"calico-apiserver", SelfLink:"", UID:"23681fa6-27ca-4d7d-86fa-c674a7318b4d", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556cb8cff8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-414dr.gb1.brightbox.com", ContainerID:"cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825", Pod:"calico-apiserver-556cb8cff8-42gh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali822306054f5", MAC:"1e:2e:3f:ed:fd:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:37:14.404517 containerd[1649]: 2026-01-14 13:37:14.388 [INFO][4212] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" Namespace="calico-apiserver" Pod="calico-apiserver-556cb8cff8-42gh4" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--42gh4-eth0" Jan 14 13:37:14.439000 audit: BPF prog-id=189 op=LOAD Jan 14 13:37:14.440000 audit: BPF prog-id=190 op=LOAD Jan 14 13:37:14.440000 audit[4399]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4376 pid=4399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662643664373735326365653231306363376339383963323331326336 Jan 14 13:37:14.440000 audit: BPF prog-id=190 op=UNLOAD Jan 14 13:37:14.440000 audit[4399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4376 pid=4399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.440000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662643664373735326365653231306363376339383963323331326336 Jan 14 13:37:14.440000 audit: BPF prog-id=191 op=LOAD Jan 14 13:37:14.440000 audit[4399]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4376 pid=4399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662643664373735326365653231306363376339383963323331326336 Jan 14 13:37:14.440000 audit: BPF prog-id=192 op=LOAD Jan 14 13:37:14.440000 audit[4399]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4376 pid=4399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662643664373735326365653231306363376339383963323331326336 Jan 14 13:37:14.440000 audit: BPF prog-id=192 op=UNLOAD Jan 14 13:37:14.440000 audit[4399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4376 pid=4399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662643664373735326365653231306363376339383963323331326336 Jan 14 13:37:14.441000 audit: BPF prog-id=191 op=UNLOAD Jan 14 13:37:14.441000 audit[4399]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4376 pid=4399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662643664373735326365653231306363376339383963323331326336 Jan 14 13:37:14.441000 audit: BPF prog-id=193 op=LOAD Jan 14 13:37:14.441000 audit[4399]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4376 pid=4399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.441000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3662643664373735326365653231306363376339383963323331326336 Jan 14 13:37:14.473955 systemd[1]: Started cri-containerd-2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3.scope - libcontainer container 2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3. Jan 14 13:37:14.497573 containerd[1649]: time="2026-01-14T13:37:14.497507077Z" level=info msg="connecting to shim cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825" address="unix:///run/containerd/s/82084a7a634119b85079d373ecaf938b7a926b15f2a54131139dad9c3eb00184" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:37:14.510000 audit: BPF prog-id=194 op=LOAD Jan 14 13:37:14.513000 audit: BPF prog-id=195 op=LOAD Jan 14 13:37:14.513000 audit[4420]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4382 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613064643561616531343435616563343262626137663239666266 Jan 14 13:37:14.513000 audit: BPF prog-id=195 op=UNLOAD Jan 14 13:37:14.513000 audit[4420]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4382 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613064643561616531343435616563343262626137663239666266 Jan 14 13:37:14.513000 audit: BPF prog-id=196 op=LOAD Jan 14 13:37:14.513000 audit[4420]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4382 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613064643561616531343435616563343262626137663239666266 Jan 14 13:37:14.513000 audit: BPF prog-id=197 op=LOAD Jan 14 13:37:14.513000 audit[4420]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4382 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.513000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613064643561616531343435616563343262626137663239666266 Jan 14 13:37:14.514000 audit: BPF prog-id=197 op=UNLOAD Jan 14 13:37:14.514000 audit[4420]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4382 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613064643561616531343435616563343262626137663239666266 Jan 14 13:37:14.514000 audit: BPF prog-id=196 op=UNLOAD Jan 14 13:37:14.514000 audit[4420]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4382 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613064643561616531343435616563343262626137663239666266 Jan 14 13:37:14.514000 audit: BPF prog-id=198 op=LOAD Jan 14 13:37:14.514000 audit[4420]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4382 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265613064643561616531343435616563343262626137663239666266 Jan 14 13:37:14.519210 containerd[1649]: time="2026-01-14T13:37:14.513760343Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:14.519210 containerd[1649]: time="2026-01-14T13:37:14.516327422Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 13:37:14.519210 containerd[1649]: time="2026-01-14T13:37:14.516422983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:14.522633 kubelet[2966]: E0114 13:37:14.518637 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:37:14.522633 kubelet[2966]: E0114 13:37:14.520918 2966 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:37:14.525073 containerd[1649]: time="2026-01-14T13:37:14.521399626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 13:37:14.525172 kubelet[2966]: E0114 13:37:14.524760 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kg9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7v2t7_calico-system(1584f8ba-fd2c-4903-be8f-c6577809742f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:14.526809 kubelet[2966]: E0114 13:37:14.526673 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:37:14.578891 
systemd[1]: Started cri-containerd-cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825.scope - libcontainer container cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825. Jan 14 13:37:14.627336 containerd[1649]: time="2026-01-14T13:37:14.627017227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tndpb,Uid:f6b96f88-9628-4cd4-a240-d0aa195ee125,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3\"" Jan 14 13:37:14.635742 containerd[1649]: time="2026-01-14T13:37:14.635356951Z" level=info msg="CreateContainer within sandbox \"2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 13:37:14.654252 kubelet[2966]: I0114 13:37:14.654204 2966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63b80aee-dca1-4c6e-95c1-ebc4781e5795" path="/var/lib/kubelet/pods/63b80aee-dca1-4c6e-95c1-ebc4781e5795/volumes" Jan 14 13:37:14.675682 containerd[1649]: time="2026-01-14T13:37:14.674823742Z" level=info msg="Container f3e73c755f07a76564c32176d8eb5841b82e3fa6c6edaa75ffb3326504de88ac: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:37:14.694373 containerd[1649]: time="2026-01-14T13:37:14.693916211Z" level=info msg="CreateContainer within sandbox \"2ea0dd5aae1445aec42bba7f29fbfcc28a6dce5268609243fda7130d1b5cc6c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f3e73c755f07a76564c32176d8eb5841b82e3fa6c6edaa75ffb3326504de88ac\"" Jan 14 13:37:14.699636 containerd[1649]: time="2026-01-14T13:37:14.697530437Z" level=info msg="StartContainer for \"f3e73c755f07a76564c32176d8eb5841b82e3fa6c6edaa75ffb3326504de88ac\"" Jan 14 13:37:14.701046 containerd[1649]: time="2026-01-14T13:37:14.700969695Z" level=info msg="connecting to shim f3e73c755f07a76564c32176d8eb5841b82e3fa6c6edaa75ffb3326504de88ac" address="unix:///run/containerd/s/31eed01f7246bfb7c33b4d5afb1c9aa4eeb28ab1f4ae008e5b44609cb431d7bf" protocol=ttrpc version=3 Jan 14 13:37:14.747023 systemd[1]: Started cri-containerd-f3e73c755f07a76564c32176d8eb5841b82e3fa6c6edaa75ffb3326504de88ac.scope - libcontainer container f3e73c755f07a76564c32176d8eb5841b82e3fa6c6edaa75ffb3326504de88ac. 
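[Editor's note: the "connecting to shim", RunPodSandbox, CreateContainer and StartContainer entries around this point are containerd's CRI flow for the coredns pod. The Go sketch below shows roughly the same pull -> create -> start sequence through containerd's public Go client; it is an editor-added illustration that assumes the v1 "github.com/containerd/containerd" module path, and the socket, namespace, image reference and IDs are placeholders, not values taken from this host.]

    // Editor-added sketch of the pull -> create -> start sequence that the
    // containerd entries in this log correspond to.
    package main

    import (
        "context"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI pods live in the k8s.io namespace, as the shim entries above
        // ("namespace=k8s.io") show.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // A missing tag fails here with "not found" — the same class of error
        // the ghcr.io 404 / ErrImagePull entries in this log report.
        image, err := client.Pull(ctx, "docker.io/library/busybox:latest", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        container, err := client.NewContainer(ctx, "demo",
            containerd.WithNewSnapshot("demo-snapshot", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)),
        )
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        // NewTask spawns the runtime shim; Start runs the container — the two
        // steps recorded as "connecting to shim" and "StartContainer" above.
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)

        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
    }
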
Jan 14 13:37:14.756000 audit: BPF prog-id=199 op=LOAD Jan 14 13:37:14.758000 audit: BPF prog-id=200 op=LOAD Jan 14 13:37:14.758000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4467 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364303964356562623934376330643733336161356132376664383833 Jan 14 13:37:14.759000 audit: BPF prog-id=200 op=UNLOAD Jan 14 13:37:14.759000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4467 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364303964356562623934376330643733336161356132376664383833 Jan 14 13:37:14.760000 audit: BPF prog-id=201 op=LOAD Jan 14 13:37:14.760000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4467 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.760000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364303964356562623934376330643733336161356132376664383833 Jan 14 13:37:14.761000 audit: BPF prog-id=202 op=LOAD Jan 14 13:37:14.761000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4467 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.761000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364303964356562623934376330643733336161356132376664383833 Jan 14 13:37:14.762000 audit: BPF prog-id=202 op=UNLOAD Jan 14 13:37:14.762000 audit[4481]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4467 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364303964356562623934376330643733336161356132376664383833 Jan 14 13:37:14.762000 audit: BPF prog-id=201 op=UNLOAD Jan 14 13:37:14.762000 audit[4481]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4467 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364303964356562623934376330643733336161356132376664383833 Jan 14 13:37:14.762000 audit: BPF prog-id=203 op=LOAD Jan 14 13:37:14.762000 audit[4481]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4467 pid=4481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.762000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364303964356562623934376330643733336161356132376664383833 Jan 14 13:37:14.785000 audit: BPF prog-id=204 op=LOAD Jan 14 13:37:14.789000 audit: BPF prog-id=205 op=LOAD Jan 14 13:37:14.789000 audit[4524]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4382 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.789000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633653733633735356630376137363536346333323137366438656235 Jan 14 13:37:14.790000 audit: BPF prog-id=205 op=UNLOAD Jan 14 13:37:14.790000 audit[4524]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4382 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633653733633735356630376137363536346333323137366438656235 Jan 14 13:37:14.792000 audit: BPF prog-id=206 op=LOAD Jan 14 13:37:14.792000 audit[4524]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4382 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633653733633735356630376137363536346333323137366438656235 Jan 14 13:37:14.792000 audit: BPF prog-id=207 op=LOAD Jan 14 13:37:14.792000 audit[4524]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4382 pid=4524 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633653733633735356630376137363536346333323137366438656235 Jan 14 13:37:14.792000 audit: BPF prog-id=207 op=UNLOAD Jan 14 13:37:14.792000 audit[4524]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4382 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633653733633735356630376137363536346333323137366438656235 Jan 14 13:37:14.792000 audit: BPF prog-id=206 op=UNLOAD Jan 14 13:37:14.792000 audit[4524]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4382 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633653733633735356630376137363536346333323137366438656235 Jan 14 13:37:14.792000 audit: BPF prog-id=208 op=LOAD Jan 14 13:37:14.792000 audit[4524]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4382 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:14.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633653733633735356630376137363536346333323137366438656235 Jan 14 13:37:14.845475 containerd[1649]: time="2026-01-14T13:37:14.845327421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b65bb7cd-xwm2s,Uid:9af9fcdf-2905-4c21-b8a3-70ab543f6a40,Namespace:calico-system,Attempt:0,} returns sandbox id \"6bd6d7752cee210cc7c989c2312c6181662511014b200133d323e8b8997016e5\"" Jan 14 13:37:14.856105 containerd[1649]: time="2026-01-14T13:37:14.856065382Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:14.857839 containerd[1649]: time="2026-01-14T13:37:14.857733428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 13:37:14.857839 containerd[1649]: time="2026-01-14T13:37:14.857814404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 
13:37:14.858555 kubelet[2966]: E0114 13:37:14.858105 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:37:14.858555 kubelet[2966]: E0114 13:37:14.858399 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:37:14.859462 kubelet[2966]: E0114 13:37:14.859391 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:29af2dd3753940a0809dbeff5a3da59e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qnqp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b478485b-99m4n_calico-system(cd503a8a-4839-474d-9ad1-19916832d0a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:14.860503 containerd[1649]: time="2026-01-14T13:37:14.860269022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 13:37:14.869397 containerd[1649]: time="2026-01-14T13:37:14.869358004Z" level=info msg="StartContainer for \"f3e73c755f07a76564c32176d8eb5841b82e3fa6c6edaa75ffb3326504de88ac\" returns successfully" Jan 14 13:37:14.920628 containerd[1649]: time="2026-01-14T13:37:14.919610981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556cb8cff8-42gh4,Uid:23681fa6-27ca-4d7d-86fa-c674a7318b4d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cd09d5ebb947c0d733aa5a27fd88322ae45c5ab215b644e552cdea5909522825\"" Jan 14 13:37:15.005850 systemd-networkd[1555]: caliea5123a2690: Gained IPv6LL Jan 14 13:37:15.102944 kubelet[2966]: E0114 13:37:15.102024 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:37:15.140287 kubelet[2966]: I0114 13:37:15.140217 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tndpb" podStartSLOduration=49.140185199 podStartE2EDuration="49.140185199s" podCreationTimestamp="2026-01-14 13:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 13:37:15.138547944 +0000 UTC m=+54.712621545" watchObservedRunningTime="2026-01-14 13:37:15.140185199 +0000 UTC m=+54.714258793" Jan 14 13:37:15.195754 containerd[1649]: time="2026-01-14T13:37:15.195342459Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:15.198101 containerd[1649]: time="2026-01-14T13:37:15.197839488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 13:37:15.198493 containerd[1649]: time="2026-01-14T13:37:15.198263980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:15.216384 kubelet[2966]: E0114 13:37:15.216315 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:37:15.217906 kubelet[2966]: E0114 13:37:15.217368 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:37:15.230535 kubelet[2966]: E0114 13:37:15.229871 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fmsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b65bb7cd-xwm2s_calico-system(9af9fcdf-2905-4c21-b8a3-70ab543f6a40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:15.234875 containerd[1649]: time="2026-01-14T13:37:15.233865107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 13:37:15.234997 kubelet[2966]: E0114 13:37:15.234022 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b65bb7cd-xwm2s" podUID="9af9fcdf-2905-4c21-b8a3-70ab543f6a40" Jan 14 13:37:15.267000 audit[4627]: NETFILTER_CFG table=filter:121 
family=2 entries=20 op=nft_register_rule pid=4627 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:37:15.267000 audit[4627]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffca9c02790 a2=0 a3=7ffca9c0277c items=0 ppid=3099 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.267000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:37:15.272000 audit[4627]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4627 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:37:15.272000 audit[4627]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffca9c02790 a2=0 a3=0 items=0 ppid=3099 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.272000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:37:15.325886 systemd-networkd[1555]: cali9199088eb6b: Gained IPv6LL Jan 14 13:37:15.343313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1604664366.mount: Deactivated successfully. Jan 14 13:37:15.517757 systemd-networkd[1555]: cali822306054f5: Gained IPv6LL Jan 14 13:37:15.573963 containerd[1649]: time="2026-01-14T13:37:15.573885199Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:15.575306 containerd[1649]: time="2026-01-14T13:37:15.574743156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 13:37:15.575306 containerd[1649]: time="2026-01-14T13:37:15.574878408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:15.575418 kubelet[2966]: E0114 13:37:15.575100 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:37:15.575418 kubelet[2966]: E0114 13:37:15.575193 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:37:15.577366 kubelet[2966]: E0114 13:37:15.575487 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnqp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b478485b-99m4n_calico-system(cd503a8a-4839-474d-9ad1-19916832d0a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:15.577366 kubelet[2966]: E0114 13:37:15.576636 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b478485b-99m4n" podUID="cd503a8a-4839-474d-9ad1-19916832d0a7" Jan 14 13:37:15.578907 containerd[1649]: time="2026-01-14T13:37:15.576741430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:37:15.582155 systemd-networkd[1555]: cali1c8939eddbe: Gained IPv6LL Jan 14 13:37:15.735000 audit: BPF prog-id=209 op=LOAD Jan 14 13:37:15.735000 audit[4663]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffd87b9420 a2=98 a3=1fffffffffffffff items=0 ppid=4547 pid=4663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.735000 audit: 
PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 13:37:15.735000 audit: BPF prog-id=209 op=UNLOAD Jan 14 13:37:15.735000 audit[4663]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffd87b93f0 a3=0 items=0 ppid=4547 pid=4663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.735000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 13:37:15.735000 audit: BPF prog-id=210 op=LOAD Jan 14 13:37:15.735000 audit[4663]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffd87b9300 a2=94 a3=3 items=0 ppid=4547 pid=4663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.735000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 13:37:15.735000 audit: BPF prog-id=210 op=UNLOAD Jan 14 13:37:15.735000 audit[4663]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffd87b9300 a2=94 a3=3 items=0 ppid=4547 pid=4663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.735000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 13:37:15.735000 audit: BPF prog-id=211 op=LOAD Jan 14 13:37:15.735000 audit[4663]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffd87b9340 a2=94 a3=7fffd87b9520 items=0 ppid=4547 pid=4663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.735000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 13:37:15.735000 audit: BPF prog-id=211 op=UNLOAD Jan 14 13:37:15.735000 audit[4663]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffd87b9340 a2=94 a3=7fffd87b9520 items=0 ppid=4547 pid=4663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.735000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 13:37:15.739000 audit: BPF prog-id=212 op=LOAD Jan 14 13:37:15.739000 audit[4664]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffde23d7760 a2=98 a3=3 items=0 ppid=4547 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.739000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:37:15.739000 audit: BPF prog-id=212 op=UNLOAD Jan 14 13:37:15.739000 audit[4664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffde23d7730 a3=0 items=0 ppid=4547 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.739000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:37:15.741000 audit: BPF prog-id=213 op=LOAD Jan 14 13:37:15.741000 audit[4664]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffde23d7550 a2=94 a3=54428f items=0 ppid=4547 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.741000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:37:15.741000 audit: BPF prog-id=213 op=UNLOAD Jan 14 13:37:15.741000 audit[4664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffde23d7550 a2=94 a3=54428f items=0 ppid=4547 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.741000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:37:15.741000 audit: BPF prog-id=214 op=LOAD Jan 14 13:37:15.741000 audit[4664]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffde23d7580 a2=94 a3=2 items=0 ppid=4547 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.741000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:37:15.741000 audit: BPF prog-id=214 op=UNLOAD Jan 14 13:37:15.741000 audit[4664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffde23d7580 a2=0 a3=2 items=0 ppid=4547 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.741000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:37:15.837810 systemd-networkd[1555]: cali8adbf2b151c: Gained IPv6LL Jan 14 13:37:15.888446 containerd[1649]: time="2026-01-14T13:37:15.888360253Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:15.889594 containerd[1649]: time="2026-01-14T13:37:15.889489055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:37:15.889847 containerd[1649]: time="2026-01-14T13:37:15.889554722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:15.889970 kubelet[2966]: E0114 13:37:15.889835 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:37:15.889970 kubelet[2966]: E0114 13:37:15.889897 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:37:15.890316 kubelet[2966]: E0114 13:37:15.890076 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6v8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556cb8cff8-42gh4_calico-apiserver(23681fa6-27ca-4d7d-86fa-c674a7318b4d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
logger="UnhandledError" Jan 14 13:37:15.892003 kubelet[2966]: E0114 13:37:15.891967 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-42gh4" podUID="23681fa6-27ca-4d7d-86fa-c674a7318b4d" Jan 14 13:37:15.976000 audit: BPF prog-id=215 op=LOAD Jan 14 13:37:15.976000 audit[4664]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffde23d7440 a2=94 a3=1 items=0 ppid=4547 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.976000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:37:15.976000 audit: BPF prog-id=215 op=UNLOAD Jan 14 13:37:15.976000 audit[4664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffde23d7440 a2=94 a3=1 items=0 ppid=4547 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.976000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:37:15.992000 audit: BPF prog-id=216 op=LOAD Jan 14 13:37:15.992000 audit[4664]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffde23d7430 a2=94 a3=4 items=0 ppid=4547 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.992000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:37:15.992000 audit: BPF prog-id=216 op=UNLOAD Jan 14 13:37:15.992000 audit[4664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffde23d7430 a2=0 a3=4 items=0 ppid=4547 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.992000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:37:15.993000 audit: BPF prog-id=217 op=LOAD Jan 14 13:37:15.993000 audit[4664]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffde23d7290 a2=94 a3=5 items=0 ppid=4547 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.993000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:37:15.993000 audit: BPF prog-id=217 op=UNLOAD Jan 14 13:37:15.993000 audit[4664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffde23d7290 a2=0 a3=5 items=0 ppid=4547 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.993000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:37:15.993000 audit: BPF prog-id=218 op=LOAD Jan 14 13:37:15.993000 audit[4664]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 
a1=7ffde23d74b0 a2=94 a3=6 items=0 ppid=4547 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.993000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:37:15.994000 audit: BPF prog-id=218 op=UNLOAD Jan 14 13:37:15.994000 audit[4664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffde23d74b0 a2=0 a3=6 items=0 ppid=4547 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.994000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:37:15.994000 audit: BPF prog-id=219 op=LOAD Jan 14 13:37:15.994000 audit[4664]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffde23d6c60 a2=94 a3=88 items=0 ppid=4547 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.994000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:37:15.994000 audit: BPF prog-id=220 op=LOAD Jan 14 13:37:15.994000 audit[4664]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffde23d6ae0 a2=94 a3=2 items=0 ppid=4547 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.994000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:37:15.994000 audit: BPF prog-id=220 op=UNLOAD Jan 14 13:37:15.994000 audit[4664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffde23d6b10 a2=0 a3=7ffde23d6c10 items=0 ppid=4547 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.994000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:37:15.995000 audit: BPF prog-id=219 op=UNLOAD Jan 14 13:37:15.995000 audit[4664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3a5fbd10 a2=0 a3=135c7f05249d510a items=0 ppid=4547 pid=4664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:15.995000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 13:37:16.021000 audit: BPF prog-id=221 op=LOAD Jan 14 13:37:16.021000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb17111c0 a2=98 a3=1999999999999999 items=0 ppid=4547 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.021000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 13:37:16.021000 audit: BPF prog-id=221 op=UNLOAD Jan 14 13:37:16.021000 
audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fffb1711190 a3=0 items=0 ppid=4547 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.021000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 13:37:16.021000 audit: BPF prog-id=222 op=LOAD Jan 14 13:37:16.021000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb17110a0 a2=94 a3=ffff items=0 ppid=4547 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.021000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 13:37:16.021000 audit: BPF prog-id=222 op=UNLOAD Jan 14 13:37:16.021000 audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffb17110a0 a2=94 a3=ffff items=0 ppid=4547 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.021000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 13:37:16.021000 audit: BPF prog-id=223 op=LOAD Jan 14 13:37:16.021000 audit[4669]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb17110e0 a2=94 a3=7fffb17112c0 items=0 ppid=4547 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.021000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 13:37:16.021000 audit: BPF prog-id=223 op=UNLOAD Jan 14 13:37:16.021000 audit[4669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fffb17110e0 a2=94 a3=7fffb17112c0 items=0 ppid=4547 pid=4669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.021000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 13:37:16.099476 kubelet[2966]: E0114 13:37:16.098485 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b65bb7cd-xwm2s" podUID="9af9fcdf-2905-4c21-b8a3-70ab543f6a40" Jan 14 13:37:16.102631 kubelet[2966]: E0114 13:37:16.100787 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-42gh4" podUID="23681fa6-27ca-4d7d-86fa-c674a7318b4d" Jan 14 13:37:16.105613 kubelet[2966]: E0114 13:37:16.105330 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b478485b-99m4n" podUID="cd503a8a-4839-474d-9ad1-19916832d0a7" Jan 14 13:37:16.174945 systemd-networkd[1555]: vxlan.calico: Link UP Jan 14 13:37:16.175234 systemd-networkd[1555]: vxlan.calico: Gained carrier Jan 14 13:37:16.245000 audit[4688]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4688 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:37:16.245000 audit[4688]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffef6625420 a2=0 a3=7ffef662540c items=0 ppid=3099 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.245000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:37:16.250000 audit[4688]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4688 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:37:16.250000 audit[4688]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffef6625420 a2=0 a3=0 items=0 ppid=3099 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.250000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:37:16.265000 audit: BPF prog-id=224 
op=LOAD Jan 14 13:37:16.265000 audit[4697]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb5fbb500 a2=98 a3=20 items=0 ppid=4547 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.265000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:37:16.266000 audit: BPF prog-id=224 op=UNLOAD Jan 14 13:37:16.266000 audit[4697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffeb5fbb4d0 a3=0 items=0 ppid=4547 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.266000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:37:16.267000 audit: BPF prog-id=225 op=LOAD Jan 14 13:37:16.267000 audit[4697]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb5fbb310 a2=94 a3=54428f items=0 ppid=4547 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:37:16.267000 audit: BPF prog-id=225 op=UNLOAD Jan 14 13:37:16.267000 audit[4697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeb5fbb310 a2=94 a3=54428f items=0 ppid=4547 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:37:16.267000 audit: BPF prog-id=226 op=LOAD Jan 14 13:37:16.267000 audit[4697]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb5fbb340 a2=94 a3=2 items=0 ppid=4547 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:37:16.267000 audit: BPF prog-id=226 op=UNLOAD Jan 14 13:37:16.267000 audit[4697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffeb5fbb340 a2=0 a3=2 items=0 ppid=4547 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 13:37:16.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:37:16.267000 audit: BPF prog-id=227 op=LOAD Jan 14 13:37:16.267000 audit[4697]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb5fbb0f0 a2=94 a3=4 items=0 ppid=4547 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:37:16.267000 audit: BPF prog-id=227 op=UNLOAD Jan 14 13:37:16.267000 audit[4697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeb5fbb0f0 a2=94 a3=4 items=0 ppid=4547 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:37:16.267000 audit: BPF prog-id=228 op=LOAD Jan 14 13:37:16.267000 audit[4697]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb5fbb1f0 a2=94 a3=7ffeb5fbb370 items=0 ppid=4547 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:37:16.267000 audit: BPF prog-id=228 op=UNLOAD Jan 14 13:37:16.267000 audit[4697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeb5fbb1f0 a2=0 a3=7ffeb5fbb370 items=0 ppid=4547 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.267000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:37:16.270000 audit[4696]: NETFILTER_CFG table=filter:125 family=2 entries=17 op=nft_register_rule pid=4696 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:37:16.270000 audit[4696]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffde7854e90 a2=0 a3=7ffde7854e7c items=0 ppid=3099 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.270000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
Jan 14 13:37:16.275000 audit: BPF prog-id=229 op=LOAD Jan 14 13:37:16.275000 audit[4697]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb5fba920 a2=94 a3=2 items=0 ppid=4547 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.275000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:37:16.275000 audit: BPF prog-id=229 op=UNLOAD Jan 14 13:37:16.275000 audit[4697]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffeb5fba920 a2=0 a3=2 items=0 ppid=4547 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.275000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:37:16.277000 audit: BPF prog-id=230 op=LOAD Jan 14 13:37:16.277000 audit[4697]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb5fbaa20 a2=94 a3=30 items=0 ppid=4547 pid=4697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 13:37:16.277000 audit[4696]: NETFILTER_CFG table=nat:126 family=2 entries=35 op=nft_register_chain pid=4696 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:37:16.277000 audit[4696]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffde7854e90 a2=0 a3=7ffde7854e7c items=0 ppid=3099 pid=4696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.277000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:37:16.296000 audit: BPF prog-id=231 op=LOAD Jan 14 13:37:16.296000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc73ac31e0 a2=98 a3=0 items=0 ppid=4547 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.296000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:37:16.297000 audit: BPF prog-id=231 op=UNLOAD Jan 14 13:37:16.297000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc73ac31b0 a3=0 items=0 ppid=4547 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.297000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:37:16.297000 audit: BPF prog-id=232 op=LOAD Jan 14 13:37:16.297000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc73ac2fd0 a2=94 a3=54428f items=0 ppid=4547 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.297000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:37:16.297000 audit: BPF prog-id=232 op=UNLOAD Jan 14 13:37:16.297000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc73ac2fd0 a2=94 a3=54428f items=0 ppid=4547 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.297000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:37:16.297000 audit: BPF prog-id=233 op=LOAD Jan 14 13:37:16.297000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc73ac3000 a2=94 a3=2 items=0 ppid=4547 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.297000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:37:16.297000 audit: BPF prog-id=233 op=UNLOAD Jan 14 13:37:16.297000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc73ac3000 a2=0 a3=2 items=0 ppid=4547 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.297000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:37:16.539000 audit: BPF prog-id=234 op=LOAD Jan 14 13:37:16.539000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc73ac2ec0 a2=94 a3=1 items=0 ppid=4547 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.539000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:37:16.540000 audit: BPF prog-id=234 op=UNLOAD Jan 14 13:37:16.540000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffc73ac2ec0 a2=94 a3=1 items=0 
ppid=4547 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.540000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:37:16.554000 audit: BPF prog-id=235 op=LOAD Jan 14 13:37:16.554000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc73ac2eb0 a2=94 a3=4 items=0 ppid=4547 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.554000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:37:16.554000 audit: BPF prog-id=235 op=UNLOAD Jan 14 13:37:16.554000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc73ac2eb0 a2=0 a3=4 items=0 ppid=4547 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.554000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:37:16.555000 audit: BPF prog-id=236 op=LOAD Jan 14 13:37:16.555000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc73ac2d10 a2=94 a3=5 items=0 ppid=4547 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.555000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:37:16.555000 audit: BPF prog-id=236 op=UNLOAD Jan 14 13:37:16.555000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffc73ac2d10 a2=0 a3=5 items=0 ppid=4547 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.555000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:37:16.555000 audit: BPF prog-id=237 op=LOAD Jan 14 13:37:16.555000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc73ac2f30 a2=94 a3=6 items=0 ppid=4547 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.555000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:37:16.555000 audit: BPF prog-id=237 op=UNLOAD Jan 
14 13:37:16.555000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffc73ac2f30 a2=0 a3=6 items=0 ppid=4547 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.555000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:37:16.556000 audit: BPF prog-id=238 op=LOAD Jan 14 13:37:16.556000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffc73ac26e0 a2=94 a3=88 items=0 ppid=4547 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.556000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:37:16.556000 audit: BPF prog-id=239 op=LOAD Jan 14 13:37:16.556000 audit[4701]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffc73ac2560 a2=94 a3=2 items=0 ppid=4547 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.556000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:37:16.557000 audit: BPF prog-id=239 op=UNLOAD Jan 14 13:37:16.557000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffc73ac2590 a2=0 a3=7ffc73ac2690 items=0 ppid=4547 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.557000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:37:16.558000 audit: BPF prog-id=238 op=UNLOAD Jan 14 13:37:16.558000 audit[4701]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=2feed10 a2=0 a3=dc3bc089eacf09e2 items=0 ppid=4547 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.558000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 13:37:16.565000 audit: BPF prog-id=230 op=UNLOAD Jan 14 13:37:16.565000 audit[4547]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c000804ec0 a2=0 a3=0 items=0 ppid=4449 pid=4547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.565000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 13:37:16.662000 audit[4737]: 
NETFILTER_CFG table=mangle:127 family=2 entries=16 op=nft_register_chain pid=4737 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:37:16.662000 audit[4737]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff03eb2df0 a2=0 a3=7fff03eb2ddc items=0 ppid=4547 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.662000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:37:16.669000 audit[4736]: NETFILTER_CFG table=raw:128 family=2 entries=21 op=nft_register_chain pid=4736 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:37:16.669000 audit[4736]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc0db50720 a2=0 a3=7ffc0db5070c items=0 ppid=4547 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.669000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:37:16.673000 audit[4741]: NETFILTER_CFG table=nat:129 family=2 entries=15 op=nft_register_chain pid=4741 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:37:16.673000 audit[4741]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffddb91f810 a2=0 a3=7ffddb91f7fc items=0 ppid=4547 pid=4741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.673000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:37:16.683000 audit[4740]: NETFILTER_CFG table=filter:130 family=2 entries=232 op=nft_register_chain pid=4740 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:37:16.683000 audit[4740]: SYSCALL arch=c000003e syscall=46 success=yes exit=134280 a0=3 a1=7ffd8e51d200 a2=0 a3=7ffd8e51d1ec items=0 ppid=4547 pid=4740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:16.683000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:37:18.205930 systemd-networkd[1555]: vxlan.calico: Gained IPv6LL Jan 14 13:37:23.646437 containerd[1649]: time="2026-01-14T13:37:23.646383022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6s6kg,Uid:7031cc44-2bc3-4894-9b05-65dab6c01c28,Namespace:kube-system,Attempt:0,}" Jan 14 13:37:23.809681 systemd-networkd[1555]: cali206ba7469ef: Link UP Jan 14 13:37:23.810072 systemd-networkd[1555]: cali206ba7469ef: Gained carrier Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.704 [INFO][4766] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6s6kg-eth0 coredns-668d6bf9bc- kube-system 7031cc44-2bc3-4894-9b05-65dab6c01c28 815 0 2026-01-14 13:36:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-414dr.gb1.brightbox.com coredns-668d6bf9bc-6s6kg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali206ba7469ef [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" Namespace="kube-system" Pod="coredns-668d6bf9bc-6s6kg" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6s6kg-" Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.704 [INFO][4766] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" Namespace="kube-system" Pod="coredns-668d6bf9bc-6s6kg" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6s6kg-eth0" Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.747 [INFO][4777] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" HandleID="k8s-pod-network.97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" Workload="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6s6kg-eth0" Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.747 [INFO][4777] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" HandleID="k8s-pod-network.97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" Workload="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6s6kg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-414dr.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-6s6kg", "timestamp":"2026-01-14 13:37:23.747472692 +0000 UTC"}, Hostname:"srv-414dr.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.748 [INFO][4777] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.748 [INFO][4777] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.748 [INFO][4777] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-414dr.gb1.brightbox.com' Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.758 [INFO][4777] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.764 [INFO][4777] ipam/ipam.go 394: Looking up existing affinities for host host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.770 [INFO][4777] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.773 [INFO][4777] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.777 [INFO][4777] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.777 [INFO][4777] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.780 [INFO][4777] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055 Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.787 [INFO][4777] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.797 [INFO][4777] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.134/26] block=192.168.127.128/26 handle="k8s-pod-network.97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.797 [INFO][4777] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.134/26] handle="k8s-pod-network.97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.797 [INFO][4777] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 13:37:23.846705 containerd[1649]: 2026-01-14 13:37:23.797 [INFO][4777] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.134/26] IPv6=[] ContainerID="97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" HandleID="k8s-pod-network.97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" Workload="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6s6kg-eth0" Jan 14 13:37:23.847945 containerd[1649]: 2026-01-14 13:37:23.803 [INFO][4766] cni-plugin/k8s.go 418: Populated endpoint ContainerID="97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" Namespace="kube-system" Pod="coredns-668d6bf9bc-6s6kg" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6s6kg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6s6kg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7031cc44-2bc3-4894-9b05-65dab6c01c28", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 36, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-414dr.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-6s6kg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali206ba7469ef", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:37:23.847945 containerd[1649]: 2026-01-14 13:37:23.804 [INFO][4766] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.134/32] ContainerID="97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" Namespace="kube-system" Pod="coredns-668d6bf9bc-6s6kg" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6s6kg-eth0" Jan 14 13:37:23.847945 containerd[1649]: 2026-01-14 13:37:23.804 [INFO][4766] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali206ba7469ef ContainerID="97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" Namespace="kube-system" Pod="coredns-668d6bf9bc-6s6kg" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6s6kg-eth0" Jan 14 13:37:23.847945 containerd[1649]: 2026-01-14 13:37:23.808 [INFO][4766] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-6s6kg" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6s6kg-eth0" Jan 14 13:37:23.849855 containerd[1649]: 2026-01-14 13:37:23.810 [INFO][4766] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" Namespace="kube-system" Pod="coredns-668d6bf9bc-6s6kg" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6s6kg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6s6kg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7031cc44-2bc3-4894-9b05-65dab6c01c28", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 36, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-414dr.gb1.brightbox.com", ContainerID:"97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055", Pod:"coredns-668d6bf9bc-6s6kg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.127.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali206ba7469ef", MAC:"5a:68:99:d5:0e:d6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:37:23.849855 containerd[1649]: 2026-01-14 13:37:23.838 [INFO][4766] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" Namespace="kube-system" Pod="coredns-668d6bf9bc-6s6kg" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-coredns--668d6bf9bc--6s6kg-eth0" Jan 14 13:37:23.918688 containerd[1649]: time="2026-01-14T13:37:23.917544282Z" level=info msg="connecting to shim 97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055" address="unix:///run/containerd/s/340d0dd7e339e3b92920072a0bbe07a5ea6d62799af3cc2247762af0c85ccbac" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:37:23.984843 systemd[1]: Started cri-containerd-97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055.scope - libcontainer container 97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055. 
Jan 14 13:37:24.045895 kernel: kauditd_printk_skb: 353 callbacks suppressed Jan 14 13:37:24.046063 kernel: audit: type=1334 audit(1768397844.040:704): prog-id=240 op=LOAD Jan 14 13:37:24.040000 audit: BPF prog-id=240 op=LOAD Jan 14 13:37:24.040000 audit: BPF prog-id=241 op=LOAD Jan 14 13:37:24.048628 kernel: audit: type=1334 audit(1768397844.040:705): prog-id=241 op=LOAD Jan 14 13:37:24.040000 audit[4811]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4800 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:24.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937323636663437333866343561306335613635326132366636373130 Jan 14 13:37:24.062510 kernel: audit: type=1300 audit(1768397844.040:705): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4800 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:24.062600 kernel: audit: type=1327 audit(1768397844.040:705): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937323636663437333866343561306335613635326132366636373130 Jan 14 13:37:24.040000 audit: BPF prog-id=241 op=UNLOAD Jan 14 13:37:24.040000 audit[4811]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4800 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:24.072595 kernel: audit: type=1334 audit(1768397844.040:706): prog-id=241 op=UNLOAD Jan 14 13:37:24.072680 kernel: audit: type=1300 audit(1768397844.040:706): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4800 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:24.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937323636663437333866343561306335613635326132366636373130 Jan 14 13:37:24.077777 kernel: audit: type=1327 audit(1768397844.040:706): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937323636663437333866343561306335613635326132366636373130 Jan 14 13:37:24.043000 audit: BPF prog-id=242 op=LOAD Jan 14 13:37:24.082284 kernel: audit: type=1334 audit(1768397844.043:707): prog-id=242 op=LOAD Jan 14 13:37:24.082359 kernel: audit: type=1300 audit(1768397844.043:707): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4800 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:24.043000 audit[4811]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4800 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:24.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937323636663437333866343561306335613635326132366636373130 Jan 14 13:37:24.089219 kernel: audit: type=1327 audit(1768397844.043:707): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937323636663437333866343561306335613635326132366636373130 Jan 14 13:37:24.043000 audit: BPF prog-id=243 op=LOAD Jan 14 13:37:24.043000 audit[4811]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4800 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:24.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937323636663437333866343561306335613635326132366636373130 Jan 14 13:37:24.043000 audit: BPF prog-id=243 op=UNLOAD Jan 14 13:37:24.043000 audit[4811]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4800 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:24.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937323636663437333866343561306335613635326132366636373130 Jan 14 13:37:24.043000 audit: BPF prog-id=242 op=UNLOAD Jan 14 13:37:24.043000 audit[4811]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4800 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:24.043000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937323636663437333866343561306335613635326132366636373130 Jan 14 13:37:24.043000 audit: BPF prog-id=244 op=LOAD Jan 14 13:37:24.043000 audit[4811]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4800 pid=4811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:24.043000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3937323636663437333866343561306335613635326132366636373130 Jan 14 13:37:24.067000 audit[4836]: NETFILTER_CFG table=filter:131 family=2 entries=44 op=nft_register_chain pid=4836 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:37:24.067000 audit[4836]: SYSCALL arch=c000003e syscall=46 success=yes exit=21516 a0=3 a1=7fff86cb4580 a2=0 a3=7fff86cb456c items=0 ppid=4547 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:24.067000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:37:24.127509 containerd[1649]: time="2026-01-14T13:37:24.127399341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6s6kg,Uid:7031cc44-2bc3-4894-9b05-65dab6c01c28,Namespace:kube-system,Attempt:0,} returns sandbox id \"97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055\"" Jan 14 13:37:24.132980 containerd[1649]: time="2026-01-14T13:37:24.132804508Z" level=info msg="CreateContainer within sandbox \"97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 13:37:24.181831 containerd[1649]: time="2026-01-14T13:37:24.180100195Z" level=info msg="Container 6f97befc6da0f67ab66e580e14357ec9a714be5f9be30d13289c3c3b176c9c2a: CDI devices from CRI Config.CDIDevices: []" Jan 14 13:37:24.184136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1328239959.mount: Deactivated successfully. Jan 14 13:37:24.192185 containerd[1649]: time="2026-01-14T13:37:24.192130916Z" level=info msg="CreateContainer within sandbox \"97266f4738f45a0c5a652a26f67107e49a5e0e294992d2e3827c358631d54055\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6f97befc6da0f67ab66e580e14357ec9a714be5f9be30d13289c3c3b176c9c2a\"" Jan 14 13:37:24.193066 containerd[1649]: time="2026-01-14T13:37:24.192881997Z" level=info msg="StartContainer for \"6f97befc6da0f67ab66e580e14357ec9a714be5f9be30d13289c3c3b176c9c2a\"" Jan 14 13:37:24.195772 containerd[1649]: time="2026-01-14T13:37:24.195722442Z" level=info msg="connecting to shim 6f97befc6da0f67ab66e580e14357ec9a714be5f9be30d13289c3c3b176c9c2a" address="unix:///run/containerd/s/340d0dd7e339e3b92920072a0bbe07a5ea6d62799af3cc2247762af0c85ccbac" protocol=ttrpc version=3 Jan 14 13:37:24.225839 systemd[1]: Started cri-containerd-6f97befc6da0f67ab66e580e14357ec9a714be5f9be30d13289c3c3b176c9c2a.scope - libcontainer container 6f97befc6da0f67ab66e580e14357ec9a714be5f9be30d13289c3c3b176c9c2a. 
Jan 14 13:37:24.244000 audit: BPF prog-id=245 op=LOAD Jan 14 13:37:24.246000 audit: BPF prog-id=246 op=LOAD Jan 14 13:37:24.246000 audit[4843]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4800 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:24.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666393762656663366461306636376162363665353830653134333537 Jan 14 13:37:24.246000 audit: BPF prog-id=246 op=UNLOAD Jan 14 13:37:24.246000 audit[4843]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4800 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:24.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666393762656663366461306636376162363665353830653134333537 Jan 14 13:37:24.247000 audit: BPF prog-id=247 op=LOAD Jan 14 13:37:24.247000 audit[4843]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4800 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:24.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666393762656663366461306636376162363665353830653134333537 Jan 14 13:37:24.247000 audit: BPF prog-id=248 op=LOAD Jan 14 13:37:24.247000 audit[4843]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4800 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:24.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666393762656663366461306636376162363665353830653134333537 Jan 14 13:37:24.247000 audit: BPF prog-id=248 op=UNLOAD Jan 14 13:37:24.247000 audit[4843]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4800 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:24.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666393762656663366461306636376162363665353830653134333537 Jan 14 13:37:24.247000 audit: BPF prog-id=247 op=UNLOAD Jan 14 13:37:24.247000 audit[4843]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4800 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:24.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666393762656663366461306636376162363665353830653134333537 Jan 14 13:37:24.247000 audit: BPF prog-id=249 op=LOAD Jan 14 13:37:24.247000 audit[4843]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4800 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:24.247000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666393762656663366461306636376162363665353830653134333537 Jan 14 13:37:24.275834 containerd[1649]: time="2026-01-14T13:37:24.275779731Z" level=info msg="StartContainer for \"6f97befc6da0f67ab66e580e14357ec9a714be5f9be30d13289c3c3b176c9c2a\" returns successfully" Jan 14 13:37:24.641155 containerd[1649]: time="2026-01-14T13:37:24.641095790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556cb8cff8-69s5c,Uid:09be4418-7a52-4e16-b65e-453c324deb2d,Namespace:calico-apiserver,Attempt:0,}" Jan 14 13:37:24.641590 containerd[1649]: time="2026-01-14T13:37:24.641097138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wtc4j,Uid:55d7c70a-6e6b-4527-8616-3cbcdf2d3394,Namespace:calico-system,Attempt:0,}" Jan 14 13:37:24.877158 systemd-networkd[1555]: cali0a8bde6a086: Link UP Jan 14 13:37:24.878849 systemd-networkd[1555]: cali0a8bde6a086: Gained carrier Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.711 [INFO][4878] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--414dr.gb1.brightbox.com-k8s-goldmane--666569f655--wtc4j-eth0 goldmane-666569f655- calico-system 55d7c70a-6e6b-4527-8616-3cbcdf2d3394 817 0 2026-01-14 13:36:43 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-414dr.gb1.brightbox.com goldmane-666569f655-wtc4j eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0a8bde6a086 [] [] }} ContainerID="f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" Namespace="calico-system" Pod="goldmane-666569f655-wtc4j" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-goldmane--666569f655--wtc4j-" Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.711 [INFO][4878] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" Namespace="calico-system" Pod="goldmane-666569f655-wtc4j" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-goldmane--666569f655--wtc4j-eth0" Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.788 [INFO][4902] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" HandleID="k8s-pod-network.f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" Workload="srv--414dr.gb1.brightbox.com-k8s-goldmane--666569f655--wtc4j-eth0" Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.789 [INFO][4902] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" HandleID="k8s-pod-network.f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" Workload="srv--414dr.gb1.brightbox.com-k8s-goldmane--666569f655--wtc4j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb5e0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-414dr.gb1.brightbox.com", "pod":"goldmane-666569f655-wtc4j", "timestamp":"2026-01-14 13:37:24.788875068 +0000 UTC"}, Hostname:"srv-414dr.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.789 [INFO][4902] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.789 [INFO][4902] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.789 [INFO][4902] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-414dr.gb1.brightbox.com' Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.804 [INFO][4902] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.811 [INFO][4902] ipam/ipam.go 394: Looking up existing affinities for host host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.817 [INFO][4902] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.819 [INFO][4902] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.822 [INFO][4902] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.822 [INFO][4902] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.824 [INFO][4902] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182 Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.837 [INFO][4902] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.859 [INFO][4902] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.135/26] block=192.168.127.128/26 
handle="k8s-pod-network.f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.859 [INFO][4902] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.135/26] handle="k8s-pod-network.f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.859 [INFO][4902] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 13:37:24.905222 containerd[1649]: 2026-01-14 13:37:24.859 [INFO][4902] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.135/26] IPv6=[] ContainerID="f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" HandleID="k8s-pod-network.f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" Workload="srv--414dr.gb1.brightbox.com-k8s-goldmane--666569f655--wtc4j-eth0" Jan 14 13:37:24.909232 containerd[1649]: 2026-01-14 13:37:24.864 [INFO][4878] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" Namespace="calico-system" Pod="goldmane-666569f655-wtc4j" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-goldmane--666569f655--wtc4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--414dr.gb1.brightbox.com-k8s-goldmane--666569f655--wtc4j-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"55d7c70a-6e6b-4527-8616-3cbcdf2d3394", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 36, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-414dr.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-wtc4j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.127.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0a8bde6a086", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:37:24.909232 containerd[1649]: 2026-01-14 13:37:24.866 [INFO][4878] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.135/32] ContainerID="f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" Namespace="calico-system" Pod="goldmane-666569f655-wtc4j" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-goldmane--666569f655--wtc4j-eth0" Jan 14 13:37:24.909232 containerd[1649]: 2026-01-14 13:37:24.867 [INFO][4878] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a8bde6a086 ContainerID="f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" Namespace="calico-system" Pod="goldmane-666569f655-wtc4j" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-goldmane--666569f655--wtc4j-eth0" Jan 14 13:37:24.909232 containerd[1649]: 2026-01-14 13:37:24.879 
[INFO][4878] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" Namespace="calico-system" Pod="goldmane-666569f655-wtc4j" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-goldmane--666569f655--wtc4j-eth0" Jan 14 13:37:24.909232 containerd[1649]: 2026-01-14 13:37:24.880 [INFO][4878] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" Namespace="calico-system" Pod="goldmane-666569f655-wtc4j" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-goldmane--666569f655--wtc4j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--414dr.gb1.brightbox.com-k8s-goldmane--666569f655--wtc4j-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"55d7c70a-6e6b-4527-8616-3cbcdf2d3394", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 36, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-414dr.gb1.brightbox.com", ContainerID:"f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182", Pod:"goldmane-666569f655-wtc4j", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.127.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0a8bde6a086", MAC:"ee:55:4a:e8:39:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:37:24.909232 containerd[1649]: 2026-01-14 13:37:24.893 [INFO][4878] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" Namespace="calico-system" Pod="goldmane-666569f655-wtc4j" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-goldmane--666569f655--wtc4j-eth0" Jan 14 13:37:24.987993 containerd[1649]: time="2026-01-14T13:37:24.987739720Z" level=info msg="connecting to shim f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182" address="unix:///run/containerd/s/48ff9c55f343045dc5c75af30b50007ce086e272b92eef7169d24eb181fe4a55" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:37:25.000000 audit[4938]: NETFILTER_CFG table=filter:132 family=2 entries=60 op=nft_register_chain pid=4938 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:37:25.000000 audit[4938]: SYSCALL arch=c000003e syscall=46 success=yes exit=29916 a0=3 a1=7fff71354be0 a2=0 a3=7fff71354bcc items=0 ppid=4547 pid=4938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.000000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:37:25.024605 systemd-networkd[1555]: calibbd1dae0828: Link UP Jan 14 13:37:25.027035 systemd-networkd[1555]: calibbd1dae0828: Gained carrier Jan 14 13:37:25.057842 systemd[1]: Started cri-containerd-f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182.scope - libcontainer container f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182. Jan 14 13:37:25.065327 containerd[1649]: 2026-01-14 13:37:24.732 [INFO][4876] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--69s5c-eth0 calico-apiserver-556cb8cff8- calico-apiserver 09be4418-7a52-4e16-b65e-453c324deb2d 818 0 2026-01-14 13:36:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:556cb8cff8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-414dr.gb1.brightbox.com calico-apiserver-556cb8cff8-69s5c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibbd1dae0828 [] [] }} ContainerID="2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" Namespace="calico-apiserver" Pod="calico-apiserver-556cb8cff8-69s5c" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--69s5c-" Jan 14 13:37:25.065327 containerd[1649]: 2026-01-14 13:37:24.733 [INFO][4876] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" Namespace="calico-apiserver" Pod="calico-apiserver-556cb8cff8-69s5c" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--69s5c-eth0" Jan 14 13:37:25.065327 containerd[1649]: 2026-01-14 13:37:24.789 [INFO][4907] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" HandleID="k8s-pod-network.2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" Workload="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--69s5c-eth0" Jan 14 13:37:25.065327 containerd[1649]: 2026-01-14 13:37:24.790 [INFO][4907] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" HandleID="k8s-pod-network.2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" Workload="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--69s5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf800), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-414dr.gb1.brightbox.com", "pod":"calico-apiserver-556cb8cff8-69s5c", "timestamp":"2026-01-14 13:37:24.789042594 +0000 UTC"}, Hostname:"srv-414dr.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 13:37:25.065327 containerd[1649]: 2026-01-14 13:37:24.790 [INFO][4907] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 13:37:25.065327 containerd[1649]: 2026-01-14 13:37:24.859 [INFO][4907] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 13:37:25.065327 containerd[1649]: 2026-01-14 13:37:24.859 [INFO][4907] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-414dr.gb1.brightbox.com' Jan 14 13:37:25.065327 containerd[1649]: 2026-01-14 13:37:24.905 [INFO][4907] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:25.065327 containerd[1649]: 2026-01-14 13:37:24.918 [INFO][4907] ipam/ipam.go 394: Looking up existing affinities for host host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:25.065327 containerd[1649]: 2026-01-14 13:37:24.929 [INFO][4907] ipam/ipam.go 511: Trying affinity for 192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:25.065327 containerd[1649]: 2026-01-14 13:37:24.933 [INFO][4907] ipam/ipam.go 158: Attempting to load block cidr=192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:25.065327 containerd[1649]: 2026-01-14 13:37:24.947 [INFO][4907] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.127.128/26 host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:25.065327 containerd[1649]: 2026-01-14 13:37:24.947 [INFO][4907] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.127.128/26 handle="k8s-pod-network.2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:25.065327 containerd[1649]: 2026-01-14 13:37:24.955 [INFO][4907] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259 Jan 14 13:37:25.065327 containerd[1649]: 2026-01-14 13:37:24.970 [INFO][4907] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.127.128/26 handle="k8s-pod-network.2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:25.065327 containerd[1649]: 2026-01-14 13:37:24.994 [INFO][4907] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.127.136/26] block=192.168.127.128/26 handle="k8s-pod-network.2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:25.065327 containerd[1649]: 2026-01-14 13:37:24.995 [INFO][4907] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.127.136/26] handle="k8s-pod-network.2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" host="srv-414dr.gb1.brightbox.com" Jan 14 13:37:25.065327 containerd[1649]: 2026-01-14 13:37:24.995 [INFO][4907] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 13:37:25.069150 containerd[1649]: 2026-01-14 13:37:24.995 [INFO][4907] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.127.136/26] IPv6=[] ContainerID="2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" HandleID="k8s-pod-network.2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" Workload="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--69s5c-eth0" Jan 14 13:37:25.069150 containerd[1649]: 2026-01-14 13:37:25.001 [INFO][4876] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" Namespace="calico-apiserver" Pod="calico-apiserver-556cb8cff8-69s5c" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--69s5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--69s5c-eth0", GenerateName:"calico-apiserver-556cb8cff8-", Namespace:"calico-apiserver", SelfLink:"", UID:"09be4418-7a52-4e16-b65e-453c324deb2d", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556cb8cff8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-414dr.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-556cb8cff8-69s5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbd1dae0828", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:37:25.069150 containerd[1649]: 2026-01-14 13:37:25.003 [INFO][4876] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.127.136/32] ContainerID="2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" Namespace="calico-apiserver" Pod="calico-apiserver-556cb8cff8-69s5c" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--69s5c-eth0" Jan 14 13:37:25.069150 containerd[1649]: 2026-01-14 13:37:25.003 [INFO][4876] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibbd1dae0828 ContainerID="2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" Namespace="calico-apiserver" Pod="calico-apiserver-556cb8cff8-69s5c" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--69s5c-eth0" Jan 14 13:37:25.069150 containerd[1649]: 2026-01-14 13:37:25.028 [INFO][4876] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" Namespace="calico-apiserver" Pod="calico-apiserver-556cb8cff8-69s5c" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--69s5c-eth0" Jan 14 13:37:25.069921 containerd[1649]: 2026-01-14 
13:37:25.029 [INFO][4876] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" Namespace="calico-apiserver" Pod="calico-apiserver-556cb8cff8-69s5c" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--69s5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--69s5c-eth0", GenerateName:"calico-apiserver-556cb8cff8-", Namespace:"calico-apiserver", SelfLink:"", UID:"09be4418-7a52-4e16-b65e-453c324deb2d", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 13, 36, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556cb8cff8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-414dr.gb1.brightbox.com", ContainerID:"2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259", Pod:"calico-apiserver-556cb8cff8-69s5c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.127.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibbd1dae0828", MAC:"52:38:27:86:eb:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 13:37:25.069921 containerd[1649]: 2026-01-14 13:37:25.062 [INFO][4876] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" Namespace="calico-apiserver" Pod="calico-apiserver-556cb8cff8-69s5c" WorkloadEndpoint="srv--414dr.gb1.brightbox.com-k8s-calico--apiserver--556cb8cff8--69s5c-eth0" Jan 14 13:37:25.105000 audit: BPF prog-id=250 op=LOAD Jan 14 13:37:25.108000 audit: BPF prog-id=251 op=LOAD Jan 14 13:37:25.108000 audit[4951]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=4937 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631343264646430643234323962666263393839353765386234343766 Jan 14 13:37:25.109000 audit: BPF prog-id=251 op=UNLOAD Jan 14 13:37:25.109000 audit[4951]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4937 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.109000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631343264646430643234323962666263393839353765386234343766 Jan 14 13:37:25.109000 audit: BPF prog-id=252 op=LOAD Jan 14 13:37:25.109000 audit[4951]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=4937 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631343264646430643234323962666263393839353765386234343766 Jan 14 13:37:25.110000 audit: BPF prog-id=253 op=LOAD Jan 14 13:37:25.110000 audit[4951]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=4937 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631343264646430643234323962666263393839353765386234343766 Jan 14 13:37:25.111000 audit: BPF prog-id=253 op=UNLOAD Jan 14 13:37:25.111000 audit[4951]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4937 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631343264646430643234323962666263393839353765386234343766 Jan 14 13:37:25.111000 audit: BPF prog-id=252 op=UNLOAD Jan 14 13:37:25.111000 audit[4951]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4937 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631343264646430643234323962666263393839353765386234343766 Jan 14 13:37:25.111000 audit: BPF prog-id=254 op=LOAD Jan 14 13:37:25.111000 audit[4951]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=4937 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.111000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631343264646430643234323962666263393839353765386234343766 Jan 14 13:37:25.118486 containerd[1649]: time="2026-01-14T13:37:25.118395809Z" level=info msg="connecting to shim 2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259" address="unix:///run/containerd/s/b551e814eda63b0fc172c6440696859ade2c6de6c6c5b00ca5b14aa6f9875276" namespace=k8s.io protocol=ttrpc version=3 Jan 14 13:37:25.118000 audit[4982]: NETFILTER_CFG table=filter:133 family=2 entries=57 op=nft_register_chain pid=4982 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 13:37:25.118000 audit[4982]: SYSCALL arch=c000003e syscall=46 success=yes exit=27812 a0=3 a1=7ffffae58320 a2=0 a3=7ffffae5830c items=0 ppid=4547 pid=4982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.118000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 13:37:25.205229 systemd[1]: Started cri-containerd-2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259.scope - libcontainer container 2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259. Jan 14 13:37:25.210367 kubelet[2966]: I0114 13:37:25.210305 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6s6kg" podStartSLOduration=59.210279069 podStartE2EDuration="59.210279069s" podCreationTimestamp="2026-01-14 13:36:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 13:37:25.181812403 +0000 UTC m=+64.755886027" watchObservedRunningTime="2026-01-14 13:37:25.210279069 +0000 UTC m=+64.784352650" Jan 14 13:37:25.254000 audit[5020]: NETFILTER_CFG table=filter:134 family=2 entries=14 op=nft_register_rule pid=5020 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:37:25.254000 audit[5020]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffed5a8eea0 a2=0 a3=7ffed5a8ee8c items=0 ppid=3099 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.254000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:37:25.260000 audit[5020]: NETFILTER_CFG table=nat:135 family=2 entries=44 op=nft_register_rule pid=5020 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:37:25.260000 audit[5020]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffed5a8eea0 a2=0 a3=7ffed5a8ee8c items=0 ppid=3099 pid=5020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.260000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:37:25.271520 containerd[1649]: 
time="2026-01-14T13:37:25.271471216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wtc4j,Uid:55d7c70a-6e6b-4527-8616-3cbcdf2d3394,Namespace:calico-system,Attempt:0,} returns sandbox id \"f142ddd0d2429bfbc98957e8b447f87b456fe8007b0ba9324143130d3eda4182\"" Jan 14 13:37:25.276151 containerd[1649]: time="2026-01-14T13:37:25.276119171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 13:37:25.284000 audit: BPF prog-id=255 op=LOAD Jan 14 13:37:25.284000 audit: BPF prog-id=256 op=LOAD Jan 14 13:37:25.284000 audit[4999]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4988 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263383336306336326336626139623438313833623661303463303437 Jan 14 13:37:25.284000 audit: BPF prog-id=256 op=UNLOAD Jan 14 13:37:25.284000 audit[4999]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4988 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263383336306336326336626139623438313833623661303463303437 Jan 14 13:37:25.284000 audit: BPF prog-id=257 op=LOAD Jan 14 13:37:25.284000 audit[4999]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4988 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263383336306336326336626139623438313833623661303463303437 Jan 14 13:37:25.284000 audit: BPF prog-id=258 op=LOAD Jan 14 13:37:25.284000 audit[4999]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4988 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263383336306336326336626139623438313833623661303463303437 Jan 14 13:37:25.284000 audit: BPF prog-id=258 op=UNLOAD Jan 14 13:37:25.284000 audit[4999]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4988 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263383336306336326336626139623438313833623661303463303437 Jan 14 13:37:25.284000 audit: BPF prog-id=257 op=UNLOAD Jan 14 13:37:25.284000 audit[4999]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4988 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263383336306336326336626139623438313833623661303463303437 Jan 14 13:37:25.284000 audit: BPF prog-id=259 op=LOAD Jan 14 13:37:25.284000 audit[4999]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4988 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.284000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263383336306336326336626139623438313833623661303463303437 Jan 14 13:37:25.294000 audit[5027]: NETFILTER_CFG table=filter:136 family=2 entries=14 op=nft_register_rule pid=5027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:37:25.294000 audit[5027]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff15b50ba0 a2=0 a3=7fff15b50b8c items=0 ppid=3099 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.294000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:37:25.309799 systemd-networkd[1555]: cali206ba7469ef: Gained IPv6LL Jan 14 13:37:25.307000 audit[5027]: NETFILTER_CFG table=nat:137 family=2 entries=56 op=nft_register_chain pid=5027 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:37:25.307000 audit[5027]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7fff15b50ba0 a2=0 a3=7fff15b50b8c items=0 ppid=3099 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:25.307000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:37:25.377156 containerd[1649]: time="2026-01-14T13:37:25.377082557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556cb8cff8-69s5c,Uid:09be4418-7a52-4e16-b65e-453c324deb2d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2c8360c62c6ba9b48183b6a04c04714e8e203f6e6ee236677901886d4df69259\"" Jan 14 13:37:25.593052 containerd[1649]: 
time="2026-01-14T13:37:25.592749851Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:25.594219 containerd[1649]: time="2026-01-14T13:37:25.594074241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:25.594219 containerd[1649]: time="2026-01-14T13:37:25.594128894Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 13:37:25.594646 kubelet[2966]: E0114 13:37:25.594598 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:37:25.595077 kubelet[2966]: E0114 13:37:25.594673 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:37:25.595077 kubelet[2966]: E0114 13:37:25.594976 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wtc4j_calico-system(55d7c70a-6e6b-4527-8616-3cbcdf2d3394): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:25.596064 containerd[1649]: time="2026-01-14T13:37:25.596016803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:37:25.596982 kubelet[2966]: E0114 13:37:25.596937 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wtc4j" podUID="55d7c70a-6e6b-4527-8616-3cbcdf2d3394" Jan 14 13:37:25.910384 containerd[1649]: time="2026-01-14T13:37:25.910246196Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:25.912224 containerd[1649]: time="2026-01-14T13:37:25.912132768Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:37:25.912224 containerd[1649]: time="2026-01-14T13:37:25.912186100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:25.912511 kubelet[2966]: E0114 13:37:25.912444 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:37:25.913079 kubelet[2966]: E0114 13:37:25.912520 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:37:25.913079 kubelet[2966]: E0114 13:37:25.912748 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qhl2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556cb8cff8-69s5c_calico-apiserver(09be4418-7a52-4e16-b65e-453c324deb2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:25.914480 kubelet[2966]: E0114 13:37:25.914407 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" podUID="09be4418-7a52-4e16-b65e-453c324deb2d" Jan 14 13:37:26.137532 kubelet[2966]: E0114 13:37:26.137431 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" podUID="09be4418-7a52-4e16-b65e-453c324deb2d" Jan 14 13:37:26.141170 kubelet[2966]: E0114 13:37:26.141098 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wtc4j" podUID="55d7c70a-6e6b-4527-8616-3cbcdf2d3394" Jan 14 13:37:26.141965 systemd-networkd[1555]: calibbd1dae0828: Gained IPv6LL Jan 14 13:37:26.333813 systemd-networkd[1555]: cali0a8bde6a086: Gained IPv6LL Jan 14 13:37:26.337000 audit[5036]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=5036 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:37:26.337000 audit[5036]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff5924e690 a2=0 a3=7fff5924e67c items=0 ppid=3099 pid=5036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:26.337000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:37:26.344000 audit[5036]: NETFILTER_CFG table=nat:139 family=2 entries=20 op=nft_register_rule pid=5036 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:37:26.344000 audit[5036]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fff5924e690 a2=0 a3=7fff5924e67c items=0 ppid=3099 pid=5036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:26.344000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:37:27.145618 kubelet[2966]: E0114 13:37:27.145550 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wtc4j" podUID="55d7c70a-6e6b-4527-8616-3cbcdf2d3394" Jan 14 13:37:27.147901 kubelet[2966]: E0114 13:37:27.146067 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" podUID="09be4418-7a52-4e16-b65e-453c324deb2d" Jan 14 13:37:27.641613 containerd[1649]: time="2026-01-14T13:37:27.641255047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 13:37:27.957389 containerd[1649]: time="2026-01-14T13:37:27.957070128Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:27.958453 containerd[1649]: time="2026-01-14T13:37:27.958335040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 13:37:27.958453 containerd[1649]: time="2026-01-14T13:37:27.958414261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:27.958729 kubelet[2966]: E0114 13:37:27.958633 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:37:27.958729 kubelet[2966]: E0114 13:37:27.958697 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:37:27.958897 kubelet[2966]: E0114 13:37:27.958832 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:29af2dd3753940a0809dbeff5a3da59e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qnqp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b478485b-99m4n_calico-system(cd503a8a-4839-474d-9ad1-19916832d0a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:27.961317 containerd[1649]: time="2026-01-14T13:37:27.961278063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 13:37:28.347534 containerd[1649]: time="2026-01-14T13:37:28.347230773Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:28.349309 containerd[1649]: time="2026-01-14T13:37:28.349163390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 13:37:28.349309 
containerd[1649]: time="2026-01-14T13:37:28.349262059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:28.349498 kubelet[2966]: E0114 13:37:28.349437 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:37:28.350675 kubelet[2966]: E0114 13:37:28.349498 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:37:28.350675 kubelet[2966]: E0114 13:37:28.349662 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnqp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b478485b-99m4n_calico-system(cd503a8a-4839-474d-9ad1-19916832d0a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:28.351080 kubelet[2966]: E0114 13:37:28.350885 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b478485b-99m4n" podUID="cd503a8a-4839-474d-9ad1-19916832d0a7" Jan 14 13:37:28.641837 containerd[1649]: time="2026-01-14T13:37:28.641473631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 13:37:28.957003 containerd[1649]: time="2026-01-14T13:37:28.956510669Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:28.958389 containerd[1649]: time="2026-01-14T13:37:28.958236713Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 13:37:28.958389 containerd[1649]: time="2026-01-14T13:37:28.958303433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:28.958768 kubelet[2966]: E0114 13:37:28.958693 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:37:28.958969 kubelet[2966]: E0114 13:37:28.958779 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:37:28.959111 kubelet[2966]: E0114 13:37:28.958984 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kg9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7v2t7_calico-system(1584f8ba-fd2c-4903-be8f-c6577809742f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:28.961742 containerd[1649]: time="2026-01-14T13:37:28.961708384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 13:37:29.278492 containerd[1649]: time="2026-01-14T13:37:29.278285922Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:29.280103 containerd[1649]: time="2026-01-14T13:37:29.280053634Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 13:37:29.280201 containerd[1649]: time="2026-01-14T13:37:29.280167651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:29.280738 kubelet[2966]: E0114 13:37:29.280389 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:37:29.280738 kubelet[2966]: E0114 13:37:29.280457 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:37:29.280738 kubelet[2966]: E0114 13:37:29.280649 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kg9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7v2t7_calico-system(1584f8ba-fd2c-4903-be8f-c6577809742f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:29.281995 kubelet[2966]: E0114 13:37:29.281924 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:37:29.641918 containerd[1649]: time="2026-01-14T13:37:29.641753549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 13:37:29.957482 containerd[1649]: time="2026-01-14T13:37:29.957239769Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io 
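Every pull attempt in this stretch of the journal fails the same way: containerd receives a 404 from ghcr.io for each flatcar/calico image at tag v3.30.4, and kubelet then reports ErrImagePull and, later, ImagePullBackOff for the affected pods. One way to confirm the tags are really absent from the registry, independent of containerd and the node, is to query the manifest endpoint of the OCI distribution API directly. The sketch below is illustrative only; it assumes the repositories are public and that ghcr.io hands out anonymous pull tokens from its /token endpoint.

# Minimal sketch (not part of the log): check whether ghcr.io still serves a
# manifest for the tags that are failing above. Assumes public repositories and
# anonymous token issuance at https://ghcr.io/token; the manifest URL follows
# the standard OCI distribution API.
import json
import urllib.error
import urllib.request

def tag_exists(repo: str, tag: str) -> bool:
    """Return True if ghcr.io/<repo>:<tag> resolves to a manifest, False on HTTP 404."""
    token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            # Accept both OCI index and Docker manifest-list media types.
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    for name in ("goldmane", "apiserver", "whisker", "whisker-backend",
                 "csi", "node-driver-registrar", "kube-controllers"):
        repo = f"flatcar/calico/{name}"
        print(repo, "exists" if tag_exists(repo, "v3.30.4") else "404")

If this reports 404 for every repository, the problem is on the registry side (missing tag or image) rather than node-local networking or containerd, which matches the uniform "fetch failed after status: 404 Not Found" entries above.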
Jan 14 13:37:29.958482 containerd[1649]: time="2026-01-14T13:37:29.958431505Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 13:37:29.958685 containerd[1649]: time="2026-01-14T13:37:29.958619085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:29.958946 kubelet[2966]: E0114 13:37:29.958855 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:37:29.959507 kubelet[2966]: E0114 13:37:29.958966 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:37:29.959507 kubelet[2966]: E0114 13:37:29.959381 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fmsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b65bb7cd-xwm2s_calico-system(9af9fcdf-2905-4c21-b8a3-70ab543f6a40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:29.960609 containerd[1649]: time="2026-01-14T13:37:29.960254498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:37:29.961261 kubelet[2966]: E0114 13:37:29.961140 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b65bb7cd-xwm2s" podUID="9af9fcdf-2905-4c21-b8a3-70ab543f6a40" Jan 14 13:37:30.285433 containerd[1649]: time="2026-01-14T13:37:30.285190683Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:30.287879 containerd[1649]: time="2026-01-14T13:37:30.287380679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:37:30.288605 containerd[1649]: time="2026-01-14T13:37:30.287492577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:30.288687 kubelet[2966]: E0114 13:37:30.288539 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:37:30.288687 kubelet[2966]: E0114 13:37:30.288647 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:37:30.289345 kubelet[2966]: E0114 13:37:30.288860 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6v8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556cb8cff8-42gh4_calico-apiserver(23681fa6-27ca-4d7d-86fa-c674a7318b4d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:30.290750 kubelet[2966]: E0114 13:37:30.290447 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-42gh4" podUID="23681fa6-27ca-4d7d-86fa-c674a7318b4d" Jan 14 13:37:38.643793 containerd[1649]: time="2026-01-14T13:37:38.642431618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:37:38.960016 containerd[1649]: time="2026-01-14T13:37:38.959499097Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:38.961346 containerd[1649]: time="2026-01-14T13:37:38.961292966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:37:38.961625 containerd[1649]: time="2026-01-14T13:37:38.961476342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:38.962379 
kubelet[2966]: E0114 13:37:38.962162 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:37:38.963532 kubelet[2966]: E0114 13:37:38.962336 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:37:38.963532 kubelet[2966]: E0114 13:37:38.963061 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qhl2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556cb8cff8-69s5c_calico-apiserver(09be4418-7a52-4e16-b65e-453c324deb2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:38.964377 kubelet[2966]: E0114 13:37:38.964295 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" podUID="09be4418-7a52-4e16-b65e-453c324deb2d" Jan 14 13:37:39.641891 kubelet[2966]: E0114 13:37:39.641712 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b478485b-99m4n" podUID="cd503a8a-4839-474d-9ad1-19916832d0a7" Jan 14 13:37:40.401639 kernel: kauditd_printk_skb: 105 callbacks suppressed Jan 14 13:37:40.401955 kernel: audit: type=1130 audit(1768397860.395:745): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.49.6:22-165.232.95.204:50382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:37:40.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.49.6:22-165.232.95.204:50382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:37:40.395381 systemd[1]: Started sshd@9-10.230.49.6:22-165.232.95.204:50382.service - OpenSSH per-connection server daemon (165.232.95.204:50382). Jan 14 13:37:40.495682 sshd[5057]: Connection closed by 165.232.95.204 port 50382 Jan 14 13:37:40.498787 systemd[1]: sshd@9-10.230.49.6:22-165.232.95.204:50382.service: Deactivated successfully. Jan 14 13:37:40.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.49.6:22-165.232.95.204:50382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:37:40.504635 kernel: audit: type=1131 audit(1768397860.499:746): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.49.6:22-165.232.95.204:50382 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:37:40.643875 kubelet[2966]: E0114 13:37:40.641921 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-42gh4" podUID="23681fa6-27ca-4d7d-86fa-c674a7318b4d" Jan 14 13:37:40.650606 containerd[1649]: time="2026-01-14T13:37:40.650487129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 13:37:40.962833 containerd[1649]: time="2026-01-14T13:37:40.962647686Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:40.966759 containerd[1649]: time="2026-01-14T13:37:40.966678071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:40.967183 containerd[1649]: time="2026-01-14T13:37:40.967111073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 13:37:40.967982 kubelet[2966]: E0114 13:37:40.967883 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:37:40.967982 kubelet[2966]: E0114 13:37:40.967960 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:37:40.968796 kubelet[2966]: E0114 13:37:40.968156 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wtc4j_calico-system(55d7c70a-6e6b-4527-8616-3cbcdf2d3394): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:40.970420 kubelet[2966]: E0114 13:37:40.970196 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wtc4j" podUID="55d7c70a-6e6b-4527-8616-3cbcdf2d3394" Jan 14 13:37:41.641559 kubelet[2966]: E0114 13:37:41.641435 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:37:43.640930 kubelet[2966]: E0114 13:37:43.640850 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b65bb7cd-xwm2s" podUID="9af9fcdf-2905-4c21-b8a3-70ab543f6a40" Jan 14 13:37:48.750934 systemd[1]: Started sshd@10-10.230.49.6:22-68.220.241.50:46460.service - OpenSSH per-connection server daemon (68.220.241.50:46460). Jan 14 13:37:48.763091 kernel: audit: type=1130 audit(1768397868.749:747): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.49.6:22-68.220.241.50:46460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:37:48.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.49.6:22-68.220.241.50:46460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:37:49.353000 audit[5098]: USER_ACCT pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:49.363314 sshd[5098]: Accepted publickey for core from 68.220.241.50 port 46460 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:37:49.366370 kernel: audit: type=1101 audit(1768397869.353:748): pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:49.367429 sshd-session[5098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:37:49.363000 audit[5098]: CRED_ACQ pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:49.375598 kernel: audit: type=1103 audit(1768397869.363:749): pid=5098 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:49.384778 kernel: audit: type=1006 audit(1768397869.363:750): pid=5098 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 13:37:49.384848 kernel: audit: type=1300 audit(1768397869.363:750): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0b9326a0 a2=3 a3=0 items=0 ppid=1 pid=5098 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:49.363000 audit[5098]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe0b9326a0 a2=3 a3=0 items=0 ppid=1 pid=5098 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:49.397197 kernel: audit: type=1327 audit(1768397869.363:750): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:37:49.363000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:37:49.392659 systemd-logind[1615]: New session 13 of user core. Jan 14 13:37:49.402515 systemd[1]: Started session-13.scope - Session 13 of User core. 
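The audit PROCTITLE fields scattered through this log (the runc and iptables-restore events earlier, and the sshd-session records here) are the processes' command lines, hex-encoded, with NUL bytes separating the arguments. A small helper makes them readable; the example string below is copied verbatim from the iptables-restore records above.

# Hypothetical helper: decode an audit PROCTITLE value into a readable command.
# Arguments are NUL-separated in the raw bytes; titles rewritten via
# setproctitle (such as "sshd-session: core [priv]") decode as a single string.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

example = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
           "002D2D6E6F666C757368002D2D636F756E74657273")
print(decode_proctitle(example))
# -> iptables-restore -w 5 -W 100000 --noflush --counters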
Jan 14 13:37:49.409000 audit[5098]: USER_START pid=5098 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:49.417598 kernel: audit: type=1105 audit(1768397869.409:751): pid=5098 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:49.419000 audit[5106]: CRED_ACQ pid=5106 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:49.426598 kernel: audit: type=1103 audit(1768397869.419:752): pid=5106 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:50.447706 sshd[5106]: Connection closed by 68.220.241.50 port 46460 Jan 14 13:37:50.452014 sshd-session[5098]: pam_unix(sshd:session): session closed for user core Jan 14 13:37:50.455000 audit[5098]: USER_END pid=5098 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:50.464703 systemd[1]: sshd@10-10.230.49.6:22-68.220.241.50:46460.service: Deactivated successfully. Jan 14 13:37:50.467588 kernel: audit: type=1106 audit(1768397870.455:753): pid=5098 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:50.470142 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 13:37:50.455000 audit[5098]: CRED_DISP pid=5098 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:50.479629 kernel: audit: type=1104 audit(1768397870.455:754): pid=5098 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:50.485131 systemd-logind[1615]: Session 13 logged out. Waiting for processes to exit. Jan 14 13:37:50.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.49.6:22-68.220.241.50:46460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:37:50.487067 systemd-logind[1615]: Removed session 13. 
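The same pull failures repeat on every back-off interval, so it is often easier to tally which image references are failing than to read each entry. A hypothetical scraper along these lines relies only on the literal 'PullImage "..." failed' wording that containerd emits in this journal; feed it the raw log text on stdin.

# Hypothetical log-scraping helper: count PullImage failures per image reference
# in a journal dump like the one above. Handles both plain and backslash-escaped
# quotes around the image reference.
import re
import sys
from collections import Counter

PULL_FAILED = re.compile(r'PullImage \\?"([^"\\]+)\\?" failed')

def failing_images(log_text: str) -> Counter:
    return Counter(PULL_FAILED.findall(log_text))

if __name__ == "__main__":
    for image, count in failing_images(sys.stdin.read()).most_common():
        print(f"{count:3d}  {image}")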
Jan 14 13:37:50.662012 containerd[1649]: time="2026-01-14T13:37:50.660478566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 13:37:50.664055 kubelet[2966]: E0114 13:37:50.659244 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" podUID="09be4418-7a52-4e16-b65e-453c324deb2d" Jan 14 13:37:50.994045 containerd[1649]: time="2026-01-14T13:37:50.993783602Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:50.995523 containerd[1649]: time="2026-01-14T13:37:50.995390571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 13:37:50.995663 containerd[1649]: time="2026-01-14T13:37:50.995393488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:50.996904 kubelet[2966]: E0114 13:37:50.995936 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:37:50.996904 kubelet[2966]: E0114 13:37:50.996035 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:37:51.002116 kubelet[2966]: E0114 13:37:51.002030 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:29af2dd3753940a0809dbeff5a3da59e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qnqp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b478485b-99m4n_calico-system(cd503a8a-4839-474d-9ad1-19916832d0a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:51.006062 containerd[1649]: time="2026-01-14T13:37:51.005201840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 13:37:51.314859 containerd[1649]: time="2026-01-14T13:37:51.314707574Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:51.316096 containerd[1649]: time="2026-01-14T13:37:51.316034529Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 13:37:51.317719 containerd[1649]: time="2026-01-14T13:37:51.317643122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:51.318442 kubelet[2966]: E0114 13:37:51.318313 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:37:51.319012 kubelet[2966]: E0114 13:37:51.318625 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:37:51.319012 kubelet[2966]: E0114 13:37:51.318812 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnqp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b478485b-99m4n_calico-system(cd503a8a-4839-474d-9ad1-19916832d0a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:51.320494 kubelet[2966]: E0114 13:37:51.320455 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b478485b-99m4n" podUID="cd503a8a-4839-474d-9ad1-19916832d0a7" Jan 14 13:37:52.643589 containerd[1649]: time="2026-01-14T13:37:52.641327575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 13:37:52.972431 containerd[1649]: time="2026-01-14T13:37:52.972209891Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:52.973501 containerd[1649]: time="2026-01-14T13:37:52.973443442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 13:37:52.973615 
containerd[1649]: time="2026-01-14T13:37:52.973536486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:52.974348 kubelet[2966]: E0114 13:37:52.974017 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:37:52.974348 kubelet[2966]: E0114 13:37:52.974090 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:37:52.974902 kubelet[2966]: E0114 13:37:52.974439 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kg9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7v2t7_calico-system(1584f8ba-fd2c-4903-be8f-c6577809742f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:52.976019 containerd[1649]: time="2026-01-14T13:37:52.975303512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:37:53.302144 containerd[1649]: time="2026-01-14T13:37:53.301926580Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:53.305133 containerd[1649]: time="2026-01-14T13:37:53.305073387Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:37:53.305228 containerd[1649]: time="2026-01-14T13:37:53.305181910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:53.305889 kubelet[2966]: E0114 13:37:53.305591 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:37:53.305889 kubelet[2966]: E0114 13:37:53.305664 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:37:53.306371 containerd[1649]: time="2026-01-14T13:37:53.306310657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 13:37:53.307248 kubelet[2966]: E0114 13:37:53.307094 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6v8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556cb8cff8-42gh4_calico-apiserver(23681fa6-27ca-4d7d-86fa-c674a7318b4d): ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:53.309643 kubelet[2966]: E0114 13:37:53.308457 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-42gh4" podUID="23681fa6-27ca-4d7d-86fa-c674a7318b4d" Jan 14 13:37:53.617378 containerd[1649]: time="2026-01-14T13:37:53.617277628Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:53.619853 containerd[1649]: time="2026-01-14T13:37:53.619562541Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 13:37:53.620022 containerd[1649]: time="2026-01-14T13:37:53.619642518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:53.620673 kubelet[2966]: E0114 13:37:53.620523 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:37:53.620673 kubelet[2966]: E0114 13:37:53.620600 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:37:53.623225 kubelet[2966]: E0114 13:37:53.622862 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kg9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7v2t7_calico-system(1584f8ba-fd2c-4903-be8f-c6577809742f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:53.626587 kubelet[2966]: E0114 13:37:53.625029 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:37:54.643154 kubelet[2966]: E0114 13:37:54.641601 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wtc4j" podUID="55d7c70a-6e6b-4527-8616-3cbcdf2d3394" Jan 14 13:37:55.553807 systemd[1]: Started sshd@11-10.230.49.6:22-68.220.241.50:33434.service - OpenSSH per-connection server daemon (68.220.241.50:33434). 
Jan 14 13:37:55.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.49.6:22-68.220.241.50:33434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:37:55.559420 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:37:55.559497 kernel: audit: type=1130 audit(1768397875.552:756): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.49.6:22-68.220.241.50:33434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:37:56.099000 audit[5121]: USER_ACCT pid=5121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:56.105731 sshd[5121]: Accepted publickey for core from 68.220.241.50 port 33434 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:37:56.110606 kernel: audit: type=1101 audit(1768397876.099:757): pid=5121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:56.112895 sshd-session[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:37:56.109000 audit[5121]: CRED_ACQ pid=5121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:56.124638 kernel: audit: type=1103 audit(1768397876.109:758): pid=5121 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:56.130865 kernel: audit: type=1006 audit(1768397876.109:759): pid=5121 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 14 13:37:56.109000 audit[5121]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff12c72ff0 a2=3 a3=0 items=0 ppid=1 pid=5121 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:56.135008 systemd-logind[1615]: New session 14 of user core. Jan 14 13:37:56.138634 kernel: audit: type=1300 audit(1768397876.109:759): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff12c72ff0 a2=3 a3=0 items=0 ppid=1 pid=5121 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:37:56.109000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:37:56.143323 kernel: audit: type=1327 audit(1768397876.109:759): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:37:56.142178 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 14 13:37:56.148000 audit[5121]: USER_START pid=5121 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:56.154000 audit[5125]: CRED_ACQ pid=5125 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:56.157859 kernel: audit: type=1105 audit(1768397876.148:760): pid=5121 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:56.157969 kernel: audit: type=1103 audit(1768397876.154:761): pid=5125 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:56.558872 sshd[5125]: Connection closed by 68.220.241.50 port 33434 Jan 14 13:37:56.557218 sshd-session[5121]: pam_unix(sshd:session): session closed for user core Jan 14 13:37:56.557000 audit[5121]: USER_END pid=5121 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:56.557000 audit[5121]: CRED_DISP pid=5121 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:56.570700 kernel: audit: type=1106 audit(1768397876.557:762): pid=5121 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:56.570936 kernel: audit: type=1104 audit(1768397876.557:763): pid=5121 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:37:56.572905 systemd-logind[1615]: Session 14 logged out. Waiting for processes to exit. Jan 14 13:37:56.575312 systemd[1]: sshd@11-10.230.49.6:22-68.220.241.50:33434.service: Deactivated successfully. Jan 14 13:37:56.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.49.6:22-68.220.241.50:33434 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:37:56.581457 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 13:37:56.585099 systemd-logind[1615]: Removed session 14. 
Jan 14 13:37:56.644587 containerd[1649]: time="2026-01-14T13:37:56.644372780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 13:37:56.958855 containerd[1649]: time="2026-01-14T13:37:56.958613472Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:37:56.959810 containerd[1649]: time="2026-01-14T13:37:56.959680685Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 13:37:56.959810 containerd[1649]: time="2026-01-14T13:37:56.959743267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 13:37:56.960094 kubelet[2966]: E0114 13:37:56.960027 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:37:56.960651 kubelet[2966]: E0114 13:37:56.960110 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:37:56.960651 kubelet[2966]: E0114 13:37:56.960360 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fmsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b65bb7cd-xwm2s_calico-system(9af9fcdf-2905-4c21-b8a3-70ab543f6a40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 13:37:56.962248 kubelet[2966]: E0114 13:37:56.961857 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b65bb7cd-xwm2s" podUID="9af9fcdf-2905-4c21-b8a3-70ab543f6a40" Jan 14 13:38:01.660080 systemd[1]: Started sshd@12-10.230.49.6:22-68.220.241.50:33442.service - OpenSSH per-connection server daemon (68.220.241.50:33442). Jan 14 13:38:01.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.49.6:22-68.220.241.50:33442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:01.663399 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:38:01.663596 kernel: audit: type=1130 audit(1768397881.659:765): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.49.6:22-68.220.241.50:33442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:38:02.172176 sshd[5146]: Accepted publickey for core from 68.220.241.50 port 33442 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:38:02.170000 audit[5146]: USER_ACCT pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:02.180646 kernel: audit: type=1101 audit(1768397882.170:766): pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:02.180762 kernel: audit: type=1103 audit(1768397882.176:767): pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:02.176000 audit[5146]: CRED_ACQ pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:02.181443 sshd-session[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:38:02.192598 kernel: audit: type=1006 audit(1768397882.177:768): pid=5146 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 14 13:38:02.177000 audit[5146]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff03115c30 a2=3 a3=0 items=0 ppid=1 pid=5146 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:02.200601 kernel: audit: type=1300 audit(1768397882.177:768): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff03115c30 a2=3 a3=0 items=0 ppid=1 pid=5146 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:02.177000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:02.205196 systemd-logind[1615]: New session 15 of user core. Jan 14 13:38:02.205650 kernel: audit: type=1327 audit(1768397882.177:768): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:02.214197 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 14 13:38:02.219000 audit[5146]: USER_START pid=5146 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:02.227625 kernel: audit: type=1105 audit(1768397882.219:769): pid=5146 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:02.227000 audit[5150]: CRED_ACQ pid=5150 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:02.235615 kernel: audit: type=1103 audit(1768397882.227:770): pid=5150 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:02.556181 sshd[5150]: Connection closed by 68.220.241.50 port 33442 Jan 14 13:38:02.557945 sshd-session[5146]: pam_unix(sshd:session): session closed for user core Jan 14 13:38:02.559000 audit[5146]: USER_END pid=5146 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:02.565960 systemd[1]: sshd@12-10.230.49.6:22-68.220.241.50:33442.service: Deactivated successfully. Jan 14 13:38:02.568779 kernel: audit: type=1106 audit(1768397882.559:771): pid=5146 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:02.570029 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 13:38:02.572379 systemd-logind[1615]: Session 15 logged out. Waiting for processes to exit. Jan 14 13:38:02.559000 audit[5146]: CRED_DISP pid=5146 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:02.579131 systemd-logind[1615]: Removed session 15. Jan 14 13:38:02.579667 kernel: audit: type=1104 audit(1768397882.559:772): pid=5146 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:02.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.49.6:22-68.220.241.50:33442 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:38:02.641373 kubelet[2966]: E0114 13:38:02.641284 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b478485b-99m4n" podUID="cd503a8a-4839-474d-9ad1-19916832d0a7" Jan 14 13:38:04.645865 kubelet[2966]: E0114 13:38:04.644273 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-42gh4" podUID="23681fa6-27ca-4d7d-86fa-c674a7318b4d" Jan 14 13:38:04.647561 containerd[1649]: time="2026-01-14T13:38:04.647213761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:38:04.969107 containerd[1649]: time="2026-01-14T13:38:04.968623748Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:38:04.970976 containerd[1649]: time="2026-01-14T13:38:04.970800934Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:38:04.970976 containerd[1649]: time="2026-01-14T13:38:04.970857351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:38:04.971250 kubelet[2966]: E0114 13:38:04.971151 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:38:04.971250 kubelet[2966]: E0114 13:38:04.971238 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:38:04.971689 kubelet[2966]: E0114 13:38:04.971451 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qhl2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556cb8cff8-69s5c_calico-apiserver(09be4418-7a52-4e16-b65e-453c324deb2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:38:04.973075 kubelet[2966]: E0114 13:38:04.973020 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" podUID="09be4418-7a52-4e16-b65e-453c324deb2d" Jan 14 13:38:07.661160 systemd[1]: Started sshd@13-10.230.49.6:22-68.220.241.50:59358.service - OpenSSH per-connection server daemon (68.220.241.50:59358). Jan 14 13:38:07.668616 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:38:07.668900 kernel: audit: type=1130 audit(1768397887.662:774): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.49.6:22-68.220.241.50:59358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:07.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.49.6:22-68.220.241.50:59358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:38:08.218167 sshd[5165]: Accepted publickey for core from 68.220.241.50 port 59358 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:38:08.217000 audit[5165]: USER_ACCT pid=5165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:08.227594 kernel: audit: type=1101 audit(1768397888.217:775): pid=5165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:08.230155 sshd-session[5165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:38:08.227000 audit[5165]: CRED_ACQ pid=5165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:08.236646 kernel: audit: type=1103 audit(1768397888.227:776): pid=5165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:08.236741 kernel: audit: type=1006 audit(1768397888.227:777): pid=5165 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 14 13:38:08.238589 kernel: audit: type=1300 audit(1768397888.227:777): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2a285460 a2=3 a3=0 items=0 ppid=1 pid=5165 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:08.227000 audit[5165]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2a285460 a2=3 a3=0 items=0 ppid=1 pid=5165 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:08.227000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:08.252587 kernel: audit: type=1327 audit(1768397888.227:777): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:08.263164 systemd-logind[1615]: New session 16 of user core. Jan 14 13:38:08.265819 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 14 13:38:08.278731 kernel: audit: type=1105 audit(1768397888.271:778): pid=5165 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:08.271000 audit[5165]: USER_START pid=5165 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:08.282000 audit[5169]: CRED_ACQ pid=5169 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:08.287621 kernel: audit: type=1103 audit(1768397888.282:779): pid=5169 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:08.645196 kubelet[2966]: E0114 13:38:08.644925 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:38:08.646144 containerd[1649]: time="2026-01-14T13:38:08.645340650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 13:38:08.721593 sshd[5169]: Connection closed by 68.220.241.50 port 59358 Jan 14 13:38:08.722242 sshd-session[5165]: pam_unix(sshd:session): session closed for user core Jan 14 13:38:08.735966 kernel: audit: type=1106 audit(1768397888.723:780): pid=5165 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:08.723000 audit[5165]: USER_END pid=5165 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:08.741997 systemd[1]: sshd@13-10.230.49.6:22-68.220.241.50:59358.service: Deactivated successfully. 
Jan 14 13:38:08.723000 audit[5165]: CRED_DISP pid=5165 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:08.754394 kernel: audit: type=1104 audit(1768397888.723:781): pid=5165 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:08.753419 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 13:38:08.755814 systemd-logind[1615]: Session 16 logged out. Waiting for processes to exit. Jan 14 13:38:08.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.49.6:22-68.220.241.50:59358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:08.762113 systemd-logind[1615]: Removed session 16. Jan 14 13:38:08.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.49.6:22-68.220.241.50:59366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:08.832006 systemd[1]: Started sshd@14-10.230.49.6:22-68.220.241.50:59366.service - OpenSSH per-connection server daemon (68.220.241.50:59366). Jan 14 13:38:08.961467 containerd[1649]: time="2026-01-14T13:38:08.960701471Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:38:08.963101 containerd[1649]: time="2026-01-14T13:38:08.963039899Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 13:38:08.963383 containerd[1649]: time="2026-01-14T13:38:08.963351990Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 13:38:08.964597 kubelet[2966]: E0114 13:38:08.963762 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:38:08.964597 kubelet[2966]: E0114 13:38:08.963838 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:38:08.965405 kubelet[2966]: E0114 13:38:08.965258 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wtc4j_calico-system(55d7c70a-6e6b-4527-8616-3cbcdf2d3394): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 13:38:08.966778 kubelet[2966]: E0114 13:38:08.966732 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wtc4j" podUID="55d7c70a-6e6b-4527-8616-3cbcdf2d3394" Jan 14 13:38:09.368000 audit[5182]: USER_ACCT pid=5182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:09.372222 sshd[5182]: Accepted publickey for core from 68.220.241.50 port 59366 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:38:09.373000 audit[5182]: CRED_ACQ pid=5182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:09.373000 audit[5182]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd61a714a0 a2=3 a3=0 items=0 ppid=1 pid=5182 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:09.373000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:09.375363 sshd-session[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:38:09.394357 systemd-logind[1615]: New session 17 of user core. Jan 14 13:38:09.403878 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 14 13:38:09.408000 audit[5182]: USER_START pid=5182 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:09.411000 audit[5186]: CRED_ACQ pid=5186 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:09.829063 sshd[5186]: Connection closed by 68.220.241.50 port 59366 Jan 14 13:38:09.828237 sshd-session[5182]: pam_unix(sshd:session): session closed for user core Jan 14 13:38:09.833000 audit[5182]: USER_END pid=5182 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:09.833000 audit[5182]: CRED_DISP pid=5182 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:09.837222 systemd[1]: sshd@14-10.230.49.6:22-68.220.241.50:59366.service: Deactivated successfully. Jan 14 13:38:09.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.49.6:22-68.220.241.50:59366 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:09.841549 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 13:38:09.843921 systemd-logind[1615]: Session 17 logged out. Waiting for processes to exit. Jan 14 13:38:09.847107 systemd-logind[1615]: Removed session 17. 
Jan 14 13:38:09.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.49.6:22-68.220.241.50:59374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:09.933781 systemd[1]: Started sshd@15-10.230.49.6:22-68.220.241.50:59374.service - OpenSSH per-connection server daemon (68.220.241.50:59374). Jan 14 13:38:10.464000 audit[5196]: USER_ACCT pid=5196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:10.465325 sshd[5196]: Accepted publickey for core from 68.220.241.50 port 59374 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:38:10.466000 audit[5196]: CRED_ACQ pid=5196 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:10.466000 audit[5196]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcddd18ec0 a2=3 a3=0 items=0 ppid=1 pid=5196 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:10.466000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:10.468620 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:38:10.476732 systemd-logind[1615]: New session 18 of user core. Jan 14 13:38:10.486936 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 14 13:38:10.494000 audit[5196]: USER_START pid=5196 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:10.498000 audit[5200]: CRED_ACQ pid=5200 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:10.641475 kubelet[2966]: E0114 13:38:10.641070 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b65bb7cd-xwm2s" podUID="9af9fcdf-2905-4c21-b8a3-70ab543f6a40" Jan 14 13:38:10.878847 sshd[5200]: Connection closed by 68.220.241.50 port 59374 Jan 14 13:38:10.881772 sshd-session[5196]: pam_unix(sshd:session): session closed for user core Jan 14 13:38:10.884000 audit[5196]: USER_END pid=5196 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:10.884000 audit[5196]: CRED_DISP pid=5196 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:10.888591 systemd-logind[1615]: Session 18 logged out. Waiting for processes to exit. Jan 14 13:38:10.889470 systemd[1]: sshd@15-10.230.49.6:22-68.220.241.50:59374.service: Deactivated successfully. Jan 14 13:38:10.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.49.6:22-68.220.241.50:59374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:10.893899 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 13:38:10.899420 systemd-logind[1615]: Removed session 18. 
Jan 14 13:38:14.646068 kubelet[2966]: E0114 13:38:14.645760 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b478485b-99m4n" podUID="cd503a8a-4839-474d-9ad1-19916832d0a7" Jan 14 13:38:15.981215 systemd[1]: Started sshd@16-10.230.49.6:22-68.220.241.50:50414.service - OpenSSH per-connection server daemon (68.220.241.50:50414). Jan 14 13:38:15.993412 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 13:38:15.993494 kernel: audit: type=1130 audit(1768397895.980:801): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.49.6:22-68.220.241.50:50414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:15.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.49.6:22-68.220.241.50:50414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:16.542000 audit[5245]: USER_ACCT pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:16.552652 kernel: audit: type=1101 audit(1768397896.542:802): pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:16.550550 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:38:16.555833 sshd[5245]: Accepted publickey for core from 68.220.241.50 port 50414 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:38:16.563907 kernel: audit: type=1103 audit(1768397896.548:803): pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:16.548000 audit[5245]: CRED_ACQ pid=5245 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:16.571619 kernel: audit: type=1006 audit(1768397896.548:804): pid=5245 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 14 13:38:16.572278 kernel: audit: type=1300 audit(1768397896.548:804): 
arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffed52e1c0 a2=3 a3=0 items=0 ppid=1 pid=5245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:16.548000 audit[5245]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffed52e1c0 a2=3 a3=0 items=0 ppid=1 pid=5245 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:16.548000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:16.577325 kernel: audit: type=1327 audit(1768397896.548:804): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:16.578653 systemd-logind[1615]: New session 19 of user core. Jan 14 13:38:16.588805 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 14 13:38:16.599000 audit[5245]: USER_START pid=5245 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:16.607591 kernel: audit: type=1105 audit(1768397896.599:805): pid=5245 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:16.603000 audit[5249]: CRED_ACQ pid=5249 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:16.613594 kernel: audit: type=1103 audit(1768397896.603:806): pid=5249 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:17.005600 sshd[5249]: Connection closed by 68.220.241.50 port 50414 Jan 14 13:38:17.005836 sshd-session[5245]: pam_unix(sshd:session): session closed for user core Jan 14 13:38:17.011000 audit[5245]: USER_END pid=5245 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:17.022601 kernel: audit: type=1106 audit(1768397897.011:807): pid=5245 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:17.023738 systemd[1]: sshd@16-10.230.49.6:22-68.220.241.50:50414.service: Deactivated successfully. 
Jan 14 13:38:17.011000 audit[5245]: CRED_DISP pid=5245 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:17.031362 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 13:38:17.033065 kernel: audit: type=1104 audit(1768397897.011:808): pid=5245 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:17.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.49.6:22-68.220.241.50:50414 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:17.037290 systemd-logind[1615]: Session 19 logged out. Waiting for processes to exit. Jan 14 13:38:17.039786 systemd-logind[1615]: Removed session 19. Jan 14 13:38:17.640962 kubelet[2966]: E0114 13:38:17.640864 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" podUID="09be4418-7a52-4e16-b65e-453c324deb2d" Jan 14 13:38:19.641667 kubelet[2966]: E0114 13:38:19.640261 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-42gh4" podUID="23681fa6-27ca-4d7d-86fa-c674a7318b4d" Jan 14 13:38:21.641597 kubelet[2966]: E0114 13:38:21.641529 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wtc4j" podUID="55d7c70a-6e6b-4527-8616-3cbcdf2d3394" Jan 14 13:38:22.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.49.6:22-68.220.241.50:50418 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:22.121767 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:38:22.121862 kernel: audit: type=1130 audit(1768397902.115:810): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.49.6:22-68.220.241.50:50418 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:38:22.115954 systemd[1]: Started sshd@17-10.230.49.6:22-68.220.241.50:50418.service - OpenSSH per-connection server daemon (68.220.241.50:50418). Jan 14 13:38:22.655000 audit[5263]: USER_ACCT pid=5263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:22.664203 kernel: audit: type=1101 audit(1768397902.655:811): pid=5263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:22.662000 audit[5263]: CRED_ACQ pid=5263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:22.664422 sshd[5263]: Accepted publickey for core from 68.220.241.50 port 50418 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:38:22.666386 sshd-session[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:38:22.669616 kernel: audit: type=1103 audit(1768397902.662:812): pid=5263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:22.676597 kernel: audit: type=1006 audit(1768397902.662:813): pid=5263 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 14 13:38:22.662000 audit[5263]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0faaafd0 a2=3 a3=0 items=0 ppid=1 pid=5263 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:22.683606 kernel: audit: type=1300 audit(1768397902.662:813): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0faaafd0 a2=3 a3=0 items=0 ppid=1 pid=5263 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:22.662000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:22.685629 systemd-logind[1615]: New session 20 of user core. Jan 14 13:38:22.687868 kernel: audit: type=1327 audit(1768397902.662:813): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:22.692383 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 14 13:38:22.704614 kernel: audit: type=1105 audit(1768397902.697:814): pid=5263 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:22.697000 audit[5263]: USER_START pid=5263 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:22.704000 audit[5267]: CRED_ACQ pid=5267 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:22.710605 kernel: audit: type=1103 audit(1768397902.704:815): pid=5267 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:23.064667 sshd[5267]: Connection closed by 68.220.241.50 port 50418 Jan 14 13:38:23.065998 sshd-session[5263]: pam_unix(sshd:session): session closed for user core Jan 14 13:38:23.079608 kernel: audit: type=1106 audit(1768397903.070:816): pid=5263 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:23.070000 audit[5263]: USER_END pid=5263 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:23.078001 systemd[1]: sshd@17-10.230.49.6:22-68.220.241.50:50418.service: Deactivated successfully. Jan 14 13:38:23.082925 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 13:38:23.070000 audit[5263]: CRED_DISP pid=5263 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:23.089608 kernel: audit: type=1104 audit(1768397903.070:817): pid=5263 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:23.089854 systemd-logind[1615]: Session 20 logged out. Waiting for processes to exit. Jan 14 13:38:23.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.49.6:22-68.220.241.50:50418 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:23.092498 systemd-logind[1615]: Removed session 20. 
Jan 14 13:38:23.642292 kubelet[2966]: E0114 13:38:23.642220 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:38:25.642327 kubelet[2966]: E0114 13:38:25.641738 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b65bb7cd-xwm2s" podUID="9af9fcdf-2905-4c21-b8a3-70ab543f6a40" Jan 14 13:38:28.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.49.6:22-68.220.241.50:53118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:28.169881 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:38:28.169935 kernel: audit: type=1130 audit(1768397908.167:819): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.49.6:22-68.220.241.50:53118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:28.166644 systemd[1]: Started sshd@18-10.230.49.6:22-68.220.241.50:53118.service - OpenSSH per-connection server daemon (68.220.241.50:53118). 
Jan 14 13:38:28.640331 kubelet[2966]: E0114 13:38:28.640247 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" podUID="09be4418-7a52-4e16-b65e-453c324deb2d" Jan 14 13:38:28.675000 audit[5281]: USER_ACCT pid=5281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:28.676230 sshd[5281]: Accepted publickey for core from 68.220.241.50 port 53118 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:38:28.681589 kernel: audit: type=1101 audit(1768397908.675:820): pid=5281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:28.681000 audit[5281]: CRED_ACQ pid=5281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:28.684229 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:38:28.688237 kernel: audit: type=1103 audit(1768397908.681:821): pid=5281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:28.688758 kernel: audit: type=1006 audit(1768397908.681:822): pid=5281 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 14 13:38:28.681000 audit[5281]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc899c2480 a2=3 a3=0 items=0 ppid=1 pid=5281 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:28.700098 systemd-logind[1615]: New session 21 of user core. Jan 14 13:38:28.704237 kernel: audit: type=1300 audit(1768397908.681:822): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc899c2480 a2=3 a3=0 items=0 ppid=1 pid=5281 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:28.704315 kernel: audit: type=1327 audit(1768397908.681:822): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:28.681000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:28.706891 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 14 13:38:28.716000 audit[5281]: USER_START pid=5281 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:28.723586 kernel: audit: type=1105 audit(1768397908.716:823): pid=5281 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:28.726000 audit[5285]: CRED_ACQ pid=5285 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:28.732612 kernel: audit: type=1103 audit(1768397908.726:824): pid=5285 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:29.060316 sshd[5285]: Connection closed by 68.220.241.50 port 53118 Jan 14 13:38:29.061830 sshd-session[5281]: pam_unix(sshd:session): session closed for user core Jan 14 13:38:29.063000 audit[5281]: USER_END pid=5281 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:29.074602 kernel: audit: type=1106 audit(1768397909.063:825): pid=5281 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:29.063000 audit[5281]: CRED_DISP pid=5281 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:29.076126 systemd[1]: sshd@18-10.230.49.6:22-68.220.241.50:53118.service: Deactivated successfully. Jan 14 13:38:29.080719 kernel: audit: type=1104 audit(1768397909.063:826): pid=5281 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:29.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.49.6:22-68.220.241.50:53118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:29.082005 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 13:38:29.084071 systemd-logind[1615]: Session 21 logged out. Waiting for processes to exit. Jan 14 13:38:29.087145 systemd-logind[1615]: Removed session 21. 
Jan 14 13:38:29.644087 kubelet[2966]: E0114 13:38:29.643985 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b478485b-99m4n" podUID="cd503a8a-4839-474d-9ad1-19916832d0a7" Jan 14 13:38:30.643524 kubelet[2966]: E0114 13:38:30.642795 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-42gh4" podUID="23681fa6-27ca-4d7d-86fa-c674a7318b4d" Jan 14 13:38:34.165340 systemd[1]: Started sshd@19-10.230.49.6:22-68.220.241.50:46072.service - OpenSSH per-connection server daemon (68.220.241.50:46072). Jan 14 13:38:34.174316 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:38:34.174444 kernel: audit: type=1130 audit(1768397914.164:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.49.6:22-68.220.241.50:46072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:34.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.49.6:22-68.220.241.50:46072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:38:34.641615 kubelet[2966]: E0114 13:38:34.640341 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wtc4j" podUID="55d7c70a-6e6b-4527-8616-3cbcdf2d3394" Jan 14 13:38:34.670000 audit[5297]: USER_ACCT pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:34.679960 sshd[5297]: Accepted publickey for core from 68.220.241.50 port 46072 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:38:34.681387 kernel: audit: type=1101 audit(1768397914.670:829): pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:34.681000 audit[5297]: CRED_ACQ pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:34.683926 sshd-session[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:38:34.687676 kernel: audit: type=1103 audit(1768397914.681:830): pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:34.693606 kernel: audit: type=1006 audit(1768397914.681:831): pid=5297 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 14 13:38:34.693716 kernel: audit: type=1300 audit(1768397914.681:831): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1124f5d0 a2=3 a3=0 items=0 ppid=1 pid=5297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:34.681000 audit[5297]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd1124f5d0 a2=3 a3=0 items=0 ppid=1 pid=5297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:34.681000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:34.701708 kernel: audit: type=1327 audit(1768397914.681:831): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:34.706913 systemd-logind[1615]: New session 22 of user core. Jan 14 13:38:34.713807 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 14 13:38:34.721000 audit[5297]: USER_START pid=5297 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:34.728590 kernel: audit: type=1105 audit(1768397914.721:832): pid=5297 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:34.729000 audit[5301]: CRED_ACQ pid=5301 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:34.736610 kernel: audit: type=1103 audit(1768397914.729:833): pid=5301 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:35.084709 sshd[5301]: Connection closed by 68.220.241.50 port 46072 Jan 14 13:38:35.085725 sshd-session[5297]: pam_unix(sshd:session): session closed for user core Jan 14 13:38:35.089000 audit[5297]: USER_END pid=5297 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:35.095378 systemd[1]: sshd@19-10.230.49.6:22-68.220.241.50:46072.service: Deactivated successfully. Jan 14 13:38:35.098507 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 13:38:35.104604 kernel: audit: type=1106 audit(1768397915.089:834): pid=5297 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:35.090000 audit[5297]: CRED_DISP pid=5297 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:35.110635 kernel: audit: type=1104 audit(1768397915.090:835): pid=5297 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:35.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.49.6:22-68.220.241.50:46072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:35.110485 systemd-logind[1615]: Session 22 logged out. Waiting for processes to exit. Jan 14 13:38:35.117534 systemd-logind[1615]: Removed session 22. 
Jan 14 13:38:37.642991 containerd[1649]: time="2026-01-14T13:38:37.642842160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 13:38:37.958893 containerd[1649]: time="2026-01-14T13:38:37.958529303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:38:37.960082 containerd[1649]: time="2026-01-14T13:38:37.959872655Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 13:38:37.960503 containerd[1649]: time="2026-01-14T13:38:37.959888087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 13:38:37.960807 kubelet[2966]: E0114 13:38:37.960721 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:38:37.961921 kubelet[2966]: E0114 13:38:37.961474 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 13:38:37.961921 kubelet[2966]: E0114 13:38:37.961829 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5fmsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b65bb7cd-xwm2s_calico-system(9af9fcdf-2905-4c21-b8a3-70ab543f6a40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 13:38:37.963167 kubelet[2966]: E0114 13:38:37.963095 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b65bb7cd-xwm2s" podUID="9af9fcdf-2905-4c21-b8a3-70ab543f6a40" Jan 14 13:38:38.643591 containerd[1649]: time="2026-01-14T13:38:38.641937346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 13:38:38.963213 containerd[1649]: time="2026-01-14T13:38:38.963061233Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:38:38.964418 containerd[1649]: time="2026-01-14T13:38:38.964364935Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 13:38:38.964497 containerd[1649]: time="2026-01-14T13:38:38.964479798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 13:38:38.964840 kubelet[2966]: E0114 13:38:38.964781 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:38:38.965745 kubelet[2966]: E0114 13:38:38.964856 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 13:38:38.965745 kubelet[2966]: E0114 13:38:38.965143 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kg9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7v2t7_calico-system(1584f8ba-fd2c-4903-be8f-c6577809742f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 13:38:38.967951 containerd[1649]: time="2026-01-14T13:38:38.967911288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 13:38:39.288899 containerd[1649]: time="2026-01-14T13:38:39.288724648Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:38:39.290087 containerd[1649]: time="2026-01-14T13:38:39.289865788Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 13:38:39.290087 containerd[1649]: time="2026-01-14T13:38:39.289996876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 13:38:39.290332 kubelet[2966]: E0114 13:38:39.290282 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:38:39.290446 kubelet[2966]: E0114 13:38:39.290349 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 13:38:39.290825 kubelet[2966]: E0114 13:38:39.290732 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kg9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7v2t7_calico-system(1584f8ba-fd2c-4903-be8f-c6577809742f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 13:38:39.292283 kubelet[2966]: E0114 13:38:39.292229 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:38:40.192127 systemd[1]: Started sshd@20-10.230.49.6:22-68.220.241.50:46086.service - OpenSSH per-connection server daemon (68.220.241.50:46086). 
Jan 14 13:38:40.204035 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:38:40.204173 kernel: audit: type=1130 audit(1768397920.191:837): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.49.6:22-68.220.241.50:46086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:40.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.49.6:22-68.220.241.50:46086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:40.743000 audit[5319]: USER_ACCT pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:40.752054 kernel: audit: type=1101 audit(1768397920.743:838): pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:40.752434 sshd[5319]: Accepted publickey for core from 68.220.241.50 port 46086 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:38:40.754000 audit[5319]: CRED_ACQ pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:40.760612 kernel: audit: type=1103 audit(1768397920.754:839): pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:40.761267 sshd-session[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:38:40.766598 kernel: audit: type=1006 audit(1768397920.754:840): pid=5319 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 14 13:38:40.754000 audit[5319]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff670bc5c0 a2=3 a3=0 items=0 ppid=1 pid=5319 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:40.780594 kernel: audit: type=1300 audit(1768397920.754:840): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff670bc5c0 a2=3 a3=0 items=0 ppid=1 pid=5319 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:40.754000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:40.784592 kernel: audit: type=1327 audit(1768397920.754:840): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:40.786598 systemd-logind[1615]: New session 23 of user core. Jan 14 13:38:40.790659 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 14 13:38:40.798000 audit[5319]: USER_START pid=5319 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:40.806614 kernel: audit: type=1105 audit(1768397920.798:841): pid=5319 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:40.809000 audit[5325]: CRED_ACQ pid=5325 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:40.817793 kernel: audit: type=1103 audit(1768397920.809:842): pid=5325 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:41.260503 sshd[5325]: Connection closed by 68.220.241.50 port 46086 Jan 14 13:38:41.261465 sshd-session[5319]: pam_unix(sshd:session): session closed for user core Jan 14 13:38:41.265000 audit[5319]: USER_END pid=5319 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:41.276991 kernel: audit: type=1106 audit(1768397921.265:843): pid=5319 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:41.277130 kernel: audit: type=1104 audit(1768397921.269:844): pid=5319 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:41.269000 audit[5319]: CRED_DISP pid=5319 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:41.280703 systemd[1]: sshd@20-10.230.49.6:22-68.220.241.50:46086.service: Deactivated successfully. Jan 14 13:38:41.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.49.6:22-68.220.241.50:46086 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:41.284487 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 13:38:41.289514 systemd-logind[1615]: Session 23 logged out. Waiting for processes to exit. Jan 14 13:38:41.292764 systemd-logind[1615]: Removed session 23. 
Jan 14 13:38:41.642110 kubelet[2966]: E0114 13:38:41.641457 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" podUID="09be4418-7a52-4e16-b65e-453c324deb2d" Jan 14 13:38:41.644059 containerd[1649]: time="2026-01-14T13:38:41.641690525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:38:41.956624 containerd[1649]: time="2026-01-14T13:38:41.956246113Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:38:41.957521 containerd[1649]: time="2026-01-14T13:38:41.957392427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:38:41.957521 containerd[1649]: time="2026-01-14T13:38:41.957449593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:38:41.957991 kubelet[2966]: E0114 13:38:41.957872 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:38:41.957991 kubelet[2966]: E0114 13:38:41.957972 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:38:41.958450 kubelet[2966]: E0114 13:38:41.958172 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x6v8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556cb8cff8-42gh4_calico-apiserver(23681fa6-27ca-4d7d-86fa-c674a7318b4d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:38:41.959387 kubelet[2966]: E0114 13:38:41.959336 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-42gh4" podUID="23681fa6-27ca-4d7d-86fa-c674a7318b4d" Jan 14 13:38:44.645072 containerd[1649]: time="2026-01-14T13:38:44.645001097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 13:38:44.959081 containerd[1649]: time="2026-01-14T13:38:44.958867820Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:38:44.961191 containerd[1649]: time="2026-01-14T13:38:44.961137628Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 13:38:44.961706 containerd[1649]: time="2026-01-14T13:38:44.961166978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 13:38:44.961766 kubelet[2966]: E0114 13:38:44.961511 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:38:44.962543 kubelet[2966]: E0114 13:38:44.962277 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 13:38:44.962543 kubelet[2966]: E0114 13:38:44.962447 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:29af2dd3753940a0809dbeff5a3da59e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qnqp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b478485b-99m4n_calico-system(cd503a8a-4839-474d-9ad1-19916832d0a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 13:38:44.965718 containerd[1649]: time="2026-01-14T13:38:44.965558005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 13:38:45.288456 containerd[1649]: time="2026-01-14T13:38:45.288289355Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:38:45.289475 containerd[1649]: time="2026-01-14T13:38:45.289412326Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 13:38:45.289585 containerd[1649]: time="2026-01-14T13:38:45.289519818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 13:38:45.289816 kubelet[2966]: E0114 13:38:45.289731 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:38:45.289925 kubelet[2966]: E0114 13:38:45.289826 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 13:38:45.290048 kubelet[2966]: E0114 13:38:45.289987 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnqp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79b478485b-99m4n_calico-system(cd503a8a-4839-474d-9ad1-19916832d0a7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 13:38:45.291213 kubelet[2966]: E0114 13:38:45.291143 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b478485b-99m4n" podUID="cd503a8a-4839-474d-9ad1-19916832d0a7" Jan 14 13:38:46.372994 systemd[1]: Started sshd@21-10.230.49.6:22-68.220.241.50:47726.service - OpenSSH per-connection server daemon (68.220.241.50:47726). Jan 14 13:38:46.376031 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:38:46.376153 kernel: audit: type=1130 audit(1768397926.371:846): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.49.6:22-68.220.241.50:47726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:38:46.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.49.6:22-68.220.241.50:47726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:46.927000 audit[5376]: USER_ACCT pid=5376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:46.934593 kernel: audit: type=1101 audit(1768397926.927:847): pid=5376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:46.937698 sshd[5376]: Accepted publickey for core from 68.220.241.50 port 47726 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:38:46.938757 sshd-session[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:38:46.946769 kernel: audit: type=1103 audit(1768397926.934:848): pid=5376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:46.934000 audit[5376]: CRED_ACQ pid=5376 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:46.959295 kernel: audit: type=1006 audit(1768397926.934:849): pid=5376 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 14 13:38:46.958629 systemd-logind[1615]: New session 24 of user core. Jan 14 13:38:46.934000 audit[5376]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4fcbe0d0 a2=3 a3=0 items=0 ppid=1 pid=5376 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:46.966614 kernel: audit: type=1300 audit(1768397926.934:849): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4fcbe0d0 a2=3 a3=0 items=0 ppid=1 pid=5376 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:46.934000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:46.967859 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 14 13:38:46.970836 kernel: audit: type=1327 audit(1768397926.934:849): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:46.984593 kernel: audit: type=1105 audit(1768397926.974:850): pid=5376 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:46.974000 audit[5376]: USER_START pid=5376 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:46.985000 audit[5380]: CRED_ACQ pid=5380 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:46.990597 kernel: audit: type=1103 audit(1768397926.985:851): pid=5380 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:47.349611 sshd[5380]: Connection closed by 68.220.241.50 port 47726 Jan 14 13:38:47.351836 sshd-session[5376]: pam_unix(sshd:session): session closed for user core Jan 14 13:38:47.353000 audit[5376]: USER_END pid=5376 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:47.360609 kernel: audit: type=1106 audit(1768397927.353:852): pid=5376 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:47.367198 kernel: audit: type=1104 audit(1768397927.353:853): pid=5376 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:47.353000 audit[5376]: CRED_DISP pid=5376 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:47.362254 systemd[1]: sshd@21-10.230.49.6:22-68.220.241.50:47726.service: Deactivated successfully. Jan 14 13:38:47.367983 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 13:38:47.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.49.6:22-68.220.241.50:47726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:38:47.373516 systemd-logind[1615]: Session 24 logged out. Waiting for processes to exit. Jan 14 13:38:47.375359 systemd-logind[1615]: Removed session 24. Jan 14 13:38:47.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.49.6:22-68.220.241.50:47736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:47.450026 systemd[1]: Started sshd@22-10.230.49.6:22-68.220.241.50:47736.service - OpenSSH per-connection server daemon (68.220.241.50:47736). Jan 14 13:38:47.640200 kubelet[2966]: E0114 13:38:47.640031 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wtc4j" podUID="55d7c70a-6e6b-4527-8616-3cbcdf2d3394" Jan 14 13:38:47.953000 audit[5391]: USER_ACCT pid=5391 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:47.955508 sshd[5391]: Accepted publickey for core from 68.220.241.50 port 47736 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:38:47.954000 audit[5391]: CRED_ACQ pid=5391 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:47.955000 audit[5391]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe3666d4e0 a2=3 a3=0 items=0 ppid=1 pid=5391 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:47.955000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:47.958495 sshd-session[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:38:47.968785 systemd-logind[1615]: New session 25 of user core. Jan 14 13:38:47.971768 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 14 13:38:47.976000 audit[5391]: USER_START pid=5391 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:47.980000 audit[5395]: CRED_ACQ pid=5395 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:48.606776 sshd[5395]: Connection closed by 68.220.241.50 port 47736 Jan 14 13:38:48.608838 sshd-session[5391]: pam_unix(sshd:session): session closed for user core Jan 14 13:38:48.609000 audit[5391]: USER_END pid=5391 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:48.610000 audit[5391]: CRED_DISP pid=5391 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:48.624680 systemd[1]: sshd@22-10.230.49.6:22-68.220.241.50:47736.service: Deactivated successfully. Jan 14 13:38:48.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.49.6:22-68.220.241.50:47736 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:48.628899 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 13:38:48.631904 systemd-logind[1615]: Session 25 logged out. Waiting for processes to exit. Jan 14 13:38:48.633735 systemd-logind[1615]: Removed session 25. Jan 14 13:38:48.711974 systemd[1]: Started sshd@23-10.230.49.6:22-68.220.241.50:47738.service - OpenSSH per-connection server daemon (68.220.241.50:47738). Jan 14 13:38:48.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.49.6:22-68.220.241.50:47738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:38:49.258000 audit[5405]: USER_ACCT pid=5405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:49.260181 sshd[5405]: Accepted publickey for core from 68.220.241.50 port 47738 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:38:49.261000 audit[5405]: CRED_ACQ pid=5405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:49.261000 audit[5405]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffaea86290 a2=3 a3=0 items=0 ppid=1 pid=5405 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:49.265193 sshd-session[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:38:49.261000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:49.277451 systemd-logind[1615]: New session 26 of user core. Jan 14 13:38:49.283189 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 14 13:38:49.287000 audit[5405]: USER_START pid=5405 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:49.290000 audit[5409]: CRED_ACQ pid=5409 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:50.439000 audit[5426]: NETFILTER_CFG table=filter:140 family=2 entries=26 op=nft_register_rule pid=5426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:38:50.439000 audit[5426]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7fffae3508a0 a2=0 a3=7fffae35088c items=0 ppid=3099 pid=5426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:50.439000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:38:50.450000 audit[5426]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5426 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:38:50.450000 audit[5426]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffae3508a0 a2=0 a3=0 items=0 ppid=3099 pid=5426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:50.450000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:38:50.527000 audit[5428]: 
NETFILTER_CFG table=filter:142 family=2 entries=38 op=nft_register_rule pid=5428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:38:50.527000 audit[5428]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc1fc4aed0 a2=0 a3=7ffc1fc4aebc items=0 ppid=3099 pid=5428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:50.527000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:38:50.532000 audit[5428]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:38:50.532000 audit[5428]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc1fc4aed0 a2=0 a3=0 items=0 ppid=3099 pid=5428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:50.532000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:38:50.550723 sshd[5409]: Connection closed by 68.220.241.50 port 47738 Jan 14 13:38:50.556981 sshd-session[5405]: pam_unix(sshd:session): session closed for user core Jan 14 13:38:50.563000 audit[5405]: USER_END pid=5405 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:50.564000 audit[5405]: CRED_DISP pid=5405 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:50.574204 systemd[1]: sshd@23-10.230.49.6:22-68.220.241.50:47738.service: Deactivated successfully. Jan 14 13:38:50.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.49.6:22-68.220.241.50:47738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:50.578444 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 13:38:50.581198 systemd-logind[1615]: Session 26 logged out. Waiting for processes to exit. Jan 14 13:38:50.584619 systemd-logind[1615]: Removed session 26. Jan 14 13:38:50.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.49.6:22-68.220.241.50:47746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:50.655657 systemd[1]: Started sshd@24-10.230.49.6:22-68.220.241.50:47746.service - OpenSSH per-connection server daemon (68.220.241.50:47746). 
Jan 14 13:38:51.210000 audit[5434]: USER_ACCT pid=5434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:51.212553 sshd[5434]: Accepted publickey for core from 68.220.241.50 port 47746 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:38:51.213000 audit[5434]: CRED_ACQ pid=5434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:51.213000 audit[5434]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff92a2add0 a2=3 a3=0 items=0 ppid=1 pid=5434 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:51.213000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:51.216911 sshd-session[5434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:38:51.232666 systemd-logind[1615]: New session 27 of user core. Jan 14 13:38:51.238183 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 14 13:38:51.242000 audit[5434]: USER_START pid=5434 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:51.247000 audit[5438]: CRED_ACQ pid=5438 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:51.971725 sshd[5438]: Connection closed by 68.220.241.50 port 47746 Jan 14 13:38:51.972636 sshd-session[5434]: pam_unix(sshd:session): session closed for user core Jan 14 13:38:51.988836 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 14 13:38:51.989076 kernel: audit: type=1106 audit(1768397931.975:883): pid=5434 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:51.975000 audit[5434]: USER_END pid=5434 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:51.991940 systemd[1]: sshd@24-10.230.49.6:22-68.220.241.50:47746.service: Deactivated successfully. Jan 14 13:38:51.995820 systemd[1]: session-27.scope: Deactivated successfully. 
Jan 14 13:38:51.975000 audit[5434]: CRED_DISP pid=5434 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:52.003121 kernel: audit: type=1104 audit(1768397931.975:884): pid=5434 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:52.003068 systemd-logind[1615]: Session 27 logged out. Waiting for processes to exit. Jan 14 13:38:51.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.49.6:22-68.220.241.50:47746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:52.010791 kernel: audit: type=1131 audit(1768397931.990:885): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.49.6:22-68.220.241.50:47746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:52.011783 systemd-logind[1615]: Removed session 27. Jan 14 13:38:52.088643 kernel: audit: type=1130 audit(1768397932.082:886): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.49.6:22-68.220.241.50:47762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:52.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.49.6:22-68.220.241.50:47762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:52.083404 systemd[1]: Started sshd@25-10.230.49.6:22-68.220.241.50:47762.service - OpenSSH per-connection server daemon (68.220.241.50:47762). 
Jan 14 13:38:52.628135 sshd[5448]: Accepted publickey for core from 68.220.241.50 port 47762 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:38:52.638654 kernel: audit: type=1101 audit(1768397932.626:887): pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:52.626000 audit[5448]: USER_ACCT pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:52.640325 sshd-session[5448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:38:52.646305 kubelet[2966]: E0114 13:38:52.646242 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b65bb7cd-xwm2s" podUID="9af9fcdf-2905-4c21-b8a3-70ab543f6a40" Jan 14 13:38:52.636000 audit[5448]: CRED_ACQ pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:52.654761 kernel: audit: type=1103 audit(1768397932.636:888): pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:52.654845 kubelet[2966]: E0114 13:38:52.649513 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:38:52.662934 kernel: audit: type=1006 audit(1768397932.636:889): pid=5448 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 14 13:38:52.636000 audit[5448]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4f5c07e0 a2=3 a3=0 items=0 ppid=1 pid=5448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:52.676258 kernel: audit: type=1300 audit(1768397932.636:889): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4f5c07e0 a2=3 a3=0 items=0 ppid=1 pid=5448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:52.678895 systemd-logind[1615]: New session 28 of user core. Jan 14 13:38:52.685601 kernel: audit: type=1327 audit(1768397932.636:889): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:52.636000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:52.684667 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 14 13:38:52.691000 audit[5448]: USER_START pid=5448 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:52.699603 kernel: audit: type=1105 audit(1768397932.691:890): pid=5448 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:52.698000 audit[5452]: CRED_ACQ pid=5452 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:53.055103 sshd[5452]: Connection closed by 68.220.241.50 port 47762 Jan 14 13:38:53.056632 sshd-session[5448]: pam_unix(sshd:session): session closed for user core Jan 14 13:38:53.059000 audit[5448]: USER_END pid=5448 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:53.059000 audit[5448]: CRED_DISP pid=5448 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:53.067438 systemd-logind[1615]: Session 28 logged out. Waiting for processes to exit. Jan 14 13:38:53.068160 systemd[1]: sshd@25-10.230.49.6:22-68.220.241.50:47762.service: Deactivated successfully. Jan 14 13:38:53.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.49.6:22-68.220.241.50:47762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:53.072104 systemd[1]: session-28.scope: Deactivated successfully. Jan 14 13:38:53.075621 systemd-logind[1615]: Removed session 28. 
Jan 14 13:38:54.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.230.49.6:22-165.232.95.204:60120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:54.274841 systemd[1]: Started sshd@26-10.230.49.6:22-165.232.95.204:60120.service - OpenSSH per-connection server daemon (165.232.95.204:60120). Jan 14 13:38:54.600511 sshd[5464]: Connection closed by authenticating user root 165.232.95.204 port 60120 [preauth] Jan 14 13:38:54.599000 audit[5464]: USER_ERR pid=5464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=165.232.95.204 addr=165.232.95.204 terminal=ssh res=failed' Jan 14 13:38:54.603512 systemd[1]: sshd@26-10.230.49.6:22-165.232.95.204:60120.service: Deactivated successfully. Jan 14 13:38:54.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.230.49.6:22-165.232.95.204:60120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:54.647746 kubelet[2966]: E0114 13:38:54.647690 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-42gh4" podUID="23681fa6-27ca-4d7d-86fa-c674a7318b4d" Jan 14 13:38:55.640412 containerd[1649]: time="2026-01-14T13:38:55.640341140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 13:38:55.982894 containerd[1649]: time="2026-01-14T13:38:55.980842014Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:38:55.982894 containerd[1649]: time="2026-01-14T13:38:55.982299756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 13:38:55.982894 containerd[1649]: time="2026-01-14T13:38:55.982417322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 13:38:55.983409 kubelet[2966]: E0114 13:38:55.983291 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:38:55.984039 kubelet[2966]: E0114 13:38:55.983422 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 13:38:55.984820 kubelet[2966]: E0114 13:38:55.984707 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qhl2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-556cb8cff8-69s5c_calico-apiserver(09be4418-7a52-4e16-b65e-453c324deb2d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 13:38:55.986329 kubelet[2966]: E0114 13:38:55.986008 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" podUID="09be4418-7a52-4e16-b65e-453c324deb2d" Jan 14 13:38:57.641820 kubelet[2966]: E0114 13:38:57.641749 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b478485b-99m4n" podUID="cd503a8a-4839-474d-9ad1-19916832d0a7" Jan 14 13:38:58.158063 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 13:38:58.158256 kernel: audit: type=1130 audit(1768397938.151:898): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.230.49.6:22-68.220.241.50:43614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:58.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.230.49.6:22-68.220.241.50:43614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:58.153335 systemd[1]: Started sshd@27-10.230.49.6:22-68.220.241.50:43614.service - OpenSSH per-connection server daemon (68.220.241.50:43614). Jan 14 13:38:58.663000 audit[5472]: USER_ACCT pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:58.671926 sshd[5472]: Accepted publickey for core from 68.220.241.50 port 43614 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:38:58.674604 kernel: audit: type=1101 audit(1768397938.663:899): pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:58.674000 audit[5472]: CRED_ACQ pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:58.679483 sshd-session[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:38:58.682591 kernel: audit: type=1103 audit(1768397938.674:900): pid=5472 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:58.688593 kernel: audit: type=1006 audit(1768397938.674:901): pid=5472 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 14 13:38:58.674000 audit[5472]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc19cc740 a2=3 a3=0 items=0 ppid=1 pid=5472 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:58.696829 kernel: audit: type=1300 audit(1768397938.674:901): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcc19cc740 a2=3 a3=0 items=0 ppid=1 pid=5472 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:38:58.674000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D 
Jan 14 13:38:58.701595 kernel: audit: type=1327 audit(1768397938.674:901): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:38:58.703898 systemd-logind[1615]: New session 29 of user core. Jan 14 13:38:58.712329 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 14 13:38:58.720000 audit[5472]: USER_START pid=5472 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:58.728637 kernel: audit: type=1105 audit(1768397938.720:902): pid=5472 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:58.730000 audit[5476]: CRED_ACQ pid=5476 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:58.737713 kernel: audit: type=1103 audit(1768397938.730:903): pid=5476 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:59.085789 sshd[5476]: Connection closed by 68.220.241.50 port 43614 Jan 14 13:38:59.086853 sshd-session[5472]: pam_unix(sshd:session): session closed for user core Jan 14 13:38:59.089000 audit[5472]: USER_END pid=5472 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:59.100890 kernel: audit: type=1106 audit(1768397939.089:904): pid=5472 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:59.101839 systemd[1]: sshd@27-10.230.49.6:22-68.220.241.50:43614.service: Deactivated successfully. Jan 14 13:38:59.089000 audit[5472]: CRED_DISP pid=5472 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:59.106250 systemd[1]: session-29.scope: Deactivated successfully. 
Jan 14 13:38:59.110266 kernel: audit: type=1104 audit(1768397939.089:905): pid=5472 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:38:59.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.230.49.6:22-68.220.241.50:43614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:38:59.110408 systemd-logind[1615]: Session 29 logged out. Waiting for processes to exit. Jan 14 13:38:59.113925 systemd-logind[1615]: Removed session 29. Jan 14 13:38:59.643065 containerd[1649]: time="2026-01-14T13:38:59.642756272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 13:38:59.974869 containerd[1649]: time="2026-01-14T13:38:59.974426139Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 13:38:59.976585 containerd[1649]: time="2026-01-14T13:38:59.975875126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 13:38:59.979608 kubelet[2966]: E0114 13:38:59.977019 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:38:59.979608 kubelet[2966]: E0114 13:38:59.977134 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 13:38:59.979608 kubelet[2966]: E0114 13:38:59.977466 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bfm86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wtc4j_calico-system(55d7c70a-6e6b-4527-8616-3cbcdf2d3394): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 13:38:59.979608 kubelet[2966]: E0114 13:38:59.978872 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wtc4j" podUID="55d7c70a-6e6b-4527-8616-3cbcdf2d3394" Jan 14 13:38:59.990242 containerd[1649]: time="2026-01-14T13:38:59.990195394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: 
active requests=0, bytes read=0" Jan 14 13:39:01.570000 audit[5487]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:39:01.570000 audit[5487]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc703ea450 a2=0 a3=7ffc703ea43c items=0 ppid=3099 pid=5487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:01.570000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:39:01.578000 audit[5487]: NETFILTER_CFG table=nat:145 family=2 entries=104 op=nft_register_chain pid=5487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 13:39:01.578000 audit[5487]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffc703ea450 a2=0 a3=7ffc703ea43c items=0 ppid=3099 pid=5487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:01.578000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 13:39:04.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.230.49.6:22-68.220.241.50:56166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:04.192552 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 13:39:04.192674 kernel: audit: type=1130 audit(1768397944.185:909): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.230.49.6:22-68.220.241.50:56166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:04.186963 systemd[1]: Started sshd@28-10.230.49.6:22-68.220.241.50:56166.service - OpenSSH per-connection server daemon (68.220.241.50:56166). 
Jan 14 13:39:04.645297 kubelet[2966]: E0114 13:39:04.645236 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:39:04.710000 audit[5489]: USER_ACCT pid=5489 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:04.716118 sshd[5489]: Accepted publickey for core from 68.220.241.50 port 56166 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:39:04.720904 kernel: audit: type=1101 audit(1768397944.710:910): pid=5489 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:04.721418 sshd-session[5489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:04.716000 audit[5489]: CRED_ACQ pid=5489 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:04.735205 kernel: audit: type=1103 audit(1768397944.716:911): pid=5489 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:04.735406 kernel: audit: type=1006 audit(1768397944.716:912): pid=5489 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 14 13:39:04.740837 systemd-logind[1615]: New session 30 of user core. 
Jan 14 13:39:04.716000 audit[5489]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe69c8dce0 a2=3 a3=0 items=0 ppid=1 pid=5489 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:04.751638 kernel: audit: type=1300 audit(1768397944.716:912): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe69c8dce0 a2=3 a3=0 items=0 ppid=1 pid=5489 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:04.716000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:39:04.754669 kernel: audit: type=1327 audit(1768397944.716:912): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:39:04.755291 systemd[1]: Started session-30.scope - Session 30 of User core. Jan 14 13:39:04.761000 audit[5489]: USER_START pid=5489 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:04.774426 kernel: audit: type=1105 audit(1768397944.761:913): pid=5489 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:04.774495 kernel: audit: type=1103 audit(1768397944.768:914): pid=5493 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:04.768000 audit[5493]: CRED_ACQ pid=5493 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:05.157834 sshd[5493]: Connection closed by 68.220.241.50 port 56166 Jan 14 13:39:05.158711 sshd-session[5489]: pam_unix(sshd:session): session closed for user core Jan 14 13:39:05.159000 audit[5489]: USER_END pid=5489 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:05.167633 kernel: audit: type=1106 audit(1768397945.159:915): pid=5489 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:05.166869 systemd[1]: sshd@28-10.230.49.6:22-68.220.241.50:56166.service: Deactivated successfully. 
Jan 14 13:39:05.159000 audit[5489]: CRED_DISP pid=5489 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:05.173140 systemd[1]: session-30.scope: Deactivated successfully. Jan 14 13:39:05.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.230.49.6:22-68.220.241.50:56166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:05.177751 kernel: audit: type=1104 audit(1768397945.159:916): pid=5489 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:05.176985 systemd-logind[1615]: Session 30 logged out. Waiting for processes to exit. Jan 14 13:39:05.180469 systemd-logind[1615]: Removed session 30. Jan 14 13:39:06.642146 kubelet[2966]: E0114 13:39:06.641721 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b65bb7cd-xwm2s" podUID="9af9fcdf-2905-4c21-b8a3-70ab543f6a40" Jan 14 13:39:07.641822 kubelet[2966]: E0114 13:39:07.641484 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-42gh4" podUID="23681fa6-27ca-4d7d-86fa-c674a7318b4d" Jan 14 13:39:08.643388 kubelet[2966]: E0114 13:39:08.643322 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79b478485b-99m4n" podUID="cd503a8a-4839-474d-9ad1-19916832d0a7" Jan 14 13:39:10.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.230.49.6:22-68.220.241.50:56180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:39:10.263940 systemd[1]: Started sshd@29-10.230.49.6:22-68.220.241.50:56180.service - OpenSSH per-connection server daemon (68.220.241.50:56180). Jan 14 13:39:10.267358 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 13:39:10.267593 kernel: audit: type=1130 audit(1768397950.262:918): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.230.49.6:22-68.220.241.50:56180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:10.644593 kubelet[2966]: E0114 13:39:10.643642 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-556cb8cff8-69s5c" podUID="09be4418-7a52-4e16-b65e-453c324deb2d" Jan 14 13:39:10.787000 audit[5504]: USER_ACCT pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:10.790359 sshd[5504]: Accepted publickey for core from 68.220.241.50 port 56180 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:39:10.792779 sshd-session[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:10.795402 kernel: audit: type=1101 audit(1768397950.787:919): pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:10.801589 kernel: audit: type=1103 audit(1768397950.790:920): pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:10.790000 audit[5504]: CRED_ACQ pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:10.807586 kernel: audit: type=1006 audit(1768397950.790:921): pid=5504 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jan 14 13:39:10.790000 audit[5504]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc98c6b20 a2=3 a3=0 items=0 ppid=1 pid=5504 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:10.818843 kernel: audit: type=1300 audit(1768397950.790:921): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffc98c6b20 a2=3 a3=0 items=0 ppid=1 pid=5504 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 13:39:10.821164 systemd-logind[1615]: New session 31 of user core. Jan 14 13:39:10.790000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:39:10.828621 kernel: audit: type=1327 audit(1768397950.790:921): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:39:10.828878 systemd[1]: Started session-31.scope - Session 31 of User core. Jan 14 13:39:10.840000 audit[5504]: USER_START pid=5504 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:10.848597 kernel: audit: type=1105 audit(1768397950.840:922): pid=5504 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:10.850000 audit[5508]: CRED_ACQ pid=5508 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:10.857591 kernel: audit: type=1103 audit(1768397950.850:923): pid=5508 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:11.179948 sshd[5508]: Connection closed by 68.220.241.50 port 56180 Jan 14 13:39:11.183683 sshd-session[5504]: pam_unix(sshd:session): session closed for user core Jan 14 13:39:11.184000 audit[5504]: USER_END pid=5504 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:11.194130 systemd[1]: sshd@29-10.230.49.6:22-68.220.241.50:56180.service: Deactivated successfully. Jan 14 13:39:11.194629 kernel: audit: type=1106 audit(1768397951.184:924): pid=5504 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:11.197872 systemd[1]: session-31.scope: Deactivated successfully. 
Jan 14 13:39:11.184000 audit[5504]: CRED_DISP pid=5504 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:11.205660 kernel: audit: type=1104 audit(1768397951.184:925): pid=5504 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:11.206163 systemd-logind[1615]: Session 31 logged out. Waiting for processes to exit. Jan 14 13:39:11.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.230.49.6:22-68.220.241.50:56180 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:11.211168 systemd-logind[1615]: Removed session 31. Jan 14 13:39:15.212584 systemd[1]: Started sshd@30-10.230.49.6:22-64.227.66.78:41510.service - OpenSSH per-connection server daemon (64.227.66.78:41510). Jan 14 13:39:15.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.230.49.6:22-64.227.66.78:41510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:15.273498 sshd[5544]: Connection closed by 64.227.66.78 port 41510 Jan 14 13:39:15.275462 systemd[1]: sshd@30-10.230.49.6:22-64.227.66.78:41510.service: Deactivated successfully. Jan 14 13:39:15.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.230.49.6:22-64.227.66.78:41510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:15.278723 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 14 13:39:15.278823 kernel: audit: type=1131 audit(1768397955.275:928): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.230.49.6:22-64.227.66.78:41510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:15.644303 kubelet[2966]: E0114 13:39:15.642807 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wtc4j" podUID="55d7c70a-6e6b-4527-8616-3cbcdf2d3394" Jan 14 13:39:16.286986 systemd[1]: Started sshd@31-10.230.49.6:22-68.220.241.50:36376.service - OpenSSH per-connection server daemon (68.220.241.50:36376). Jan 14 13:39:16.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.230.49.6:22-68.220.241.50:36376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:16.299293 kernel: audit: type=1130 audit(1768397956.285:929): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.230.49.6:22-68.220.241.50:36376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 13:39:16.644589 kubelet[2966]: E0114 13:39:16.644075 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7v2t7" podUID="1584f8ba-fd2c-4903-be8f-c6577809742f" Jan 14 13:39:16.801280 sshd[5549]: Accepted publickey for core from 68.220.241.50 port 36376 ssh2: RSA SHA256:NFG33gGE051qBigPiTwrkvX+lh1gzKLDT5Fv/Wp+eko Jan 14 13:39:16.799000 audit[5549]: USER_ACCT pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:16.807640 kernel: audit: type=1101 audit(1768397956.799:930): pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:16.805000 audit[5549]: CRED_ACQ pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:16.814402 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 13:39:16.814863 kernel: audit: type=1103 audit(1768397956.805:931): pid=5549 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:16.821633 kernel: audit: type=1006 audit(1768397956.805:932): pid=5549 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jan 14 13:39:16.805000 audit[5549]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0b7ae3a0 a2=3 a3=0 items=0 ppid=1 pid=5549 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:16.827729 kernel: audit: type=1300 audit(1768397956.805:932): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd0b7ae3a0 a2=3 a3=0 items=0 ppid=1 pid=5549 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 13:39:16.805000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:39:16.832596 kernel: audit: type=1327 
audit(1768397956.805:932): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 13:39:16.839547 systemd-logind[1615]: New session 32 of user core. Jan 14 13:39:16.845825 systemd[1]: Started session-32.scope - Session 32 of User core. Jan 14 13:39:16.853000 audit[5549]: USER_START pid=5549 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:16.861597 kernel: audit: type=1105 audit(1768397956.853:933): pid=5549 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:16.863000 audit[5554]: CRED_ACQ pid=5554 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:16.874590 kernel: audit: type=1103 audit(1768397956.863:934): pid=5554 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:17.179274 sshd[5554]: Connection closed by 68.220.241.50 port 36376 Jan 14 13:39:17.181806 sshd-session[5549]: pam_unix(sshd:session): session closed for user core Jan 14 13:39:17.183000 audit[5549]: USER_END pid=5549 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:17.196612 kernel: audit: type=1106 audit(1768397957.183:935): pid=5549 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:17.194000 audit[5549]: CRED_DISP pid=5549 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 13:39:17.201633 systemd[1]: sshd@31-10.230.49.6:22-68.220.241.50:36376.service: Deactivated successfully. Jan 14 13:39:17.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.230.49.6:22-68.220.241.50:36376 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 13:39:17.208673 systemd[1]: session-32.scope: Deactivated successfully. Jan 14 13:39:17.214476 systemd-logind[1615]: Session 32 logged out. Waiting for processes to exit. Jan 14 13:39:17.216318 systemd-logind[1615]: Removed session 32. 
Jan 14 13:39:18.644591 kubelet[2966]: E0114 13:39:18.644246 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b65bb7cd-xwm2s" podUID="9af9fcdf-2905-4c21-b8a3-70ab543f6a40"