Jan 14 01:22:04.291567 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 13 22:15:29 -00 2026
Jan 14 01:22:04.291631 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6d34ab71a3dc5a0ab37eb2c851228af18a1e24f648223df9a1099dbd7db2cfcf
Jan 14 01:22:04.291646 kernel: BIOS-provided physical RAM map:
Jan 14 01:22:04.291658 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 14 01:22:04.291674 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 14 01:22:04.291685 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 14 01:22:04.291698 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 14 01:22:04.291715 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 14 01:22:04.291727 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 14 01:22:04.291738 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 14 01:22:04.291750 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 14 01:22:04.291761 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 14 01:22:04.291772 kernel: NX (Execute Disable) protection: active
Jan 14 01:22:04.291813 kernel: APIC: Static calls initialized
Jan 14 01:22:04.291827 kernel: SMBIOS 2.8 present.
Jan 14 01:22:04.291840 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 14 01:22:04.291852 kernel: DMI: Memory slots populated: 1/1
Jan 14 01:22:04.291870 kernel: Hypervisor detected: KVM
Jan 14 01:22:04.291883 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 14 01:22:04.291895 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 14 01:22:04.291907 kernel: kvm-clock: using sched offset of 5056614581 cycles
Jan 14 01:22:04.291920 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 14 01:22:04.291933 kernel: tsc: Detected 2499.998 MHz processor
Jan 14 01:22:04.291946 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 01:22:04.291959 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 01:22:04.291976 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 14 01:22:04.291989 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 14 01:22:04.292001 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 01:22:04.292014 kernel: Using GB pages for direct mapping
Jan 14 01:22:04.292026 kernel: ACPI: Early table checksum verification disabled
Jan 14 01:22:04.292039 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 14 01:22:04.292051 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:22:04.292064 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:22:04.292081 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:22:04.292093 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 14 01:22:04.292106 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:22:04.292119 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:22:04.292131 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:22:04.292144 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 01:22:04.292157 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 14 01:22:04.292178 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 14 01:22:04.292191 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 14 01:22:04.292204 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 14 01:22:04.292217 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 14 01:22:04.292234 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 14 01:22:04.292247 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 14 01:22:04.292260 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 14 01:22:04.292273 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 14 01:22:04.292286 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 14 01:22:04.292299 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff]
Jan 14 01:22:04.292313 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff]
Jan 14 01:22:04.292329 kernel: Zone ranges:
Jan 14 01:22:04.292343 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 01:22:04.292356 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 14 01:22:04.292369 kernel: Normal empty
Jan 14 01:22:04.292382 kernel: Device empty
Jan 14 01:22:04.292395 kernel: Movable zone start for each node
Jan 14 01:22:04.292407 kernel: Early memory node ranges
Jan 14 01:22:04.292420 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 14 01:22:04.292438 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 14 01:22:04.292451 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 14 01:22:04.292464 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 01:22:04.292477 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 14 01:22:04.292490 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 14 01:22:04.292503 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 14 01:22:04.292522 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 14 01:22:04.292541 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 14 01:22:04.292554 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 14 01:22:04.292567 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 14 01:22:04.292580 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 01:22:04.292593 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 14 01:22:04.292617 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 14 01:22:04.292632 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 01:22:04.292651 kernel: TSC deadline timer available
Jan 14 01:22:04.292664 kernel: CPU topo: Max. logical packages: 16
Jan 14 01:22:04.292677 kernel: CPU topo: Max. logical dies: 16
Jan 14 01:22:04.292690 kernel: CPU topo: Max. dies per package: 1
Jan 14 01:22:04.292702 kernel: CPU topo: Max. threads per core: 1
Jan 14 01:22:04.292715 kernel: CPU topo: Num. cores per package: 1
Jan 14 01:22:04.292728 kernel: CPU topo: Num. threads per package: 1
Jan 14 01:22:04.292741 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs
Jan 14 01:22:04.292758 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 14 01:22:04.292771 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 14 01:22:04.294436 kernel: Booting paravirtualized kernel on KVM
Jan 14 01:22:04.294456 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 01:22:04.294482 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 14 01:22:04.294496 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Jan 14 01:22:04.294509 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Jan 14 01:22:04.294530 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 14 01:22:04.294543 kernel: kvm-guest: PV spinlocks enabled
Jan 14 01:22:04.294556 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 01:22:04.294571 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6d34ab71a3dc5a0ab37eb2c851228af18a1e24f648223df9a1099dbd7db2cfcf
Jan 14 01:22:04.294585 kernel: random: crng init done
Jan 14 01:22:04.294598 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 14 01:22:04.294624 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 14 01:22:04.294644 kernel: Fallback order for Node 0: 0
Jan 14 01:22:04.294657 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154
Jan 14 01:22:04.294670 kernel: Policy zone: DMA32
Jan 14 01:22:04.294683 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 01:22:04.294696 kernel: software IO TLB: area num 16.
Jan 14 01:22:04.294710 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 14 01:22:04.294723 kernel: Kernel/User page tables isolation: enabled
Jan 14 01:22:04.294749 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 14 01:22:04.294762 kernel: ftrace: allocated 157 pages with 5 groups
Jan 14 01:22:04.294775 kernel: Dynamic Preempt: voluntary
Jan 14 01:22:04.294815 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 01:22:04.294836 kernel: rcu: RCU event tracing is enabled.
Jan 14 01:22:04.294851 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 14 01:22:04.294864 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 01:22:04.294884 kernel: Rude variant of Tasks RCU enabled.
Jan 14 01:22:04.294897 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 01:22:04.294910 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 01:22:04.294924 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 14 01:22:04.294937 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 14 01:22:04.294950 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 14 01:22:04.294963 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 14 01:22:04.294987 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 14 01:22:04.295001 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 01:22:04.295030 kernel: Console: colour VGA+ 80x25
Jan 14 01:22:04.295048 kernel: printk: legacy console [tty0] enabled
Jan 14 01:22:04.295061 kernel: printk: legacy console [ttyS0] enabled
Jan 14 01:22:04.295079 kernel: ACPI: Core revision 20240827
Jan 14 01:22:04.295094 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 01:22:04.295107 kernel: x2apic enabled
Jan 14 01:22:04.295121 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 14 01:22:04.295135 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Jan 14 01:22:04.295154 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Jan 14 01:22:04.295168 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 14 01:22:04.295181 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 14 01:22:04.295195 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 14 01:22:04.295212 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 01:22:04.295225 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 01:22:04.295238 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 14 01:22:04.295252 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 14 01:22:04.295265 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 14 01:22:04.295279 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 14 01:22:04.295292 kernel: MDS: Mitigation: Clear CPU buffers
Jan 14 01:22:04.295305 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 14 01:22:04.295318 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 14 01:22:04.295331 kernel: active return thunk: its_return_thunk
Jan 14 01:22:04.295348 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 14 01:22:04.295362 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 01:22:04.295376 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 01:22:04.295389 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 01:22:04.295402 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 01:22:04.295416 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 14 01:22:04.295429 kernel: Freeing SMP alternatives memory: 32K
Jan 14 01:22:04.295442 kernel: pid_max: default: 32768 minimum: 301
Jan 14 01:22:04.295455 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 14 01:22:04.295468 kernel: landlock: Up and running.
Jan 14 01:22:04.295486 kernel: SELinux: Initializing.
Jan 14 01:22:04.295499 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 14 01:22:04.295513 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 14 01:22:04.295526 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 14 01:22:04.295539 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 14 01:22:04.295553 kernel: signal: max sigframe size: 1776
Jan 14 01:22:04.295567 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 01:22:04.295581 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 01:22:04.295594 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level
Jan 14 01:22:04.295623 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 01:22:04.295638 kernel: smp: Bringing up secondary CPUs ...
Jan 14 01:22:04.295651 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 01:22:04.295665 kernel: .... node #0, CPUs: #1
Jan 14 01:22:04.295678 kernel: smp: Brought up 1 node, 2 CPUs
Jan 14 01:22:04.295692 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Jan 14 01:22:04.295706 kernel: Memory: 1912048K/2096616K available (14336K kernel code, 2445K rwdata, 31636K rodata, 15536K init, 2504K bss, 178552K reserved, 0K cma-reserved)
Jan 14 01:22:04.295726 kernel: devtmpfs: initialized
Jan 14 01:22:04.295739 kernel: x86/mm: Memory block size: 128MB
Jan 14 01:22:04.295753 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 01:22:04.295767 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 14 01:22:04.295793 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 01:22:04.295810 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 01:22:04.295824 kernel: audit: initializing netlink subsys (disabled)
Jan 14 01:22:04.295844 kernel: audit: type=2000 audit(1768353720.196:1): state=initialized audit_enabled=0 res=1
Jan 14 01:22:04.295857 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 01:22:04.295871 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 01:22:04.295885 kernel: cpuidle: using governor menu
Jan 14 01:22:04.295898 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 01:22:04.295912 kernel: dca service started, version 1.12.1
Jan 14 01:22:04.295932 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Jan 14 01:22:04.295951 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 14 01:22:04.295965 kernel: PCI: Using configuration type 1 for base access
Jan 14 01:22:04.295979 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 01:22:04.295993 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 01:22:04.296007 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 01:22:04.296020 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 01:22:04.296034 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 01:22:04.296052 kernel: ACPI: Added _OSI(Module Device)
Jan 14 01:22:04.296065 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 01:22:04.296079 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 01:22:04.296093 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 01:22:04.296106 kernel: ACPI: Interpreter enabled
Jan 14 01:22:04.296120 kernel: ACPI: PM: (supports S0 S5)
Jan 14 01:22:04.296133 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 01:22:04.296151 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 01:22:04.296165 kernel: PCI: Using E820 reservations for host bridge windows
Jan 14 01:22:04.296178 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 14 01:22:04.296192 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 14 01:22:04.296544 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 14 01:22:04.296924 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 14 01:22:04.297182 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 14 01:22:04.297204 kernel: PCI host bridge to bus 0000:00
Jan 14 01:22:04.301746 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 14 01:22:04.302026 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 14 01:22:04.302248 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 14 01:22:04.302468 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 14 01:22:04.302707 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 14 01:22:04.306568 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 14 01:22:04.306830 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 14 01:22:04.307128 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 14 01:22:04.307380 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint
Jan 14 01:22:04.307648 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref]
Jan 14 01:22:04.307904 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff]
Jan 14 01:22:04.308153 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref]
Jan 14 01:22:04.308388 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 14 01:22:04.308713 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 01:22:04.311767 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff]
Jan 14 01:22:04.312038 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 14 01:22:04.312276 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 14 01:22:04.312511 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 14 01:22:04.312892 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 01:22:04.313251 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff]
Jan 14 01:22:04.314117 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 14 01:22:04.314386 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 14 01:22:04.318986 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 14 01:22:04.319312 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 01:22:04.319553 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff]
Jan 14 01:22:04.320274 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 14 01:22:04.320568 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 14 01:22:04.320855 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 14 01:22:04.321149 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 01:22:04.321388 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff]
Jan 14 01:22:04.321635 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 14 01:22:04.323119 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 14 01:22:04.323806 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 14 01:22:04.324123 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 01:22:04.324395 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff]
Jan 14 01:22:04.324646 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 14 01:22:04.324906 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 14 01:22:04.325142 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 14 01:22:04.325405 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 01:22:04.325687 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff]
Jan 14 01:22:04.325977 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 14 01:22:04.326236 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 14 01:22:04.326488 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 14 01:22:04.326775 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 01:22:04.327158 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff]
Jan 14 01:22:04.327391 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 14 01:22:04.327660 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 14 01:22:04.327925 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 14 01:22:04.328320 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 01:22:04.328568 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff]
Jan 14 01:22:04.331033 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 14 01:22:04.331282 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 14 01:22:04.331521 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 14 01:22:04.331825 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 14 01:22:04.332067 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df]
Jan 14 01:22:04.332337 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff]
Jan 14 01:22:04.332592 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 14 01:22:04.334887 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref]
Jan 14 01:22:04.335167 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 14 01:22:04.335447 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f]
Jan 14 01:22:04.335717 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfea5a000-0xfea5afff]
Jan 14 01:22:04.335983 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 14 01:22:04.336269 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 14 01:22:04.336534 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 14 01:22:04.338741 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 14 01:22:04.339015 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff]
Jan 14 01:22:04.339267 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff]
Jan 14 01:22:04.339548 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 14 01:22:04.339832 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Jan 14 01:22:04.340109 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Jan 14 01:22:04.340373 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit]
Jan 14 01:22:04.340637 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 14 01:22:04.344931 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 14 01:22:04.345206 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 14 01:22:04.345496 kernel: pci_bus 0000:02: extended config space not accessible
Jan 14 01:22:04.345828 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint
Jan 14 01:22:04.346080 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f]
Jan 14 01:22:04.346319 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 14 01:22:04.346601 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 14 01:22:04.347384 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit]
Jan 14 01:22:04.347641 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 14 01:22:04.347937 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 14 01:22:04.348179 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 14 01:22:04.348421 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 14 01:22:04.348671 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 14 01:22:04.348922 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 14 01:22:04.349153 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 14 01:22:04.349383 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 14 01:22:04.349632 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 14 01:22:04.349662 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 14 01:22:04.349677 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 14 01:22:04.349691 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 14 01:22:04.349705 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 14 01:22:04.349719 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 14 01:22:04.349741 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 14 01:22:04.349756 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 14 01:22:04.349775 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 14 01:22:04.349819 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 14 01:22:04.349834 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 14 01:22:04.349848 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 14 01:22:04.349862 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 14 01:22:04.349876 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 14 01:22:04.349889 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 14 01:22:04.349910 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 14 01:22:04.349923 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 14 01:22:04.349937 kernel: iommu: Default domain type: Translated
Jan 14 01:22:04.349951 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 01:22:04.349966 kernel: PCI: Using ACPI for IRQ routing
Jan 14 01:22:04.349980 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 14 01:22:04.349994 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 14 01:22:04.350012 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 14 01:22:04.350261 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 14 01:22:04.350493 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 14 01:22:04.350761 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 14 01:22:04.350798 kernel: vgaarb: loaded
Jan 14 01:22:04.350815 kernel: clocksource: Switched to clocksource kvm-clock
Jan 14 01:22:04.350829 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 01:22:04.350851 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 01:22:04.350865 kernel: pnp: PnP ACPI init
Jan 14 01:22:04.351179 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 14 01:22:04.351215 kernel: pnp: PnP ACPI: found 5 devices
Jan 14 01:22:04.351229 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 01:22:04.351243 kernel: NET: Registered PF_INET protocol family
Jan 14 01:22:04.351264 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 14 01:22:04.351278 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 14 01:22:04.351292 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 01:22:04.351306 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 01:22:04.351320 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 14 01:22:04.351335 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 14 01:22:04.351348 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 14 01:22:04.351366 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 14 01:22:04.351381 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 01:22:04.351394 kernel: NET: Registered PF_XDP protocol family
Jan 14 01:22:04.351634 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 14 01:22:04.351883 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 14 01:22:04.352114 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 14 01:22:04.352344 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 14 01:22:04.352581 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 14 01:22:04.352873 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 14 01:22:04.353106 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 14 01:22:04.353356 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 14 01:22:04.353585 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Jan 14 01:22:04.353849 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Jan 14 01:22:04.354108 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Jan 14 01:22:04.354336 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Jan 14 01:22:04.354565 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Jan 14 01:22:04.354824 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Jan 14 01:22:04.355086 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Jan 14 01:22:04.355327 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Jan 14 01:22:04.355561 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 14 01:22:04.355862 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 14 01:22:04.356101 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 14 01:22:04.356352 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 14 01:22:04.356581 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 14 01:22:04.356857 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 14 01:22:04.357095 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 14 01:22:04.357358 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 14 01:22:04.357615 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 14 01:22:04.357867 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 14 01:22:04.358109 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 14 01:22:04.358366 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 14 01:22:04.358593 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 14 01:22:04.358868 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 14 01:22:04.359114 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 14 01:22:04.359364 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 14 01:22:04.359592 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 14 01:22:04.359854 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 14 01:22:04.360112 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 14 01:22:04.360353 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 14 01:22:04.360582 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 14 01:22:04.360854 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 14 01:22:04.361125 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 14 01:22:04.361372 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 14 01:22:04.361650 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 14 01:22:04.361910 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 14 01:22:04.362136 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 14 01:22:04.362378 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 14 01:22:04.362632 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 14 01:22:04.362893 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 14 01:22:04.363138 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 14 01:22:04.363381 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 14 01:22:04.363643 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 14 01:22:04.363905 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 14 01:22:04.364142 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 14 01:22:04.364367 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 14 01:22:04.364585 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 14 01:22:04.364840 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 14 01:22:04.365090 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 14 01:22:04.365310 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 14 01:22:04.365578 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 14 01:22:04.365828 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 14 01:22:04.366074 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 14 01:22:04.366300 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 14 01:22:04.366627 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 14 01:22:04.366872 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 14 01:22:04.367092 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 14 01:22:04.367339 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 14 01:22:04.367559 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 14 01:22:04.367820 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 14 01:22:04.368060 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 14 01:22:04.368280 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 14 01:22:04.368497 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 14 01:22:04.368755 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 14 01:22:04.368996 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 14 01:22:04.369232 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 14 01:22:04.369479 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 14 01:22:04.369715 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 14
01:22:04.369957 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 14 01:22:04.370202 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 14 01:22:04.370444 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 14 01:22:04.370699 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 14 01:22:04.370970 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 14 01:22:04.371191 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 14 01:22:04.371442 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 14 01:22:04.371464 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 14 01:22:04.371491 kernel: PCI: CLS 0 bytes, default 64 Jan 14 01:22:04.371512 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 14 01:22:04.371526 kernel: software IO TLB: mapped [mem 0x0000000075000000-0x0000000079000000] (64MB) Jan 14 01:22:04.371540 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 14 01:22:04.371554 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 14 01:22:04.371567 kernel: Initialise system trusted keyrings Jan 14 01:22:04.371581 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 14 01:22:04.371628 kernel: Key type asymmetric registered Jan 14 01:22:04.371643 kernel: Asymmetric key parser 'x509' registered Jan 14 01:22:04.371658 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 14 01:22:04.371673 kernel: io scheduler mq-deadline registered Jan 14 01:22:04.371687 kernel: io scheduler kyber registered Jan 14 01:22:04.371702 kernel: io scheduler bfq registered Jan 14 01:22:04.371953 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 14 01:22:04.372209 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 14 01:22:04.372459 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ 
PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 01:22:04.372705 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 14 01:22:04.372954 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 14 01:22:04.373197 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 01:22:04.373441 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 14 01:22:04.373696 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 14 01:22:04.373949 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 01:22:04.374199 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 14 01:22:04.374460 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 14 01:22:04.374715 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 01:22:04.374987 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 14 01:22:04.375241 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 14 01:22:04.375490 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 01:22:04.375735 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 14 01:22:04.375993 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 14 01:22:04.376254 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 01:22:04.376469 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 14 01:22:04.376726 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 14 01:22:04.376974 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- 
AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 01:22:04.377214 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 14 01:22:04.377477 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 14 01:22:04.377726 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 01:22:04.377750 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 14 01:22:04.377766 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 14 01:22:04.377796 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 14 01:22:04.377813 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 01:22:04.377835 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 01:22:04.377850 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 14 01:22:04.377864 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 14 01:22:04.377879 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 14 01:22:04.378140 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 14 01:22:04.378163 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 14 01:22:04.378418 kernel: rtc_cmos 00:03: registered as rtc0 Jan 14 01:22:04.378657 kernel: rtc_cmos 00:03: setting system clock to 2026-01-14T01:22:02 UTC (1768353722) Jan 14 01:22:04.378923 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 14 01:22:04.378959 kernel: intel_pstate: CPU model not supported Jan 14 01:22:04.378974 kernel: NET: Registered PF_INET6 protocol family Jan 14 01:22:04.378988 kernel: Segment Routing with IPv6 Jan 14 01:22:04.379002 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 01:22:04.379033 kernel: NET: Registered PF_PACKET protocol family Jan 14 01:22:04.379048 kernel: Key type dns_resolver registered Jan 14 01:22:04.379063 kernel: IPI shorthand broadcast: enabled Jan 14 01:22:04.379077 kernel: 
sched_clock: Marking stable (2118013604, 229893283)->(2580521733, -232614846) Jan 14 01:22:04.379092 kernel: registered taskstats version 1 Jan 14 01:22:04.379106 kernel: Loading compiled-in X.509 certificates Jan 14 01:22:04.379121 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 58a78462583b088d099087e6f2d97e37d80e06bb' Jan 14 01:22:04.379140 kernel: Demotion targets for Node 0: null Jan 14 01:22:04.379171 kernel: Key type .fscrypt registered Jan 14 01:22:04.379184 kernel: Key type fscrypt-provisioning registered Jan 14 01:22:04.379198 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 01:22:04.379212 kernel: ima: Allocated hash algorithm: sha1 Jan 14 01:22:04.379238 kernel: ima: No architecture policies found Jan 14 01:22:04.379252 kernel: clk: Disabling unused clocks Jan 14 01:22:04.379265 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 14 01:22:04.379283 kernel: Write protecting the kernel read-only data: 47104k Jan 14 01:22:04.379306 kernel: Freeing unused kernel image (rodata/data gap) memory: 1132K Jan 14 01:22:04.379320 kernel: Run /init as init process Jan 14 01:22:04.379333 kernel: with arguments: Jan 14 01:22:04.379346 kernel: /init Jan 14 01:22:04.379360 kernel: with environment: Jan 14 01:22:04.379376 kernel: HOME=/ Jan 14 01:22:04.379394 kernel: TERM=linux Jan 14 01:22:04.379407 kernel: ACPI: bus type USB registered Jan 14 01:22:04.379421 kernel: usbcore: registered new interface driver usbfs Jan 14 01:22:04.379446 kernel: usbcore: registered new interface driver hub Jan 14 01:22:04.379459 kernel: usbcore: registered new device driver usb Jan 14 01:22:04.379729 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 14 01:22:04.379988 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 14 01:22:04.380253 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 14 01:22:04.380491 kernel: xhci_hcd 0000:03:00.0: xHCI Host 
Controller Jan 14 01:22:04.380763 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 14 01:22:04.381016 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 14 01:22:04.381332 kernel: hub 1-0:1.0: USB hub found Jan 14 01:22:04.381616 kernel: hub 1-0:1.0: 4 ports detected Jan 14 01:22:04.381923 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 14 01:22:04.382197 kernel: hub 2-0:1.0: USB hub found Jan 14 01:22:04.382461 kernel: hub 2-0:1.0: 4 ports detected Jan 14 01:22:04.382482 kernel: SCSI subsystem initialized Jan 14 01:22:04.382496 kernel: libata version 3.00 loaded. Jan 14 01:22:04.382772 kernel: ahci 0000:00:1f.2: version 3.0 Jan 14 01:22:04.382822 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 14 01:22:04.383067 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 14 01:22:04.383308 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 14 01:22:04.383546 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 14 01:22:04.383880 kernel: scsi host0: ahci Jan 14 01:22:04.384144 kernel: scsi host1: ahci Jan 14 01:22:04.384431 kernel: scsi host2: ahci Jan 14 01:22:04.384714 kernel: scsi host3: ahci Jan 14 01:22:04.385006 kernel: scsi host4: ahci Jan 14 01:22:04.385257 kernel: scsi host5: ahci Jan 14 01:22:04.385296 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 lpm-pol 1 Jan 14 01:22:04.385312 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 lpm-pol 1 Jan 14 01:22:04.385327 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 lpm-pol 1 Jan 14 01:22:04.385341 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 lpm-pol 1 Jan 14 01:22:04.385356 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 lpm-pol 1 Jan 14 01:22:04.385370 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 
lpm-pol 1 Jan 14 01:22:04.385654 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 14 01:22:04.385694 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 01:22:04.385708 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 14 01:22:04.385723 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 14 01:22:04.385737 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 14 01:22:04.385752 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 14 01:22:04.385767 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 14 01:22:04.385808 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 14 01:22:04.386102 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 14 01:22:04.386126 kernel: usbcore: registered new interface driver usbhid Jan 14 01:22:04.386141 kernel: usbhid: USB HID core driver Jan 14 01:22:04.386370 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 14 01:22:04.386393 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 14 01:22:04.386422 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 14 01:22:04.386729 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 14 01:22:04.386754 kernel: GPT:25804799 != 125829119 Jan 14 01:22:04.386769 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 14 01:22:04.386810 kernel: GPT:25804799 != 125829119 Jan 14 01:22:04.386826 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 14 01:22:04.386841 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 14 01:22:04.386871 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 14 01:22:04.386886 kernel: device-mapper: uevent: version 1.0.3 Jan 14 01:22:04.386901 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 01:22:04.386916 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 14 01:22:04.386930 kernel: raid6: sse2x4 gen() 13256 MB/s Jan 14 01:22:04.386945 kernel: raid6: sse2x2 gen() 9521 MB/s Jan 14 01:22:04.386960 kernel: raid6: sse2x1 gen() 9909 MB/s Jan 14 01:22:04.386985 kernel: raid6: using algorithm sse2x4 gen() 13256 MB/s Jan 14 01:22:04.387000 kernel: raid6: .... xor() 7582 MB/s, rmw enabled Jan 14 01:22:04.387015 kernel: raid6: using ssse3x2 recovery algorithm Jan 14 01:22:04.387029 kernel: xor: automatically using best checksumming function avx Jan 14 01:22:04.387044 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 01:22:04.387058 kernel: BTRFS: device fsid 315c4ba2-2b68-4ff5-9a58-ddeab520c9ac devid 1 transid 33 /dev/mapper/usr (253:0) scanned by mount (194) Jan 14 01:22:04.387073 kernel: BTRFS info (device dm-0): first mount of filesystem 315c4ba2-2b68-4ff5-9a58-ddeab520c9ac Jan 14 01:22:04.387098 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 01:22:04.387113 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 01:22:04.387128 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 01:22:04.387142 kernel: loop: module loaded Jan 14 01:22:04.387157 kernel: loop0: detected capacity change from 0 to 100552 Jan 14 01:22:04.387172 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 01:22:04.387192 systemd[1]: Successfully made /usr/ read-only. 
Jan 14 01:22:04.387221 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:22:04.387237 systemd[1]: Detected virtualization kvm. Jan 14 01:22:04.387252 systemd[1]: Detected architecture x86-64. Jan 14 01:22:04.387267 systemd[1]: Running in initrd. Jan 14 01:22:04.387281 systemd[1]: No hostname configured, using default hostname. Jan 14 01:22:04.387308 systemd[1]: Hostname set to . Jan 14 01:22:04.387336 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 01:22:04.387350 systemd[1]: Queued start job for default target initrd.target. Jan 14 01:22:04.387365 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 01:22:04.387379 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:22:04.387407 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:22:04.387422 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 01:22:04.387449 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 01:22:04.387465 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 01:22:04.387481 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 01:22:04.387496 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:22:04.387511 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jan 14 01:22:04.387536 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 01:22:04.387552 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:22:04.387567 systemd[1]: Reached target slices.target - Slice Units. Jan 14 01:22:04.387582 systemd[1]: Reached target swap.target - Swaps. Jan 14 01:22:04.387597 systemd[1]: Reached target timers.target - Timer Units. Jan 14 01:22:04.387624 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 01:22:04.387639 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 01:22:04.387666 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:22:04.387682 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 01:22:04.387697 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 14 01:22:04.387712 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:22:04.387727 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 01:22:04.387742 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:22:04.387757 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:22:04.387798 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 01:22:04.387816 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 01:22:04.387832 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 01:22:04.387848 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 01:22:04.387864 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). 
Jan 14 01:22:04.387879 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 01:22:04.387894 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:22:04.387922 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:22:04.387938 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:22:04.387954 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 01:22:04.387979 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:22:04.387995 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 01:22:04.388057 systemd-journald[331]: Collecting audit messages is enabled. Jan 14 01:22:04.388103 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 01:22:04.388119 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 01:22:04.388135 kernel: Bridge firewalling registered Jan 14 01:22:04.388150 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 01:22:04.388166 systemd-journald[331]: Journal started Jan 14 01:22:04.388197 systemd-journald[331]: Runtime Journal (/run/log/journal/8c4b0c3776f34b28b4a72441f5a91189) is 4.7M, max 37.7M, 33M free. Jan 14 01:22:04.322198 systemd-modules-load[333]: Inserted module 'br_netfilter' Jan 14 01:22:04.398825 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 01:22:04.398872 kernel: audit: type=1130 audit(1768353724.390:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:04.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 01:22:04.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:04.405987 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:22:04.407899 kernel: audit: type=1130 audit(1768353724.400:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:04.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:04.413038 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 01:22:04.414199 kernel: audit: type=1130 audit(1768353724.407:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:04.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:04.422810 kernel: audit: type=1130 audit(1768353724.414:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:04.424760 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 01:22:04.428950 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 14 01:22:04.435954 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 01:22:04.442894 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 01:22:04.461467 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:22:04.462143 systemd-tmpfiles[352]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 01:22:04.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:04.471821 kernel: audit: type=1130 audit(1768353724.465:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:04.472085 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 01:22:04.475100 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:22:04.478684 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 01:22:04.466000 audit: BPF prog-id=6 op=LOAD Jan 14 01:22:04.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:04.482928 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:22:04.493766 kernel: audit: type=1334 audit(1768353724.466:7): prog-id=6 op=LOAD Jan 14 01:22:04.493838 kernel: audit: type=1130 audit(1768353724.476:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:04.493884 kernel: audit: type=1130 audit(1768353724.479:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:04.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:04.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:04.497962 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 01:22:04.501336 kernel: audit: type=1130 audit(1768353724.493:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:04.535172 dracut-cmdline[371]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=6d34ab71a3dc5a0ab37eb2c851228af18a1e24f648223df9a1099dbd7db2cfcf Jan 14 01:22:04.564413 systemd-resolved[364]: Positive Trust Anchors: Jan 14 01:22:04.565489 systemd-resolved[364]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 01:22:04.565501 systemd-resolved[364]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 01:22:04.565545 systemd-resolved[364]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 01:22:04.597499 systemd-resolved[364]: Defaulting to hostname 'linux'. Jan 14 01:22:04.600219 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 01:22:04.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:04.602300 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 01:22:04.681834 kernel: Loading iSCSI transport class v2.0-870. Jan 14 01:22:04.699821 kernel: iscsi: registered transport (tcp) Jan 14 01:22:04.730064 kernel: iscsi: registered transport (qla4xxx) Jan 14 01:22:04.730175 kernel: QLogic iSCSI HBA Driver Jan 14 01:22:04.767729 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:22:04.792930 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:22:04.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:04.796611 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 14 01:22:04.866655 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 01:22:04.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:04.870216 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 01:22:04.872952 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 01:22:04.922870 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 01:22:04.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:04.924000 audit: BPF prog-id=7 op=LOAD
Jan 14 01:22:04.924000 audit: BPF prog-id=8 op=LOAD
Jan 14 01:22:04.926100 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 01:22:04.962237 systemd-udevd[606]: Using default interface naming scheme 'v257'.
Jan 14 01:22:04.978948 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 01:22:04.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:04.983437 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 01:22:05.023393 dracut-pre-trigger[672]: rd.md=0: removing MD RAID activation
Jan 14 01:22:05.029987 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 01:22:05.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:05.031000 audit: BPF prog-id=9 op=LOAD
Jan 14 01:22:05.034267 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 01:22:05.071189 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 01:22:05.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:05.076992 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 01:22:05.100645 systemd-networkd[720]: lo: Link UP
Jan 14 01:22:05.100657 systemd-networkd[720]: lo: Gained carrier
Jan 14 01:22:05.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:05.102150 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 01:22:05.103300 systemd[1]: Reached target network.target - Network.
Jan 14 01:22:05.236335 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 01:22:05.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:05.241007 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 01:22:05.374564 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 14 01:22:05.402124 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 14 01:22:05.426640 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 14 01:22:05.443312 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 14 01:22:05.446612 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 01:22:05.482886 disk-uuid[775]: Primary Header is updated.
Jan 14 01:22:05.482886 disk-uuid[775]: Secondary Entries is updated.
Jan 14 01:22:05.482886 disk-uuid[775]: Secondary Header is updated.
Jan 14 01:22:05.538820 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 14 01:22:05.582872 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 01:22:05.611620 systemd-networkd[720]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 01:22:05.611634 systemd-networkd[720]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 01:22:05.612858 systemd-networkd[720]: eth0: Link UP
Jan 14 01:22:05.613295 systemd-networkd[720]: eth0: Gained carrier
Jan 14 01:22:05.613310 systemd-networkd[720]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 01:22:05.631074 systemd-networkd[720]: eth0: DHCPv4 address 10.230.32.214/30, gateway 10.230.32.213 acquired from 10.230.32.213
Jan 14 01:22:05.633525 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 01:22:05.633730 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 01:22:05.646530 kernel: kauditd_printk_skb: 12 callbacks suppressed
Jan 14 01:22:05.646563 kernel: audit: type=1131 audit(1768353725.634:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:05.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:05.635325 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 01:22:05.647809 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 01:22:05.657811 kernel: AES CTR mode by8 optimization enabled
Jan 14 01:22:05.800050 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 01:22:05.806813 kernel: audit: type=1130 audit(1768353725.800:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:05.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:05.807661 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 01:22:05.814113 kernel: audit: type=1130 audit(1768353725.807:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:05.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:05.809494 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 01:22:05.814824 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 01:22:05.816461 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 01:22:05.819620 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 01:22:05.853778 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 01:22:05.860157 kernel: audit: type=1130 audit(1768353725.853:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:05.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:06.552295 disk-uuid[776]: Warning: The kernel is still using the old partition table.
Jan 14 01:22:06.552295 disk-uuid[776]: The new table will be used at the next reboot or after you
Jan 14 01:22:06.552295 disk-uuid[776]: run partprobe(8) or kpartx(8)
Jan 14 01:22:06.552295 disk-uuid[776]: The operation has completed successfully.
Jan 14 01:22:06.560207 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 01:22:06.571318 kernel: audit: type=1130 audit(1768353726.560:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:06.571356 kernel: audit: type=1131 audit(1768353726.560:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:06.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:06.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:06.560476 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 01:22:06.564027 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 01:22:06.609826 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (860)
Jan 14 01:22:06.619231 kernel: BTRFS info (device vda6): first mount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 01:22:06.619297 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:22:06.626636 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 01:22:06.626676 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 01:22:06.635856 kernel: BTRFS info (device vda6): last unmount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 01:22:06.637185 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 01:22:06.643410 kernel: audit: type=1130 audit(1768353726.637:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:06.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:06.639755 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 01:22:06.862380 ignition[879]: Ignition 2.24.0
Jan 14 01:22:06.862406 ignition[879]: Stage: fetch-offline
Jan 14 01:22:06.862523 ignition[879]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:22:06.866086 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 01:22:06.873046 kernel: audit: type=1130 audit(1768353726.866:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:06.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:06.862579 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 14 01:22:06.869278 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 14 01:22:06.862738 ignition[879]: parsed url from cmdline: ""
Jan 14 01:22:06.862744 ignition[879]: no config URL provided
Jan 14 01:22:06.862754 ignition[879]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 01:22:06.862773 ignition[879]: no config at "/usr/lib/ignition/user.ign"
Jan 14 01:22:06.862797 ignition[879]: failed to fetch config: resource requires networking
Jan 14 01:22:06.864006 ignition[879]: Ignition finished successfully
Jan 14 01:22:06.903526 ignition[886]: Ignition 2.24.0
Jan 14 01:22:06.903558 ignition[886]: Stage: fetch
Jan 14 01:22:06.903844 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:22:06.903862 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 14 01:22:06.904022 ignition[886]: parsed url from cmdline: ""
Jan 14 01:22:06.904029 ignition[886]: no config URL provided
Jan 14 01:22:06.904055 ignition[886]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 01:22:06.904069 ignition[886]: no config at "/usr/lib/ignition/user.ign"
Jan 14 01:22:06.904215 ignition[886]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 14 01:22:06.904254 ignition[886]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 14 01:22:06.904360 ignition[886]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 14 01:22:06.920060 ignition[886]: GET result: OK
Jan 14 01:22:06.920868 ignition[886]: parsing config with SHA512: ea6764573ee93e0612d7e6ca98ac83d0ddf7e5c46d0e1e2c982882064a64b245dcf23e6f74ed3c84b1e3e8d03be71c5b6010ced7f30df349b7dcbe3f5a2c6aec
Jan 14 01:22:06.929387 unknown[886]: fetched base config from "system"
Jan 14 01:22:06.930291 unknown[886]: fetched base config from "system"
Jan 14 01:22:06.931104 unknown[886]: fetched user config from "openstack"
Jan 14 01:22:06.932328 ignition[886]: fetch: fetch complete
Jan 14 01:22:06.932337 ignition[886]: fetch: fetch passed
Jan 14 01:22:06.932409 ignition[886]: Ignition finished successfully
Jan 14 01:22:06.937033 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 14 01:22:06.943401 kernel: audit: type=1130 audit(1768353726.937:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:06.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:06.939687 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 01:22:06.975654 ignition[892]: Ignition 2.24.0
Jan 14 01:22:06.975679 ignition[892]: Stage: kargs
Jan 14 01:22:06.975939 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:22:06.975958 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 14 01:22:06.977932 ignition[892]: kargs: kargs passed
Jan 14 01:22:06.978022 ignition[892]: Ignition finished successfully
Jan 14 01:22:06.982197 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 01:22:06.988435 kernel: audit: type=1130 audit(1768353726.982:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:06.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:06.985960 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 01:22:07.017281 ignition[898]: Ignition 2.24.0
Jan 14 01:22:07.017302 ignition[898]: Stage: disks
Jan 14 01:22:07.017575 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Jan 14 01:22:07.017593 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 14 01:22:07.022299 ignition[898]: disks: disks passed
Jan 14 01:22:07.023000 ignition[898]: Ignition finished successfully
Jan 14 01:22:07.024865 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 01:22:07.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:07.025956 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 01:22:07.027141 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 01:22:07.028720 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 01:22:07.030287 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 01:22:07.031620 systemd[1]: Reached target basic.target - Basic System.
Jan 14 01:22:07.034505 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 01:22:07.074694 systemd-fsck[906]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks
Jan 14 01:22:07.081063 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 01:22:07.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:07.083333 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 01:22:07.221057 systemd-networkd[720]: eth0: Gained IPv6LL
Jan 14 01:22:07.233809 kernel: EXT4-fs (vda9): mounted filesystem 6efdc615-0e3c-4caf-8d0b-1f38e5c59ef0 r/w with ordered data mode. Quota mode: none.
Jan 14 01:22:07.235065 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 01:22:07.236362 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 01:22:07.239733 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 01:22:07.243898 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 01:22:07.247329 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 14 01:22:07.252408 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Jan 14 01:22:07.255175 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 01:22:07.255223 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 01:22:07.260056 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 01:22:07.264820 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (914)
Jan 14 01:22:07.265035 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 01:22:07.268862 kernel: BTRFS info (device vda6): first mount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 01:22:07.268910 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:22:07.276157 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 01:22:07.276214 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 01:22:07.281246 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 01:22:07.358817 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 14 01:22:07.518083 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 01:22:07.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:07.522949 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 01:22:07.527558 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 01:22:07.556812 kernel: BTRFS info (device vda6): last unmount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 01:22:07.570218 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 14 01:22:07.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:07.590425 ignition[1017]: INFO : Ignition 2.24.0
Jan 14 01:22:07.590425 ignition[1017]: INFO : Stage: mount
Jan 14 01:22:07.592246 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 01:22:07.592246 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 14 01:22:07.592246 ignition[1017]: INFO : mount: mount passed
Jan 14 01:22:07.592246 ignition[1017]: INFO : Ignition finished successfully
Jan 14 01:22:07.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:07.593480 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 14 01:22:07.596334 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 01:22:08.393827 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 14 01:22:08.728331 systemd-networkd[720]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8835:24:19ff:fee6:20d6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8835:24:19ff:fee6:20d6/64 assigned by NDisc.
Jan 14 01:22:08.728344 systemd-networkd[720]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 14 01:22:10.404827 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 14 01:22:14.418904 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 14 01:22:14.425624 coreos-metadata[916]: Jan 14 01:22:14.425 WARN failed to locate config-drive, using the metadata service API instead
Jan 14 01:22:14.450892 coreos-metadata[916]: Jan 14 01:22:14.450 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 14 01:22:14.464369 coreos-metadata[916]: Jan 14 01:22:14.464 INFO Fetch successful
Jan 14 01:22:14.465422 coreos-metadata[916]: Jan 14 01:22:14.465 INFO wrote hostname srv-aufav.gb1.brightbox.com to /sysroot/etc/hostname
Jan 14 01:22:14.467658 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Jan 14 01:22:14.482198 kernel: kauditd_printk_skb: 5 callbacks suppressed
Jan 14 01:22:14.482234 kernel: audit: type=1130 audit(1768353734.469:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:14.482278 kernel: audit: type=1131 audit(1768353734.469:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:14.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:14.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:14.467898 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Jan 14 01:22:14.472156 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 14 01:22:14.497712 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 01:22:14.523818 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1032)
Jan 14 01:22:14.527801 kernel: BTRFS info (device vda6): first mount of filesystem 87cf3d96-2540-4b91-98c0-7ae2e759a282
Jan 14 01:22:14.527844 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 01:22:14.534511 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 01:22:14.534574 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 01:22:14.537563 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 01:22:14.571408 ignition[1049]: INFO : Ignition 2.24.0
Jan 14 01:22:14.574066 ignition[1049]: INFO : Stage: files
Jan 14 01:22:14.574066 ignition[1049]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 14 01:22:14.574066 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 14 01:22:14.576592 ignition[1049]: DEBUG : files: compiled without relabeling support, skipping
Jan 14 01:22:14.577472 ignition[1049]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 14 01:22:14.577472 ignition[1049]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 14 01:22:14.584002 ignition[1049]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 14 01:22:14.585230 ignition[1049]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 14 01:22:14.587146 unknown[1049]: wrote ssh authorized keys file for user: core
Jan 14 01:22:14.588353 ignition[1049]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 14 01:22:14.590803 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 14 01:22:14.590803 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 14 01:22:14.780030 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 14 01:22:15.079092 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 14 01:22:15.079092 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 14 01:22:15.082072 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 14 01:22:15.082072 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 01:22:15.082072 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 14 01:22:15.082072 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 01:22:15.082072 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 14 01:22:15.082072 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 01:22:15.082072 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 14 01:22:15.090349 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 01:22:15.090349 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 14 01:22:15.090349 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 14 01:22:15.097799 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 14 01:22:15.097799 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 14 01:22:15.100639 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 14 01:22:15.378485 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 14 01:22:17.135950 ignition[1049]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 14 01:22:17.135950 ignition[1049]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 14 01:22:17.139428 ignition[1049]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 01:22:17.140778 ignition[1049]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 14 01:22:17.140778 ignition[1049]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 14 01:22:17.140778 ignition[1049]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 14 01:22:17.140778 ignition[1049]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 14 01:22:17.153408 kernel: audit: type=1130 audit(1768353737.146:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:17.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:17.153594 ignition[1049]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 01:22:17.153594 ignition[1049]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 14 01:22:17.153594 ignition[1049]: INFO : files: files passed
Jan 14 01:22:17.153594 ignition[1049]: INFO : Ignition finished successfully
Jan 14 01:22:17.144105 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 14 01:22:17.153027 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 14 01:22:17.157020 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 14 01:22:17.183244 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 14 01:22:17.184912 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 14 01:22:17.197912 kernel: audit: type=1130 audit(1768353737.185:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:17.197966 kernel: audit: type=1131 audit(1768353737.185:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:17.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:17.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:17.201231 initrd-setup-root-after-ignition[1081]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 01:22:17.201231 initrd-setup-root-after-ignition[1081]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 01:22:17.203760 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 14 01:22:17.206176 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 14 01:22:17.213230 kernel: audit: type=1130 audit(1768353737.206:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:17.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:17.207662 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 14 01:22:17.215674 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 14 01:22:17.270751 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 14 01:22:17.270970 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 14 01:22:17.283255 kernel: audit: type=1130 audit(1768353737.271:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:17.283302 kernel: audit: type=1131 audit(1768353737.271:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:17.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:17.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:17.272924 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 14 01:22:17.283957 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 14 01:22:17.285709 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 14 01:22:17.287122 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 14 01:22:17.332154 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 01:22:17.338708 kernel: audit: type=1130 audit(1768353737.332:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:17.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:17.335993 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 14 01:22:17.371181 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 01:22:17.371588 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 14 01:22:17.373287 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 01:22:17.375111 systemd[1]: Stopped target timers.target - Timer Units.
Jan 14 01:22:17.376529 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 14 01:22:17.383531 kernel: audit: type=1131 audit(1768353737.377:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:17.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:17.376713 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 14 01:22:17.383453 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 14 01:22:17.384420 systemd[1]: Stopped target basic.target - Basic System.
Jan 14 01:22:17.385876 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 14 01:22:17.387225 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 01:22:17.388663 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 14 01:22:17.390362 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 01:22:17.391913 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 14 01:22:17.393519 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 01:22:17.395137 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 01:22:17.396474 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 01:22:17.398087 systemd[1]: Stopped target swap.target - Swaps. Jan 14 01:22:17.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.399409 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 01:22:17.399605 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 01:22:17.401318 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:22:17.402290 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:22:17.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.403533 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 01:22:17.404023 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:22:17.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.405214 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 01:22:17.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.405478 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 01:22:17.407356 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jan 14 01:22:17.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.407551 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 01:22:17.409407 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 01:22:17.409580 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 01:22:17.412226 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 01:22:17.414487 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 01:22:17.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.414690 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:22:17.419604 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 01:22:17.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.420870 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 01:22:17.421141 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 01:22:17.422551 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 01:22:17.424832 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 01:22:17.425932 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 01:22:17.427929 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 14 01:22:17.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.445385 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 01:22:17.446957 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 01:22:17.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.461345 ignition[1105]: INFO : Ignition 2.24.0 Jan 14 01:22:17.461345 ignition[1105]: INFO : Stage: umount Jan 14 01:22:17.463163 ignition[1105]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 01:22:17.463163 ignition[1105]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 01:22:17.466202 ignition[1105]: INFO : umount: umount passed Jan 14 01:22:17.466202 ignition[1105]: INFO : Ignition finished successfully Jan 14 01:22:17.467465 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 01:22:17.467720 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 01:22:17.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.471453 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 01:22:17.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:17.471671 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 01:22:17.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.472840 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 01:22:17.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.472921 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 01:22:17.474916 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 01:22:17.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.475012 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 01:22:17.476292 systemd[1]: Stopped target network.target - Network. Jan 14 01:22:17.479644 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 01:22:17.479728 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 01:22:17.480503 systemd[1]: Stopped target paths.target - Path Units. Jan 14 01:22:17.481125 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 01:22:17.482550 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:22:17.483615 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 01:22:17.484235 systemd[1]: Stopped target sockets.target - Socket Units. 
Jan 14 01:22:17.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.486815 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 01:22:17.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.486922 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 01:22:17.488113 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 01:22:17.488176 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 01:22:17.489609 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 01:22:17.489673 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:22:17.491355 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 01:22:17.491454 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 01:22:17.492762 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 01:22:17.492857 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 01:22:17.494302 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 01:22:17.496358 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 01:22:17.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.510000 audit: BPF prog-id=6 op=UNLOAD Jan 14 01:22:17.500796 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 01:22:17.508368 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Jan 14 01:22:17.508706 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 01:22:17.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.511864 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 01:22:17.512045 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 01:22:17.515000 audit: BPF prog-id=9 op=UNLOAD Jan 14 01:22:17.516327 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 01:22:17.523047 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 01:22:17.523123 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:22:17.525666 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 01:22:17.527129 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 01:22:17.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.527222 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 01:22:17.529366 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 01:22:17.529473 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 01:22:17.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:17.530224 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 01:22:17.530310 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 01:22:17.534683 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 01:22:17.547707 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 01:22:17.548728 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:22:17.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.552919 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 01:22:17.553087 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 01:22:17.555662 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 01:22:17.556305 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:22:17.558354 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 01:22:17.559174 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 01:22:17.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.561087 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 01:22:17.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.561168 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jan 14 01:22:17.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.562067 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 01:22:17.562142 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 01:22:17.569975 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 01:22:17.570758 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 01:22:17.571910 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:22:17.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.573416 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 01:22:17.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.573488 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 01:22:17.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.576012 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 01:22:17.576138 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:22:17.580005 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 14 01:22:17.582969 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 01:22:17.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.593766 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 01:22:17.595060 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 01:22:17.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.630845 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 01:22:17.631117 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 01:22:17.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.633364 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 01:22:17.634361 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 01:22:17.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:17.634467 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 01:22:17.637279 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 01:22:17.661820 systemd[1]: Switching root. 
Jan 14 01:22:17.701616 systemd-journald[331]: Journal stopped Jan 14 01:22:19.433037 systemd-journald[331]: Received SIGTERM from PID 1 (systemd). Jan 14 01:22:19.433162 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 01:22:19.433203 kernel: SELinux: policy capability open_perms=1 Jan 14 01:22:19.433248 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 01:22:19.433294 kernel: SELinux: policy capability always_check_network=0 Jan 14 01:22:19.433316 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 01:22:19.433342 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 01:22:19.433369 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 01:22:19.433400 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 01:22:19.433427 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 01:22:19.433449 systemd[1]: Successfully loaded SELinux policy in 72.994ms. Jan 14 01:22:19.433490 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.424ms. Jan 14 01:22:19.433515 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 01:22:19.433538 systemd[1]: Detected virtualization kvm. Jan 14 01:22:19.433560 systemd[1]: Detected architecture x86-64. Jan 14 01:22:19.433580 systemd[1]: Detected first boot. Jan 14 01:22:19.433602 systemd[1]: Hostname set to . Jan 14 01:22:19.433624 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 01:22:19.433658 zram_generator::config[1148]: No configuration found. 
Jan 14 01:22:19.433691 kernel: Guest personality initialized and is inactive Jan 14 01:22:19.433713 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 14 01:22:19.433733 kernel: Initialized host personality Jan 14 01:22:19.433753 kernel: NET: Registered PF_VSOCK protocol family Jan 14 01:22:19.433775 systemd[1]: Populated /etc with preset unit settings. Jan 14 01:22:19.435839 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 01:22:19.435885 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 01:22:19.435909 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 01:22:19.435939 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 01:22:19.435962 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 01:22:19.435984 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 01:22:19.436014 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 01:22:19.436048 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 01:22:19.436072 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 01:22:19.436094 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 01:22:19.436117 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 01:22:19.436139 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 01:22:19.436161 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 01:22:19.436183 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 01:22:19.436235 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Jan 14 01:22:19.436262 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 01:22:19.436285 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 01:22:19.436307 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 14 01:22:19.436329 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 01:22:19.436369 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 01:22:19.436392 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 01:22:19.436414 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 01:22:19.436436 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 01:22:19.436458 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 01:22:19.436480 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 01:22:19.436503 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 01:22:19.436536 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 01:22:19.436561 systemd[1]: Reached target slices.target - Slice Units. Jan 14 01:22:19.436584 systemd[1]: Reached target swap.target - Swaps. Jan 14 01:22:19.436606 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 01:22:19.436628 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 01:22:19.436650 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 01:22:19.436672 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 01:22:19.436694 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. 
Jan 14 01:22:19.436729 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 01:22:19.436753 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 01:22:19.436775 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 01:22:19.436829 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 01:22:19.436861 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 01:22:19.436885 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 01:22:19.436908 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 01:22:19.436944 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 01:22:19.436979 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 01:22:19.437002 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:22:19.437024 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 01:22:19.437046 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 01:22:19.437067 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 01:22:19.437089 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 01:22:19.437124 systemd[1]: Reached target machines.target - Containers. Jan 14 01:22:19.437148 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 01:22:19.437170 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 01:22:19.437192 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Jan 14 01:22:19.437213 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 01:22:19.437247 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 01:22:19.437282 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 01:22:19.437306 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 01:22:19.437328 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 01:22:19.437349 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 01:22:19.437370 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 01:22:19.437392 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 01:22:19.437414 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 01:22:19.437449 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 01:22:19.437473 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 01:22:19.437495 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 01:22:19.437528 kernel: fuse: init (API version 7.41) Jan 14 01:22:19.437551 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 01:22:19.437573 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 01:22:19.437594 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 01:22:19.437616 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 01:22:19.437637 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Jan 14 01:22:19.437659 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 01:22:19.437682 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 01:22:19.437717 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 01:22:19.437742 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 01:22:19.437763 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 01:22:19.437799 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 01:22:19.437824 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 01:22:19.437860 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 01:22:19.437884 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 01:22:19.437905 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 01:22:19.437927 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 01:22:19.437949 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 01:22:19.437971 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 01:22:19.438006 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 01:22:19.438040 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 01:22:19.438064 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 01:22:19.438086 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 01:22:19.438108 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 01:22:19.438129 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 01:22:19.438163 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 14 01:22:19.438186 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 01:22:19.438209 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 01:22:19.438241 kernel: ACPI: bus type drm_connector registered Jan 14 01:22:19.438264 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 01:22:19.438299 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 01:22:19.438323 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 01:22:19.438352 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 01:22:19.438391 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 01:22:19.438415 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 01:22:19.438485 systemd-journald[1242]: Collecting audit messages is enabled. Jan 14 01:22:19.438527 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 01:22:19.438552 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 01:22:19.438574 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 01:22:19.438609 systemd-journald[1242]: Journal started Jan 14 01:22:19.438644 systemd-journald[1242]: Runtime Journal (/run/log/journal/8c4b0c3776f34b28b4a72441f5a91189) is 4.7M, max 37.7M, 33M free. Jan 14 01:22:19.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:19.440843 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 14 01:22:19.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.239000 audit: BPF prog-id=14 op=UNLOAD
Jan 14 01:22:19.239000 audit: BPF prog-id=13 op=UNLOAD
Jan 14 01:22:19.250000 audit: BPF prog-id=15 op=LOAD
Jan 14 01:22:19.250000 audit: BPF prog-id=16 op=LOAD
Jan 14 01:22:19.250000 audit: BPF prog-id=17 op=LOAD
Jan 14 01:22:19.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.427000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 14 01:22:19.427000 audit[1242]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdbef65fc0 a2=4000 a3=0 items=0 ppid=1 pid=1242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:22:19.427000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 14 01:22:18.919240 systemd[1]: Queued start job for default target multi-user.target.
Jan 14 01:22:19.444818 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 01:22:18.933347 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 14 01:22:18.934338 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 14 01:22:19.451829 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 14 01:22:19.455810 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 01:22:19.463821 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 14 01:22:19.466815 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 01:22:19.470817 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 01:22:19.481861 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 14 01:22:19.491345 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 14 01:22:19.491457 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 01:22:19.501275 kernel: kauditd_printk_skb: 81 callbacks suppressed
Jan 14 01:22:19.501361 kernel: audit: type=1130 audit(1768353739.494:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.509134 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 14 01:22:19.518531 kernel: audit: type=1130 audit(1768353739.509:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.523080 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 14 01:22:19.526986 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 14 01:22:19.531097 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 14 01:22:19.547846 kernel: loop1: detected capacity change from 0 to 224512
Jan 14 01:22:19.566806 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 01:22:19.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.571868 kernel: audit: type=1130 audit(1768353739.566:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.572225 systemd-journald[1242]: Time spent on flushing to /var/log/journal/8c4b0c3776f34b28b4a72441f5a91189 is 103.535ms for 1297 entries.
Jan 14 01:22:19.572225 systemd-journald[1242]: System Journal (/var/log/journal/8c4b0c3776f34b28b4a72441f5a91189) is 8M, max 588.1M, 580.1M free.
Jan 14 01:22:19.714582 systemd-journald[1242]: Received client request to flush runtime journal.
Jan 14 01:22:19.714690 kernel: audit: type=1130 audit(1768353739.609:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.714751 kernel: loop2: detected capacity change from 0 to 111560
Jan 14 01:22:19.714826 kernel: audit: type=1130 audit(1768353739.631:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.714877 kernel: audit: type=1334 audit(1768353739.636:132): prog-id=18 op=LOAD
Jan 14 01:22:19.714921 kernel: audit: type=1334 audit(1768353739.637:133): prog-id=19 op=LOAD
Jan 14 01:22:19.714964 kernel: audit: type=1334 audit(1768353739.637:134): prog-id=20 op=LOAD
Jan 14 01:22:19.715011 kernel: audit: type=1334 audit(1768353739.645:135): prog-id=21 op=LOAD
Jan 14 01:22:19.715046 kernel: audit: type=1130 audit(1768353739.687:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.715081 kernel: loop3: detected capacity change from 0 to 50784
Jan 14 01:22:19.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.636000 audit: BPF prog-id=18 op=LOAD
Jan 14 01:22:19.637000 audit: BPF prog-id=19 op=LOAD
Jan 14 01:22:19.637000 audit: BPF prog-id=20 op=LOAD
Jan 14 01:22:19.645000 audit: BPF prog-id=21 op=LOAD
Jan 14 01:22:19.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.697000 audit: BPF prog-id=22 op=LOAD
Jan 14 01:22:19.697000 audit: BPF prog-id=23 op=LOAD
Jan 14 01:22:19.697000 audit: BPF prog-id=24 op=LOAD
Jan 14 01:22:19.703000 audit: BPF prog-id=25 op=LOAD
Jan 14 01:22:19.703000 audit: BPF prog-id=26 op=LOAD
Jan 14 01:22:19.703000 audit: BPF prog-id=27 op=LOAD
Jan 14 01:22:19.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.608909 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 14 01:22:19.631205 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 14 01:22:19.640702 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 14 01:22:19.647746 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 01:22:19.653003 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 01:22:19.687397 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 01:22:19.700624 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 14 01:22:19.706720 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 14 01:22:19.718533 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 14 01:22:19.753421 systemd-tmpfiles[1300]: ACLs are not supported, ignoring.
Jan 14 01:22:19.753992 systemd-tmpfiles[1300]: ACLs are not supported, ignoring.
Jan 14 01:22:19.760818 kernel: loop4: detected capacity change from 0 to 8
Jan 14 01:22:19.773969 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 01:22:19.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.789814 kernel: loop5: detected capacity change from 0 to 224512
Jan 14 01:22:19.812811 kernel: loop6: detected capacity change from 0 to 111560
Jan 14 01:22:19.827682 systemd-nsresourced[1306]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 14 01:22:19.830984 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 14 01:22:19.833816 kernel: loop7: detected capacity change from 0 to 50784
Jan 14 01:22:19.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.849324 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 14 01:22:19.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:19.856824 kernel: loop1: detected capacity change from 0 to 8
Jan 14 01:22:19.857917 (sd-merge)[1312]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-openstack.raw'.
Jan 14 01:22:19.863250 (sd-merge)[1312]: Merged extensions into '/usr'.
Jan 14 01:22:19.868579 systemd[1]: Reload requested from client PID 1266 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 14 01:22:19.868604 systemd[1]: Reloading...
Jan 14 01:22:20.024853 zram_generator::config[1362]: No configuration found.
Jan 14 01:22:20.033092 systemd-oomd[1298]: No swap; memory pressure usage will be degraded
Jan 14 01:22:20.102919 systemd-resolved[1299]: Positive Trust Anchors:
Jan 14 01:22:20.102955 systemd-resolved[1299]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 01:22:20.102964 systemd-resolved[1299]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 14 01:22:20.103009 systemd-resolved[1299]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 01:22:20.133254 systemd-resolved[1299]: Using system hostname 'srv-aufav.gb1.brightbox.com'.
Jan 14 01:22:20.368216 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 14 01:22:20.368506 systemd[1]: Reloading finished in 499 ms.
Jan 14 01:22:20.391015 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 14 01:22:20.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:20.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:20.392219 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 01:22:20.393399 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 14 01:22:20.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:20.398869 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 01:22:20.401680 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 14 01:22:20.408131 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 14 01:22:20.422654 systemd[1]: Starting ensure-sysext.service...
Jan 14 01:22:20.430022 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 01:22:20.434000 audit: BPF prog-id=28 op=LOAD
Jan 14 01:22:20.434000 audit: BPF prog-id=15 op=UNLOAD
Jan 14 01:22:20.434000 audit: BPF prog-id=29 op=LOAD
Jan 14 01:22:20.434000 audit: BPF prog-id=30 op=LOAD
Jan 14 01:22:20.434000 audit: BPF prog-id=16 op=UNLOAD
Jan 14 01:22:20.434000 audit: BPF prog-id=17 op=UNLOAD
Jan 14 01:22:20.435000 audit: BPF prog-id=31 op=LOAD
Jan 14 01:22:20.435000 audit: BPF prog-id=25 op=UNLOAD
Jan 14 01:22:20.437000 audit: BPF prog-id=32 op=LOAD
Jan 14 01:22:20.437000 audit: BPF prog-id=33 op=LOAD
Jan 14 01:22:20.437000 audit: BPF prog-id=26 op=UNLOAD
Jan 14 01:22:20.437000 audit: BPF prog-id=27 op=UNLOAD
Jan 14 01:22:20.441000 audit: BPF prog-id=34 op=LOAD
Jan 14 01:22:20.441000 audit: BPF prog-id=18 op=UNLOAD
Jan 14 01:22:20.441000 audit: BPF prog-id=35 op=LOAD
Jan 14 01:22:20.441000 audit: BPF prog-id=36 op=LOAD
Jan 14 01:22:20.441000 audit: BPF prog-id=19 op=UNLOAD
Jan 14 01:22:20.441000 audit: BPF prog-id=20 op=UNLOAD
Jan 14 01:22:20.444000 audit: BPF prog-id=37 op=LOAD
Jan 14 01:22:20.444000 audit: BPF prog-id=21 op=UNLOAD
Jan 14 01:22:20.447000 audit: BPF prog-id=38 op=LOAD
Jan 14 01:22:20.447000 audit: BPF prog-id=22 op=UNLOAD
Jan 14 01:22:20.447000 audit: BPF prog-id=39 op=LOAD
Jan 14 01:22:20.447000 audit: BPF prog-id=40 op=LOAD
Jan 14 01:22:20.447000 audit: BPF prog-id=23 op=UNLOAD
Jan 14 01:22:20.447000 audit: BPF prog-id=24 op=UNLOAD
Jan 14 01:22:20.452640 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 14 01:22:20.454353 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 14 01:22:20.477007 systemd[1]: Reload requested from client PID 1411 ('systemctl') (unit ensure-sysext.service)...
Jan 14 01:22:20.477042 systemd[1]: Reloading...
Jan 14 01:22:20.504450 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 14 01:22:20.504508 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 14 01:22:20.511108 systemd-tmpfiles[1412]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 14 01:22:20.518281 systemd-tmpfiles[1412]: ACLs are not supported, ignoring.
Jan 14 01:22:20.518398 systemd-tmpfiles[1412]: ACLs are not supported, ignoring.
Jan 14 01:22:20.530909 systemd-tmpfiles[1412]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 01:22:20.530929 systemd-tmpfiles[1412]: Skipping /boot
Jan 14 01:22:20.553013 systemd-tmpfiles[1412]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 01:22:20.553034 systemd-tmpfiles[1412]: Skipping /boot
Jan 14 01:22:20.607831 zram_generator::config[1442]: No configuration found.
Jan 14 01:22:20.904977 systemd[1]: Reloading finished in 427 ms.
Jan 14 01:22:20.930725 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 14 01:22:20.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:20.935000 audit: BPF prog-id=41 op=LOAD
Jan 14 01:22:20.935000 audit: BPF prog-id=31 op=UNLOAD
Jan 14 01:22:20.935000 audit: BPF prog-id=42 op=LOAD
Jan 14 01:22:20.936000 audit: BPF prog-id=43 op=LOAD
Jan 14 01:22:20.936000 audit: BPF prog-id=32 op=UNLOAD
Jan 14 01:22:20.936000 audit: BPF prog-id=33 op=UNLOAD
Jan 14 01:22:20.937000 audit: BPF prog-id=44 op=LOAD
Jan 14 01:22:20.937000 audit: BPF prog-id=37 op=UNLOAD
Jan 14 01:22:20.938000 audit: BPF prog-id=45 op=LOAD
Jan 14 01:22:20.938000 audit: BPF prog-id=28 op=UNLOAD
Jan 14 01:22:20.939000 audit: BPF prog-id=46 op=LOAD
Jan 14 01:22:20.939000 audit: BPF prog-id=47 op=LOAD
Jan 14 01:22:20.939000 audit: BPF prog-id=29 op=UNLOAD
Jan 14 01:22:20.939000 audit: BPF prog-id=30 op=UNLOAD
Jan 14 01:22:20.939000 audit: BPF prog-id=48 op=LOAD
Jan 14 01:22:20.939000 audit: BPF prog-id=38 op=UNLOAD
Jan 14 01:22:20.939000 audit: BPF prog-id=49 op=LOAD
Jan 14 01:22:20.940000 audit: BPF prog-id=50 op=LOAD
Jan 14 01:22:20.940000 audit: BPF prog-id=39 op=UNLOAD
Jan 14 01:22:20.940000 audit: BPF prog-id=40 op=UNLOAD
Jan 14 01:22:20.941000 audit: BPF prog-id=51 op=LOAD
Jan 14 01:22:20.941000 audit: BPF prog-id=34 op=UNLOAD
Jan 14 01:22:20.941000 audit: BPF prog-id=52 op=LOAD
Jan 14 01:22:20.941000 audit: BPF prog-id=53 op=LOAD
Jan 14 01:22:20.941000 audit: BPF prog-id=35 op=UNLOAD
Jan 14 01:22:20.941000 audit: BPF prog-id=36 op=UNLOAD
Jan 14 01:22:20.953577 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 01:22:20.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:20.966923 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 01:22:20.969247 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 14 01:22:20.977433 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 14 01:22:20.983322 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 14 01:22:20.984000 audit: BPF prog-id=54 op=LOAD
Jan 14 01:22:20.984000 audit: BPF prog-id=55 op=LOAD
Jan 14 01:22:20.985000 audit: BPF prog-id=7 op=UNLOAD
Jan 14 01:22:20.985000 audit: BPF prog-id=8 op=UNLOAD
Jan 14 01:22:20.988534 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 01:22:20.992352 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 14 01:22:20.998798 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 01:22:20.999086 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 01:22:21.002212 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 01:22:21.011740 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 01:22:21.021287 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 01:22:21.023186 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 01:22:21.023497 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 01:22:21.023679 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 01:22:21.023935 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 01:22:21.031414 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 01:22:21.031700 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 01:22:21.033054 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 01:22:21.033330 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 01:22:21.033485 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 01:22:21.033629 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 01:22:21.058347 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 01:22:21.058690 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 01:22:21.066544 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 01:22:21.070944 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 01:22:21.072032 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 01:22:21.072206 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 01:22:21.072402 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 01:22:21.074612 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 01:22:21.080767 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 01:22:21.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:21.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:21.092235 systemd[1]: Finished ensure-sysext.service.
Jan 14 01:22:21.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:21.105000 audit: BPF prog-id=56 op=LOAD
Jan 14 01:22:21.109963 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 14 01:22:21.113468 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 01:22:21.113000 audit[1510]: SYSTEM_BOOT pid=1510 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:21.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:21.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:21.115229 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 01:22:21.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:21.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:21.116920 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 01:22:21.117738 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 01:22:21.120036 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 01:22:21.120867 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 01:22:21.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:21.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:21.134251 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 14 01:22:21.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:21.139545 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 01:22:21.139775 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 01:22:21.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:21.145852 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 14 01:22:21.173335 systemd-udevd[1508]: Using default interface naming scheme 'v257'.
Jan 14 01:22:21.224096 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 14 01:22:21.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:21.226909 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 14 01:22:21.231000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 01:22:21.231000 audit[1544]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffefe731670 a2=420 a3=0 items=0 ppid=1504 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:21.231000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:22:21.232298 augenrules[1544]: No rules Jan 14 01:22:21.236674 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:22:21.237129 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 01:22:21.252586 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 14 01:22:21.254628 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 01:22:21.264540 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 01:22:21.270027 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 01:22:21.363585 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 14 01:22:21.497998 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 01:22:21.502692 systemd-networkd[1555]: lo: Link UP Jan 14 01:22:21.502706 systemd-networkd[1555]: lo: Gained carrier Jan 14 01:22:21.507057 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 01:22:21.508146 systemd[1]: Reached target network.target - Network. Jan 14 01:22:21.512599 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 01:22:21.520354 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 14 01:22:21.565924 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 01:22:21.621094 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 14 01:22:21.640922 kernel: ACPI: button: Power Button [PWRF] Jan 14 01:22:21.701685 systemd-networkd[1555]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:22:21.701700 systemd-networkd[1555]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 01:22:21.705126 systemd-networkd[1555]: eth0: Link UP Jan 14 01:22:21.705417 systemd-networkd[1555]: eth0: Gained carrier Jan 14 01:22:21.705439 systemd-networkd[1555]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 01:22:21.734879 systemd-networkd[1555]: eth0: DHCPv4 address 10.230.32.214/30, gateway 10.230.32.213 acquired from 10.230.32.213 Jan 14 01:22:21.735909 systemd-timesyncd[1527]: Network configuration changed, trying to establish connection. Jan 14 01:22:21.770821 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 01:22:21.782120 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 01:22:21.816815 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 14 01:22:21.823830 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 14 01:22:21.868898 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 01:22:21.931636 ldconfig[1506]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 01:22:21.940297 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Jan 14 01:22:21.947111 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 01:22:21.972225 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 01:22:21.974507 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 01:22:21.975503 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 01:22:21.976943 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 01:22:21.978479 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 01:22:21.980182 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 01:22:21.982004 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 01:22:21.983944 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 01:22:21.985983 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 01:22:21.986703 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 01:22:21.987957 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 01:22:21.988121 systemd[1]: Reached target paths.target - Path Units. Jan 14 01:22:21.999248 systemd[1]: Reached target timers.target - Timer Units. Jan 14 01:22:22.001624 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 01:22:22.006286 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 01:22:22.015033 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 01:22:22.016837 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Jan 14 01:22:22.019197 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 01:22:22.030582 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 01:22:22.031760 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 01:22:22.034298 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 01:22:22.043718 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 01:22:22.047097 systemd[1]: Reached target basic.target - Basic System. Jan 14 01:22:22.050051 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:22:22.050238 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 01:22:22.053113 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 01:22:22.057206 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 01:22:22.065239 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 01:22:22.071186 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 01:22:22.079162 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 01:22:22.083522 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 01:22:22.085228 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 01:22:22.089170 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 14 01:22:22.100595 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 01:22:22.109880 google_oslogin_nss_cache[1610]: oslogin_cache_refresh[1610]: Refreshing passwd entry cache Jan 14 01:22:22.110185 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jan 14 01:22:22.110379 oslogin_cache_refresh[1610]: Refreshing passwd entry cache Jan 14 01:22:22.119194 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 01:22:22.129876 google_oslogin_nss_cache[1610]: oslogin_cache_refresh[1610]: Failure getting users, quitting Jan 14 01:22:22.129876 google_oslogin_nss_cache[1610]: oslogin_cache_refresh[1610]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:22:22.129876 google_oslogin_nss_cache[1610]: oslogin_cache_refresh[1610]: Refreshing group entry cache Jan 14 01:22:22.129181 oslogin_cache_refresh[1610]: Failure getting users, quitting Jan 14 01:22:22.129217 oslogin_cache_refresh[1610]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 01:22:22.129302 oslogin_cache_refresh[1610]: Refreshing group entry cache Jan 14 01:22:22.130668 google_oslogin_nss_cache[1610]: oslogin_cache_refresh[1610]: Failure getting groups, quitting Jan 14 01:22:22.130668 google_oslogin_nss_cache[1610]: oslogin_cache_refresh[1610]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:22:22.130597 oslogin_cache_refresh[1610]: Failure getting groups, quitting Jan 14 01:22:22.130612 oslogin_cache_refresh[1610]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 01:22:22.132895 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 01:22:22.151827 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 01:22:22.152249 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 01:22:22.153874 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 01:22:22.155757 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 14 01:22:22.158215 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 01:22:22.166203 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 01:22:22.190915 jq[1608]: false Jan 14 01:22:22.191889 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 01:22:22.193483 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 01:22:22.194860 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 01:22:22.195509 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 14 01:22:22.195886 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 14 01:22:22.200355 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 01:22:22.200715 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 01:22:22.204239 extend-filesystems[1609]: Found /dev/vda6 Jan 14 01:22:22.215000 update_engine[1621]: I20260114 01:22:22.213658 1621 main.cc:92] Flatcar Update Engine starting Jan 14 01:22:22.236934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 01:22:22.242948 extend-filesystems[1609]: Found /dev/vda9 Jan 14 01:22:22.260112 extend-filesystems[1609]: Checking size of /dev/vda9 Jan 14 01:22:22.263133 jq[1622]: true Jan 14 01:22:22.290700 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 01:22:22.292031 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 14 01:22:22.318913 tar[1625]: linux-amd64/LICENSE Jan 14 01:22:22.322872 tar[1625]: linux-amd64/helm Jan 14 01:22:22.340365 extend-filesystems[1609]: Resized partition /dev/vda9 Jan 14 01:22:22.351592 extend-filesystems[1660]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 01:22:22.354958 jq[1647]: true Jan 14 01:22:22.363818 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 14138363 blocks Jan 14 01:22:22.363611 dbus-daemon[1606]: [system] SELinux support is enabled Jan 14 01:22:22.364987 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 01:22:22.370973 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 01:22:22.371038 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 01:22:22.372896 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 01:22:22.372924 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 01:22:22.427581 dbus-daemon[1606]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.4' (uid=244 pid=1555 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 14 01:22:22.427745 systemd[1]: Started update-engine.service - Update Engine. Jan 14 01:22:22.428181 update_engine[1621]: I20260114 01:22:22.428102 1621 update_check_scheduler.cc:74] Next update check in 2m11s Jan 14 01:22:22.443692 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 14 01:22:22.521155 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 14 01:22:22.619487 bash[1679]: Updated "/home/core/.ssh/authorized_keys" Jan 14 01:22:22.620883 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 01:22:22.685542 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 01:22:22.711206 systemd[1]: Starting sshkeys.service... Jan 14 01:22:22.721529 systemd-logind[1619]: Watching system buttons on /dev/input/event3 (Power Button) Jan 14 01:22:22.723859 systemd-logind[1619]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 14 01:22:22.724362 systemd-logind[1619]: New seat seat0. Jan 14 01:22:22.727280 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 01:22:22.798159 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Jan 14 01:22:22.818016 extend-filesystems[1660]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 14 01:22:22.818016 extend-filesystems[1660]: old_desc_blocks = 1, new_desc_blocks = 7 Jan 14 01:22:22.818016 extend-filesystems[1660]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Jan 14 01:22:22.831638 extend-filesystems[1609]: Resized filesystem in /dev/vda9 Jan 14 01:22:22.819062 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 01:22:22.820877 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 01:22:22.831163 locksmithd[1666]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 01:22:22.848906 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 14 01:22:22.851669 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 14 01:22:22.909225 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 01:22:22.998798 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Jan 14 01:22:23.001638 dbus-daemon[1606]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 14 01:22:23.003834 dbus-daemon[1606]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.9' (uid=0 pid=1664 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 14 01:22:23.012712 systemd[1]: Starting polkit.service - Authorization Manager... Jan 14 01:22:23.027739 containerd[1645]: time="2026-01-14T01:22:23Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 01:22:23.037091 containerd[1645]: time="2026-01-14T01:22:23.035320487Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 01:22:23.092409 containerd[1645]: time="2026-01-14T01:22:23.090855674Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="22.56µs" Jan 14 01:22:23.092409 containerd[1645]: time="2026-01-14T01:22:23.090923874Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 01:22:23.092409 containerd[1645]: time="2026-01-14T01:22:23.091000853Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 01:22:23.092409 containerd[1645]: time="2026-01-14T01:22:23.091035719Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 01:22:23.092409 containerd[1645]: time="2026-01-14T01:22:23.091301055Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 01:22:23.092409 containerd[1645]: time="2026-01-14T01:22:23.091327615Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Jan 14 01:22:23.092409 containerd[1645]: time="2026-01-14T01:22:23.091450208Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 01:22:23.092409 containerd[1645]: time="2026-01-14T01:22:23.091473167Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:22:23.092409 containerd[1645]: time="2026-01-14T01:22:23.091725844Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 01:22:23.092409 containerd[1645]: time="2026-01-14T01:22:23.091750484Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:22:23.092409 containerd[1645]: time="2026-01-14T01:22:23.091769331Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 01:22:23.093894 containerd[1645]: time="2026-01-14T01:22:23.093759353Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:22:23.094864 containerd[1645]: time="2026-01-14T01:22:23.094828578Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 01:22:23.094962 containerd[1645]: time="2026-01-14T01:22:23.094938288Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 01:22:23.095229 containerd[1645]: time="2026-01-14T01:22:23.095199807Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs 
type=io.containerd.snapshotter.v1 Jan 14 01:22:23.096671 containerd[1645]: time="2026-01-14T01:22:23.096496471Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:22:23.096671 containerd[1645]: time="2026-01-14T01:22:23.096552810Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 01:22:23.096671 containerd[1645]: time="2026-01-14T01:22:23.096572700Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 01:22:23.097874 containerd[1645]: time="2026-01-14T01:22:23.097844492Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 01:22:23.099884 containerd[1645]: time="2026-01-14T01:22:23.099852814Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 01:22:23.100068 containerd[1645]: time="2026-01-14T01:22:23.100041268Z" level=info msg="metadata content store policy set" policy=shared Jan 14 01:22:23.116827 containerd[1645]: time="2026-01-14T01:22:23.115649186Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 01:22:23.116827 containerd[1645]: time="2026-01-14T01:22:23.115753903Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:22:23.116827 containerd[1645]: time="2026-01-14T01:22:23.115906724Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 01:22:23.116827 containerd[1645]: time="2026-01-14T01:22:23.115944564Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 
01:22:23.116827 containerd[1645]: time="2026-01-14T01:22:23.115966620Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 01:22:23.116827 containerd[1645]: time="2026-01-14T01:22:23.115993569Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 01:22:23.116827 containerd[1645]: time="2026-01-14T01:22:23.116014259Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 01:22:23.116827 containerd[1645]: time="2026-01-14T01:22:23.116031355Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 01:22:23.116827 containerd[1645]: time="2026-01-14T01:22:23.116070540Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 01:22:23.116827 containerd[1645]: time="2026-01-14T01:22:23.116096023Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 01:22:23.116827 containerd[1645]: time="2026-01-14T01:22:23.116113940Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 01:22:23.116827 containerd[1645]: time="2026-01-14T01:22:23.116147364Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 01:22:23.116827 containerd[1645]: time="2026-01-14T01:22:23.116166242Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 01:22:23.116827 containerd[1645]: time="2026-01-14T01:22:23.116185541Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 01:22:23.117326 containerd[1645]: time="2026-01-14T01:22:23.116364089Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 
Jan 14 01:22:23.117326 containerd[1645]: time="2026-01-14T01:22:23.116408053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 01:22:23.117326 containerd[1645]: time="2026-01-14T01:22:23.116460373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 01:22:23.117326 containerd[1645]: time="2026-01-14T01:22:23.116481866Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 01:22:23.117326 containerd[1645]: time="2026-01-14T01:22:23.116500281Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 01:22:23.117326 containerd[1645]: time="2026-01-14T01:22:23.116516939Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 01:22:23.117326 containerd[1645]: time="2026-01-14T01:22:23.116536172Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 01:22:23.117326 containerd[1645]: time="2026-01-14T01:22:23.116564463Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 01:22:23.117326 containerd[1645]: time="2026-01-14T01:22:23.116584977Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 01:22:23.117326 containerd[1645]: time="2026-01-14T01:22:23.116611972Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 01:22:23.117326 containerd[1645]: time="2026-01-14T01:22:23.116632103Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 01:22:23.117326 containerd[1645]: time="2026-01-14T01:22:23.116672284Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 01:22:23.121299 containerd[1645]: time="2026-01-14T01:22:23.116770838Z" level=info 
msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 01:22:23.121299 containerd[1645]: time="2026-01-14T01:22:23.118490965Z" level=info msg="Start snapshots syncer" Jan 14 01:22:23.121299 containerd[1645]: time="2026-01-14T01:22:23.118541850Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 01:22:23.121989 containerd[1645]: time="2026-01-14T01:22:23.121928772Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\"
:\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 01:22:23.122809 containerd[1645]: time="2026-01-14T01:22:23.122285489Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 01:22:23.122809 containerd[1645]: time="2026-01-14T01:22:23.122400077Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 01:22:23.122809 containerd[1645]: time="2026-01-14T01:22:23.122574269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 01:22:23.122809 containerd[1645]: time="2026-01-14T01:22:23.122605447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 01:22:23.122809 containerd[1645]: time="2026-01-14T01:22:23.122623501Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 01:22:23.122809 containerd[1645]: time="2026-01-14T01:22:23.122641360Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 01:22:23.122809 containerd[1645]: time="2026-01-14T01:22:23.122673799Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 01:22:23.122809 containerd[1645]: time="2026-01-14T01:22:23.122694536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 01:22:23.122809 containerd[1645]: time="2026-01-14T01:22:23.122730497Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 01:22:23.122809 containerd[1645]: time="2026-01-14T01:22:23.122751942Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 01:22:23.122809 containerd[1645]: time="2026-01-14T01:22:23.122770386Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 01:22:23.124415 containerd[1645]: time="2026-01-14T01:22:23.123277231Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:22:23.124415 containerd[1645]: time="2026-01-14T01:22:23.123311679Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 01:22:23.124415 containerd[1645]: time="2026-01-14T01:22:23.123327859Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:22:23.124415 containerd[1645]: time="2026-01-14T01:22:23.123343466Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 01:22:23.124415 containerd[1645]: time="2026-01-14T01:22:23.123360250Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 01:22:23.124415 containerd[1645]: time="2026-01-14T01:22:23.123377047Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 01:22:23.124415 containerd[1645]: time="2026-01-14T01:22:23.123393551Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 01:22:23.124415 containerd[1645]: time="2026-01-14T01:22:23.123433742Z" level=info msg="runtime interface created" Jan 14 01:22:23.124415 containerd[1645]: time="2026-01-14T01:22:23.123446551Z" level=info msg="created NRI interface" Jan 14 01:22:23.124415 containerd[1645]: time="2026-01-14T01:22:23.123460107Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 01:22:23.124415 containerd[1645]: time="2026-01-14T01:22:23.123479407Z" level=info msg="Connect containerd service" Jan 14 01:22:23.124415 containerd[1645]: time="2026-01-14T01:22:23.123546895Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 01:22:23.129421 containerd[1645]: time="2026-01-14T01:22:23.128190578Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 01:22:23.221003 polkitd[1698]: Started polkitd version 126 Jan 14 01:22:23.232057 polkitd[1698]: Loading rules from directory /etc/polkit-1/rules.d Jan 14 01:22:23.232519 polkitd[1698]: Loading rules from directory /run/polkit-1/rules.d Jan 14 01:22:23.232606 polkitd[1698]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 14 01:22:23.232994 polkitd[1698]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 14 01:22:23.233032 polkitd[1698]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 14 01:22:23.233108 polkitd[1698]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 14 01:22:23.236075 polkitd[1698]: Finished loading, compiling and executing 2 rules Jan 14 01:22:23.238060 systemd[1]: Started polkit.service - Authorization Manager. 
Jan 14 01:22:23.245977 dbus-daemon[1606]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 14 01:22:23.246464 polkitd[1698]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 14 01:22:23.284236 systemd-hostnamed[1664]: Hostname set to (static)
Jan 14 01:22:23.337374 containerd[1645]: time="2026-01-14T01:22:23.337323018Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 14 01:22:23.337733 containerd[1645]: time="2026-01-14T01:22:23.337628603Z" level=info msg="Start subscribing containerd event"
Jan 14 01:22:23.337902 containerd[1645]: time="2026-01-14T01:22:23.337762834Z" level=info msg="Start recovering state"
Jan 14 01:22:23.337991 containerd[1645]: time="2026-01-14T01:22:23.337966981Z" level=info msg="Start event monitor"
Jan 14 01:22:23.338055 containerd[1645]: time="2026-01-14T01:22:23.337994556Z" level=info msg="Start cni network conf syncer for default"
Jan 14 01:22:23.338055 containerd[1645]: time="2026-01-14T01:22:23.338008208Z" level=info msg="Start streaming server"
Jan 14 01:22:23.338055 containerd[1645]: time="2026-01-14T01:22:23.338033026Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 14 01:22:23.338055 containerd[1645]: time="2026-01-14T01:22:23.338047234Z" level=info msg="runtime interface starting up..."
Jan 14 01:22:23.338200 containerd[1645]: time="2026-01-14T01:22:23.338060816Z" level=info msg="starting plugins..."
Jan 14 01:22:23.338200 containerd[1645]: time="2026-01-14T01:22:23.338091639Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 14 01:22:23.338538 containerd[1645]: time="2026-01-14T01:22:23.338325863Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 14 01:22:23.339721 systemd[1]: Started containerd.service - containerd container runtime.
Jan 14 01:22:23.340682 containerd[1645]: time="2026-01-14T01:22:23.340647913Z" level=info msg="containerd successfully booted in 0.313481s"
Jan 14 01:22:23.452330 sshd_keygen[1643]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 14 01:22:23.485939 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 14 01:22:23.493303 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 14 01:22:23.495523 tar[1625]: linux-amd64/README.md
Jan 14 01:22:23.516941 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 14 01:22:23.521027 systemd[1]: issuegen.service: Deactivated successfully.
Jan 14 01:22:23.521407 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 14 01:22:23.525597 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 14 01:22:23.558938 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 14 01:22:23.564488 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 14 01:22:23.569055 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 14 01:22:23.570241 systemd[1]: Reached target getty.target - Login Prompts.
Jan 14 01:22:23.603988 systemd-networkd[1555]: eth0: Gained IPv6LL
Jan 14 01:22:23.604833 systemd-timesyncd[1527]: Network configuration changed, trying to establish connection.
Jan 14 01:22:23.607489 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 14 01:22:23.609331 systemd[1]: Reached target network-online.target - Network is Online.
Jan 14 01:22:23.612769 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 01:22:23.617201 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 14 01:22:23.652414 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 14 01:22:23.946999 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 14 01:22:23.968824 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 14 01:22:24.672457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 01:22:24.685319 (kubelet)[1759]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 01:22:25.110057 systemd-timesyncd[1527]: Network configuration changed, trying to establish connection.
Jan 14 01:22:25.111888 systemd-networkd[1555]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8835:24:19ff:fee6:20d6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8835:24:19ff:fee6:20d6/64 assigned by NDisc.
Jan 14 01:22:25.111898 systemd-networkd[1555]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Jan 14 01:22:25.311197 kubelet[1759]: E0114 01:22:25.311077 1759 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 01:22:25.314240 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 01:22:25.314488 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 01:22:25.315438 systemd[1]: kubelet.service: Consumed 1.107s CPU time, 263.9M memory peak.
Jan 14 01:22:25.964846 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 14 01:22:25.981815 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 14 01:22:27.116955 systemd-timesyncd[1527]: Network configuration changed, trying to establish connection.
Jan 14 01:22:27.125650 systemd-timesyncd[1527]: Network configuration changed, trying to establish connection.
Jan 14 01:22:27.312206 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 14 01:22:27.315226 systemd[1]: Started sshd@0-10.230.32.214:22-68.220.241.50:51276.service - OpenSSH per-connection server daemon (68.220.241.50:51276).
Jan 14 01:22:27.866851 sshd[1770]: Accepted publickey for core from 68.220.241.50 port 51276 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y
Jan 14 01:22:27.869509 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:22:27.883776 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 14 01:22:27.886707 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 14 01:22:27.898482 systemd-logind[1619]: New session 1 of user core.
Jan 14 01:22:27.931669 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 14 01:22:27.938081 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 14 01:22:27.961011 (systemd)[1776]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:22:27.966290 systemd-logind[1619]: New session 2 of user core.
Jan 14 01:22:28.154171 systemd[1776]: Queued start job for default target default.target.
Jan 14 01:22:28.163311 systemd[1776]: Created slice app.slice - User Application Slice.
Jan 14 01:22:28.163377 systemd[1776]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories.
Jan 14 01:22:28.163415 systemd[1776]: Reached target paths.target - Paths.
Jan 14 01:22:28.163508 systemd[1776]: Reached target timers.target - Timers.
Jan 14 01:22:28.165630 systemd[1776]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 14 01:22:28.168002 systemd[1776]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories...
Jan 14 01:22:28.193173 systemd[1776]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 14 01:22:28.193532 systemd[1776]: Reached target sockets.target - Sockets.
Jan 14 01:22:28.195364 systemd[1776]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories.
Jan 14 01:22:28.195644 systemd[1776]: Reached target basic.target - Basic System.
Jan 14 01:22:28.195871 systemd[1776]: Reached target default.target - Main User Target.
Jan 14 01:22:28.196061 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 14 01:22:28.196186 systemd[1776]: Startup finished in 220ms.
Jan 14 01:22:28.208276 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 14 01:22:28.504321 systemd[1]: Started sshd@1-10.230.32.214:22-68.220.241.50:51278.service - OpenSSH per-connection server daemon (68.220.241.50:51278).
Jan 14 01:22:28.662992 login[1738]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:22:28.672381 systemd-logind[1619]: New session 3 of user core.
Jan 14 01:22:28.681142 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 14 01:22:28.985002 login[1739]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:22:28.994077 systemd-logind[1619]: New session 4 of user core.
Jan 14 01:22:29.004420 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 14 01:22:29.026178 sshd[1790]: Accepted publickey for core from 68.220.241.50 port 51278 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y
Jan 14 01:22:29.028808 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:22:29.036274 systemd-logind[1619]: New session 5 of user core.
Jan 14 01:22:29.044236 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 14 01:22:29.303572 sshd[1821]: Connection closed by 68.220.241.50 port 51278
Jan 14 01:22:29.304696 sshd-session[1790]: pam_unix(sshd:session): session closed for user core
Jan 14 01:22:29.312927 systemd[1]: sshd@1-10.230.32.214:22-68.220.241.50:51278.service: Deactivated successfully.
Jan 14 01:22:29.316186 systemd[1]: session-5.scope: Deactivated successfully.
Jan 14 01:22:29.317715 systemd-logind[1619]: Session 5 logged out. Waiting for processes to exit.
Jan 14 01:22:29.319994 systemd-logind[1619]: Removed session 5.
Jan 14 01:22:29.409569 systemd[1]: Started sshd@2-10.230.32.214:22-68.220.241.50:51280.service - OpenSSH per-connection server daemon (68.220.241.50:51280).
Jan 14 01:22:29.940189 sshd[1827]: Accepted publickey for core from 68.220.241.50 port 51280 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y
Jan 14 01:22:29.941943 sshd-session[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:22:29.949321 systemd-logind[1619]: New session 6 of user core.
Jan 14 01:22:29.970585 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 14 01:22:29.984808 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 14 01:22:29.993820 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 14 01:22:30.001424 coreos-metadata[1605]: Jan 14 01:22:30.001 WARN failed to locate config-drive, using the metadata service API instead
Jan 14 01:22:30.005071 coreos-metadata[1695]: Jan 14 01:22:30.005 WARN failed to locate config-drive, using the metadata service API instead
Jan 14 01:22:30.028886 coreos-metadata[1605]: Jan 14 01:22:30.028 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Jan 14 01:22:30.030313 coreos-metadata[1695]: Jan 14 01:22:30.030 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Jan 14 01:22:30.034586 coreos-metadata[1605]: Jan 14 01:22:30.034 INFO Fetch failed with 404: resource not found
Jan 14 01:22:30.034586 coreos-metadata[1605]: Jan 14 01:22:30.034 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Jan 14 01:22:30.035193 coreos-metadata[1605]: Jan 14 01:22:30.035 INFO Fetch successful
Jan 14 01:22:30.035384 coreos-metadata[1605]: Jan 14 01:22:30.035 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Jan 14 01:22:30.049661 coreos-metadata[1605]: Jan 14 01:22:30.049 INFO Fetch successful
Jan 14 01:22:30.049661 coreos-metadata[1605]: Jan 14 01:22:30.049 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Jan 14 01:22:30.056125 coreos-metadata[1695]: Jan 14 01:22:30.056 INFO Fetch successful
Jan 14 01:22:30.056449 coreos-metadata[1695]: Jan 14 01:22:30.056 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Jan 14 01:22:30.061512 coreos-metadata[1605]: Jan 14 01:22:30.061 INFO Fetch successful
Jan 14 01:22:30.061512 coreos-metadata[1605]: Jan 14 01:22:30.061 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Jan 14 01:22:30.072971 coreos-metadata[1605]: Jan 14 01:22:30.072 INFO Fetch successful
Jan 14 01:22:30.073125 coreos-metadata[1605]: Jan 14 01:22:30.073 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Jan 14 01:22:30.081462 coreos-metadata[1695]: Jan 14 01:22:30.081 INFO Fetch successful
Jan 14 01:22:30.083729 unknown[1695]: wrote ssh authorized keys file for user: core
Jan 14 01:22:30.086816 coreos-metadata[1605]: Jan 14 01:22:30.086 INFO Fetch successful
Jan 14 01:22:30.113360 update-ssh-keys[1837]: Updated "/home/core/.ssh/authorized_keys"
Jan 14 01:22:30.113227 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 14 01:22:30.116366 systemd[1]: Finished sshkeys.service.
Jan 14 01:22:30.127724 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 14 01:22:30.129649 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 14 01:22:30.130365 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 14 01:22:30.130768 systemd[1]: Startup finished in 3.393s (kernel) + 14.144s (initrd) + 12.194s (userspace) = 29.732s.
Jan 14 01:22:30.222169 sshd[1831]: Connection closed by 68.220.241.50 port 51280
Jan 14 01:22:30.223340 sshd-session[1827]: pam_unix(sshd:session): session closed for user core
Jan 14 01:22:30.228914 systemd[1]: sshd@2-10.230.32.214:22-68.220.241.50:51280.service: Deactivated successfully.
Jan 14 01:22:30.231535 systemd[1]: session-6.scope: Deactivated successfully.
Jan 14 01:22:30.234667 systemd-logind[1619]: Session 6 logged out. Waiting for processes to exit.
Jan 14 01:22:30.236143 systemd-logind[1619]: Removed session 6.
Jan 14 01:22:35.565298 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 14 01:22:35.567908 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 01:22:35.779073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 01:22:35.796605 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 01:22:35.881016 kubelet[1856]: E0114 01:22:35.880847 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 01:22:35.886072 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 01:22:35.886340 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 01:22:35.887290 systemd[1]: kubelet.service: Consumed 238ms CPU time, 108.2M memory peak.
Jan 14 01:22:40.336300 systemd[1]: Started sshd@3-10.230.32.214:22-68.220.241.50:41492.service - OpenSSH per-connection server daemon (68.220.241.50:41492).
Jan 14 01:22:40.872719 sshd[1865]: Accepted publickey for core from 68.220.241.50 port 41492 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y
Jan 14 01:22:40.874629 sshd-session[1865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:22:40.881505 systemd-logind[1619]: New session 7 of user core.
Jan 14 01:22:40.890040 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 14 01:22:41.158592 sshd[1869]: Connection closed by 68.220.241.50 port 41492
Jan 14 01:22:41.159897 sshd-session[1865]: pam_unix(sshd:session): session closed for user core
Jan 14 01:22:41.167427 systemd[1]: sshd@3-10.230.32.214:22-68.220.241.50:41492.service: Deactivated successfully.
Jan 14 01:22:41.170656 systemd[1]: session-7.scope: Deactivated successfully.
Jan 14 01:22:41.173217 systemd-logind[1619]: Session 7 logged out. Waiting for processes to exit.
Jan 14 01:22:41.174721 systemd-logind[1619]: Removed session 7.
Jan 14 01:22:41.258525 systemd[1]: Started sshd@4-10.230.32.214:22-68.220.241.50:41498.service - OpenSSH per-connection server daemon (68.220.241.50:41498).
Jan 14 01:22:41.773172 sshd[1875]: Accepted publickey for core from 68.220.241.50 port 41498 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y
Jan 14 01:22:41.775349 sshd-session[1875]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:22:41.784374 systemd-logind[1619]: New session 8 of user core.
Jan 14 01:22:41.791055 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 14 01:22:42.045533 sshd[1879]: Connection closed by 68.220.241.50 port 41498
Jan 14 01:22:42.044170 sshd-session[1875]: pam_unix(sshd:session): session closed for user core
Jan 14 01:22:42.050440 systemd-logind[1619]: Session 8 logged out. Waiting for processes to exit.
Jan 14 01:22:42.051109 systemd[1]: sshd@4-10.230.32.214:22-68.220.241.50:41498.service: Deactivated successfully.
Jan 14 01:22:42.053854 systemd[1]: session-8.scope: Deactivated successfully.
Jan 14 01:22:42.057559 systemd-logind[1619]: Removed session 8.
Jan 14 01:22:42.149431 systemd[1]: Started sshd@5-10.230.32.214:22-68.220.241.50:41508.service - OpenSSH per-connection server daemon (68.220.241.50:41508).
Jan 14 01:22:42.679878 sshd[1885]: Accepted publickey for core from 68.220.241.50 port 41508 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y
Jan 14 01:22:42.682629 sshd-session[1885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:22:42.691848 systemd-logind[1619]: New session 9 of user core.
Jan 14 01:22:42.698041 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 14 01:22:42.957140 sshd[1889]: Connection closed by 68.220.241.50 port 41508
Jan 14 01:22:42.957434 sshd-session[1885]: pam_unix(sshd:session): session closed for user core
Jan 14 01:22:42.966361 systemd[1]: sshd@5-10.230.32.214:22-68.220.241.50:41508.service: Deactivated successfully.
Jan 14 01:22:42.969323 systemd[1]: session-9.scope: Deactivated successfully.
Jan 14 01:22:42.970847 systemd-logind[1619]: Session 9 logged out. Waiting for processes to exit.
Jan 14 01:22:42.973343 systemd-logind[1619]: Removed session 9.
Jan 14 01:22:43.058455 systemd[1]: Started sshd@6-10.230.32.214:22-68.220.241.50:35428.service - OpenSSH per-connection server daemon (68.220.241.50:35428).
Jan 14 01:22:43.580837 sshd[1895]: Accepted publickey for core from 68.220.241.50 port 35428 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y
Jan 14 01:22:43.582689 sshd-session[1895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:22:43.592153 systemd-logind[1619]: New session 10 of user core.
Jan 14 01:22:43.608192 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 14 01:22:43.785720 sudo[1900]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 14 01:22:43.786954 sudo[1900]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 01:22:43.797807 sudo[1900]: pam_unix(sudo:session): session closed for user root
Jan 14 01:22:43.887635 sshd[1899]: Connection closed by 68.220.241.50 port 35428
Jan 14 01:22:43.889312 sshd-session[1895]: pam_unix(sshd:session): session closed for user core
Jan 14 01:22:43.896719 systemd[1]: sshd@6-10.230.32.214:22-68.220.241.50:35428.service: Deactivated successfully.
Jan 14 01:22:43.901509 systemd[1]: session-10.scope: Deactivated successfully.
Jan 14 01:22:43.903727 systemd-logind[1619]: Session 10 logged out. Waiting for processes to exit.
Jan 14 01:22:43.906445 systemd-logind[1619]: Removed session 10.
Jan 14 01:22:44.003370 systemd[1]: Started sshd@7-10.230.32.214:22-68.220.241.50:35440.service - OpenSSH per-connection server daemon (68.220.241.50:35440).
Jan 14 01:22:44.527424 sshd[1907]: Accepted publickey for core from 68.220.241.50 port 35440 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y
Jan 14 01:22:44.529690 sshd-session[1907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:22:44.538387 systemd-logind[1619]: New session 11 of user core.
Jan 14 01:22:44.546092 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 14 01:22:44.717623 sudo[1913]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 14 01:22:44.718198 sudo[1913]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 01:22:44.721703 sudo[1913]: pam_unix(sudo:session): session closed for user root
Jan 14 01:22:44.731376 sudo[1912]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 14 01:22:44.731914 sudo[1912]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 01:22:44.744399 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 01:22:44.807931 kernel: kauditd_printk_skb: 88 callbacks suppressed
Jan 14 01:22:44.808256 kernel: audit: type=1305 audit(1768353764.803:223): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 14 01:22:44.803000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Jan 14 01:22:44.808614 augenrules[1937]: No rules
Jan 14 01:22:44.803000 audit[1937]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffec6bae3c0 a2=420 a3=0 items=0 ppid=1918 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:22:44.813647 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 01:22:44.814429 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 01:22:44.816904 kernel: audit: type=1300 audit(1768353764.803:223): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffec6bae3c0 a2=420 a3=0 items=0 ppid=1918 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:22:44.818284 kernel: audit: type=1327 audit(1768353764.803:223): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 14 01:22:44.803000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 14 01:22:44.819186 sudo[1912]: pam_unix(sudo:session): session closed for user root
Jan 14 01:22:44.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:44.821698 kernel: audit: type=1130 audit(1768353764.813:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:44.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:44.825609 kernel: audit: type=1131 audit(1768353764.813:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:44.818000 audit[1912]: USER_END pid=1912 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:44.829562 kernel: audit: type=1106 audit(1768353764.818:226): pid=1912 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:44.818000 audit[1912]: CRED_DISP pid=1912 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:44.833738 kernel: audit: type=1104 audit(1768353764.818:227): pid=1912 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:44.909914 sshd[1911]: Connection closed by 68.220.241.50 port 35440
Jan 14 01:22:44.909686 sshd-session[1907]: pam_unix(sshd:session): session closed for user core
Jan 14 01:22:44.914000 audit[1907]: USER_END pid=1907 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:22:44.921500 systemd[1]: sshd@7-10.230.32.214:22-68.220.241.50:35440.service: Deactivated successfully.
Jan 14 01:22:44.914000 audit[1907]: CRED_DISP pid=1907 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:22:44.925803 kernel: audit: type=1106 audit(1768353764.914:228): pid=1907 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:22:44.925907 kernel: audit: type=1104 audit(1768353764.914:229): pid=1907 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:22:44.925602 systemd[1]: session-11.scope: Deactivated successfully.
Jan 14 01:22:44.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.32.214:22-68.220.241.50:35440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:44.929997 kernel: audit: type=1131 audit(1768353764.918:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.32.214:22-68.220.241.50:35440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:44.928844 systemd-logind[1619]: Session 11 logged out. Waiting for processes to exit.
Jan 14 01:22:44.932757 systemd-logind[1619]: Removed session 11.
Jan 14 01:22:45.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.32.214:22-68.220.241.50:35452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:45.020127 systemd[1]: Started sshd@8-10.230.32.214:22-68.220.241.50:35452.service - OpenSSH per-connection server daemon (68.220.241.50:35452).
Jan 14 01:22:45.544000 audit[1946]: USER_ACCT pid=1946 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:22:45.546062 sshd[1946]: Accepted publickey for core from 68.220.241.50 port 35452 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y
Jan 14 01:22:45.546000 audit[1946]: CRED_ACQ pid=1946 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:22:45.546000 audit[1946]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc25034450 a2=3 a3=0 items=0 ppid=1 pid=1946 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:22:45.546000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:22:45.548935 sshd-session[1946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:22:45.559083 systemd-logind[1619]: New session 12 of user core.
Jan 14 01:22:45.569076 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 14 01:22:45.573000 audit[1946]: USER_START pid=1946 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:22:45.577000 audit[1950]: CRED_ACQ pid=1950 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:22:45.740000 audit[1951]: USER_ACCT pid=1951 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:45.740000 audit[1951]: CRED_REFR pid=1951 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:45.740000 audit[1951]: USER_START pid=1951 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:45.741101 sudo[1951]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 14 01:22:45.741646 sudo[1951]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 14 01:22:46.134394 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 14 01:22:46.139883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 01:22:46.425058 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 01:22:46.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:22:46.439356 (kubelet)[1975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 01:22:46.499133 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 14 01:22:46.513883 (dockerd)[1984]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 14 01:22:46.522181 kubelet[1975]: E0114 01:22:46.522084 1975 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 01:22:46.525831 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 01:22:46.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 14 01:22:46.526084 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 01:22:46.527063 systemd[1]: kubelet.service: Consumed 246ms CPU time, 110.2M memory peak.
Jan 14 01:22:46.952485 dockerd[1984]: time="2026-01-14T01:22:46.952365873Z" level=info msg="Starting up" Jan 14 01:22:46.954952 dockerd[1984]: time="2026-01-14T01:22:46.954905671Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 01:22:46.975812 dockerd[1984]: time="2026-01-14T01:22:46.975717148Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 01:22:47.001028 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport665312877-merged.mount: Deactivated successfully. Jan 14 01:22:47.013369 systemd[1]: var-lib-docker-metacopy\x2dcheck3811478829-merged.mount: Deactivated successfully. Jan 14 01:22:47.038811 dockerd[1984]: time="2026-01-14T01:22:47.038611226Z" level=info msg="Loading containers: start." Jan 14 01:22:47.057866 kernel: Initializing XFRM netlink socket Jan 14 01:22:47.146000 audit[2037]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.146000 audit[2037]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcab71f9b0 a2=0 a3=0 items=0 ppid=1984 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.146000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:22:47.149000 audit[2039]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.149000 audit[2039]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff88b35b90 a2=0 a3=0 items=0 ppid=1984 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.149000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:22:47.153000 audit[2041]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.153000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcce881600 a2=0 a3=0 items=0 ppid=1984 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.153000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:22:47.156000 audit[2043]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.156000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9d9ca400 a2=0 a3=0 items=0 ppid=1984 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.156000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:22:47.159000 audit[2045]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.159000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffa35b0110 a2=0 a3=0 items=0 ppid=1984 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.159000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:22:47.162000 audit[2047]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.162000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd21d22380 a2=0 a3=0 items=0 ppid=1984 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.162000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:22:47.165000 audit[2049]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.165000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc4c057de0 a2=0 a3=0 items=0 ppid=1984 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.165000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:22:47.169000 audit[2051]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.169000 audit[2051]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffcc7d2cea0 a2=0 a3=0 items=0 ppid=1984 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.169000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:22:47.213000 audit[2054]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.213000 audit[2054]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc2a140800 a2=0 a3=0 items=0 ppid=1984 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.213000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 01:22:47.216000 audit[2056]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.216000 audit[2056]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffca9603980 a2=0 a3=0 items=0 ppid=1984 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.216000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:22:47.220000 audit[2058]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.220000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffff2558660 
a2=0 a3=0 items=0 ppid=1984 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.220000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:22:47.223000 audit[2060]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.223000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffce250f780 a2=0 a3=0 items=0 ppid=1984 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.223000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:22:47.226000 audit[2062]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.226000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffda5305340 a2=0 a3=0 items=0 ppid=1984 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.226000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:22:47.283000 audit[2092]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:22:47.283000 audit[2092]: SYSCALL arch=c000003e syscall=46 
success=yes exit=116 a0=3 a1=7ffd32a500c0 a2=0 a3=0 items=0 ppid=1984 pid=2092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.283000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:22:47.286000 audit[2094]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:22:47.286000 audit[2094]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff7b7ef220 a2=0 a3=0 items=0 ppid=1984 pid=2094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.286000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:22:47.289000 audit[2096]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:22:47.289000 audit[2096]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe3a2e9e70 a2=0 a3=0 items=0 ppid=1984 pid=2096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.289000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:22:47.293000 audit[2098]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:22:47.293000 audit[2098]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 
a1=7ffe24998bc0 a2=0 a3=0 items=0 ppid=1984 pid=2098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.293000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:22:47.296000 audit[2100]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:22:47.296000 audit[2100]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc09bb0310 a2=0 a3=0 items=0 ppid=1984 pid=2100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.296000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:22:47.299000 audit[2102]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:22:47.299000 audit[2102]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc05a20450 a2=0 a3=0 items=0 ppid=1984 pid=2102 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.299000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:22:47.302000 audit[2104]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:22:47.302000 audit[2104]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=112 a0=3 a1=7fff8ea70ac0 a2=0 a3=0 items=0 ppid=1984 pid=2104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.302000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:22:47.305000 audit[2106]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2106 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:22:47.305000 audit[2106]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff205cfcf0 a2=0 a3=0 items=0 ppid=1984 pid=2106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.305000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:22:47.309000 audit[2108]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:22:47.309000 audit[2108]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffe8c04d850 a2=0 a3=0 items=0 ppid=1984 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.309000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 01:22:47.312000 audit[2110]: NETFILTER_CFG table=filter:24 family=10 
entries=2 op=nft_register_chain pid=2110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:22:47.312000 audit[2110]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff075dee60 a2=0 a3=0 items=0 ppid=1984 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.312000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:22:47.316000 audit[2112]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:22:47.316000 audit[2112]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fffd59d0ca0 a2=0 a3=0 items=0 ppid=1984 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.316000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:22:47.319000 audit[2114]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:22:47.319000 audit[2114]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff5d32d2e0 a2=0 a3=0 items=0 ppid=1984 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.319000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 
01:22:47.322000 audit[2116]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:22:47.322000 audit[2116]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffd0c0b6820 a2=0 a3=0 items=0 ppid=1984 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.322000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:22:47.330000 audit[2121]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.330000 audit[2121]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc391cfac0 a2=0 a3=0 items=0 ppid=1984 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.330000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:22:47.333000 audit[2123]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.333000 audit[2123]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdafd28bb0 a2=0 a3=0 items=0 ppid=1984 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.333000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 
01:22:47.337000 audit[2125]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.337000 audit[2125]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe0fc5fab0 a2=0 a3=0 items=0 ppid=1984 pid=2125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.337000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:22:47.340000 audit[2127]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:22:47.340000 audit[2127]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffc4cfeed0 a2=0 a3=0 items=0 ppid=1984 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.340000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:22:47.344000 audit[2129]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:22:47.344000 audit[2129]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffea694a3a0 a2=0 a3=0 items=0 ppid=1984 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.344000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:22:47.347000 
audit[2131]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2131 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:22:47.347000 audit[2131]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe30522af0 a2=0 a3=0 items=0 ppid=1984 pid=2131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.347000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:22:47.365553 systemd-timesyncd[1527]: Network configuration changed, trying to establish connection. Jan 14 01:22:47.379000 audit[2135]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.379000 audit[2135]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffe27ce0830 a2=0 a3=0 items=0 ppid=1984 pid=2135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.379000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 01:22:47.386000 audit[2137]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.386000 audit[2137]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff14433740 a2=0 a3=0 items=0 ppid=1984 pid=2137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:22:47.386000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 01:22:47.400000 audit[2145]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.400000 audit[2145]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fff7a30bf70 a2=0 a3=0 items=0 ppid=1984 pid=2145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.400000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 01:22:47.414000 audit[2151]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2151 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.414000 audit[2151]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc8420a080 a2=0 a3=0 items=0 ppid=1984 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.414000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 01:22:47.418000 audit[2153]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.418000 audit[2153]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffd968f5820 a2=0 a3=0 items=0 ppid=1984 pid=2153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.418000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 01:22:47.422000 audit[2155]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.422000 audit[2155]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffceb1dd30 a2=0 a3=0 items=0 ppid=1984 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.422000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 01:22:47.425000 audit[2157]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.425000 audit[2157]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffcf83f2e60 a2=0 a3=0 items=0 ppid=1984 pid=2157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.425000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:22:47.428000 audit[2159]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2159 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:22:47.428000 audit[2159]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd2976c120 a2=0 a3=0 items=0 ppid=1984 pid=2159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:22:47.428000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 01:22:47.431227 systemd-networkd[1555]: docker0: Link UP Jan 14 01:22:47.445278 dockerd[1984]: time="2026-01-14T01:22:47.445005836Z" level=info msg="Loading containers: done." Jan 14 01:22:47.463252 systemd-timesyncd[1527]: Contacted time server [2a00:da00:1800:83b0::1]:123 (2.flatcar.pool.ntp.org). Jan 14 01:22:47.463380 systemd-timesyncd[1527]: Initial clock synchronization to Wed 2026-01-14 01:22:47.627204 UTC. 
Jan 14 01:22:47.474766 dockerd[1984]: time="2026-01-14T01:22:47.474679839Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 01:22:47.475080 dockerd[1984]: time="2026-01-14T01:22:47.474959795Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 01:22:47.475307 dockerd[1984]: time="2026-01-14T01:22:47.475267670Z" level=info msg="Initializing buildkit" Jan 14 01:22:47.519919 dockerd[1984]: time="2026-01-14T01:22:47.519834057Z" level=info msg="Completed buildkit initialization" Jan 14 01:22:47.533062 dockerd[1984]: time="2026-01-14T01:22:47.532989811Z" level=info msg="Daemon has completed initialization" Jan 14 01:22:47.533943 dockerd[1984]: time="2026-01-14T01:22:47.533212983Z" level=info msg="API listen on /run/docker.sock" Jan 14 01:22:47.534509 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 01:22:47.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:47.993747 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1209436845-merged.mount: Deactivated successfully. Jan 14 01:22:48.711296 containerd[1645]: time="2026-01-14T01:22:48.711029085Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 14 01:22:49.485293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3211491902.mount: Deactivated successfully. 
Jan 14 01:22:53.556256 containerd[1645]: time="2026-01-14T01:22:53.556101044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:22:53.560125 containerd[1645]: time="2026-01-14T01:22:53.560074896Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 14 01:22:53.561927 containerd[1645]: time="2026-01-14T01:22:53.561855208Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:22:53.567600 containerd[1645]: time="2026-01-14T01:22:53.567536728Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:22:53.571559 containerd[1645]: time="2026-01-14T01:22:53.570201173Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 4.858969793s" Jan 14 01:22:53.571559 containerd[1645]: time="2026-01-14T01:22:53.570313935Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 14 01:22:53.572255 containerd[1645]: time="2026-01-14T01:22:53.572183588Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 14 01:22:55.158990 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 14 01:22:55.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:55.165129 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 14 01:22:55.165587 kernel: audit: type=1131 audit(1768353775.158:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:55.178000 audit: BPF prog-id=61 op=UNLOAD Jan 14 01:22:55.180847 kernel: audit: type=1334 audit(1768353775.178:284): prog-id=61 op=UNLOAD Jan 14 01:22:56.037711 containerd[1645]: time="2026-01-14T01:22:56.037611904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:22:56.039639 containerd[1645]: time="2026-01-14T01:22:56.039321899Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 14 01:22:56.040544 containerd[1645]: time="2026-01-14T01:22:56.040501987Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:22:56.044585 containerd[1645]: time="2026-01-14T01:22:56.044544155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:22:56.046184 containerd[1645]: time="2026-01-14T01:22:56.046141136Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 2.473908316s" Jan 14 01:22:56.046299 containerd[1645]: time="2026-01-14T01:22:56.046189354Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 14 01:22:56.047735 containerd[1645]: time="2026-01-14T01:22:56.047683781Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 14 01:22:56.634302 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 01:22:56.637314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:22:56.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:22:56.893330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:22:56.902901 kernel: audit: type=1130 audit(1768353776.892:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:22:56.914079 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:22:57.035867 kubelet[2274]: E0114 01:22:57.034072 2274 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:22:57.038547 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:22:57.039030 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:22:57.039000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:22:57.040468 systemd[1]: kubelet.service: Consumed 265ms CPU time, 108.8M memory peak. Jan 14 01:22:57.044931 kernel: audit: type=1131 audit(1768353777.039:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 01:22:58.572263 containerd[1645]: time="2026-01-14T01:22:58.572124453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:22:58.580084 containerd[1645]: time="2026-01-14T01:22:58.580033072Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 14 01:22:58.582021 containerd[1645]: time="2026-01-14T01:22:58.581949939Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:22:58.585880 containerd[1645]: time="2026-01-14T01:22:58.585130373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:22:58.586700 containerd[1645]: time="2026-01-14T01:22:58.586658424Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 2.538923298s" Jan 14 01:22:58.586814 containerd[1645]: time="2026-01-14T01:22:58.586710205Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 14 01:22:58.588834 containerd[1645]: time="2026-01-14T01:22:58.588648549Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 14 01:23:00.450225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3473775582.mount: Deactivated successfully. 
Jan 14 01:23:01.478095 containerd[1645]: time="2026-01-14T01:23:01.477772142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:23:01.479195 containerd[1645]: time="2026-01-14T01:23:01.478939166Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 14 01:23:01.480831 containerd[1645]: time="2026-01-14T01:23:01.479924977Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:23:01.482296 containerd[1645]: time="2026-01-14T01:23:01.482234117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:23:01.483452 containerd[1645]: time="2026-01-14T01:23:01.483193266Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 2.894098738s" Jan 14 01:23:01.483452 containerd[1645]: time="2026-01-14T01:23:01.483249136Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 14 01:23:01.484372 containerd[1645]: time="2026-01-14T01:23:01.484155933Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 14 01:23:02.054334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount556395104.mount: Deactivated successfully. 
Jan 14 01:23:03.847822 containerd[1645]: time="2026-01-14T01:23:03.847706581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:23:03.849995 containerd[1645]: time="2026-01-14T01:23:03.849953641Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17569900" Jan 14 01:23:03.850695 containerd[1645]: time="2026-01-14T01:23:03.850619326Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:23:03.856322 containerd[1645]: time="2026-01-14T01:23:03.856212767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:23:03.858647 containerd[1645]: time="2026-01-14T01:23:03.857957048Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.373757405s" Jan 14 01:23:03.858647 containerd[1645]: time="2026-01-14T01:23:03.858041814Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 14 01:23:03.858647 containerd[1645]: time="2026-01-14T01:23:03.858639358Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 01:23:04.752485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2538727698.mount: Deactivated successfully. 
Jan 14 01:23:04.764850 containerd[1645]: time="2026-01-14T01:23:04.764754784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:23:04.766802 containerd[1645]: time="2026-01-14T01:23:04.766743796Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 01:23:04.768812 containerd[1645]: time="2026-01-14T01:23:04.768609237Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:23:04.772865 containerd[1645]: time="2026-01-14T01:23:04.772074946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:23:04.773563 containerd[1645]: time="2026-01-14T01:23:04.773393577Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 914.721063ms" Jan 14 01:23:04.773563 containerd[1645]: time="2026-01-14T01:23:04.773434434Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 14 01:23:04.774917 containerd[1645]: time="2026-01-14T01:23:04.774850244Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 14 01:23:05.633174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2491673904.mount: Deactivated 
successfully. Jan 14 01:23:07.133760 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 01:23:07.137479 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:23:07.402287 update_engine[1621]: I20260114 01:23:07.401973 1621 update_attempter.cc:509] Updating boot flags... Jan 14 01:23:07.542393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:23:07.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:07.550518 kernel: audit: type=1130 audit(1768353787.542:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:07.562985 (kubelet)[2412]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:23:07.754096 kubelet[2412]: E0114 01:23:07.753192 2412 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:23:07.757369 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:23:07.759418 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:23:07.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:23:07.761298 systemd[1]: kubelet.service: Consumed 254ms CPU time, 107.6M memory peak. 
Jan 14 01:23:07.767021 kernel: audit: type=1131 audit(1768353787.759:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:23:11.572836 containerd[1645]: time="2026-01-14T01:23:11.571818230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:23:11.575182 containerd[1645]: time="2026-01-14T01:23:11.575149195Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55728979" Jan 14 01:23:11.576599 containerd[1645]: time="2026-01-14T01:23:11.576553841Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:23:11.585807 containerd[1645]: time="2026-01-14T01:23:11.584683517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:23:11.586141 containerd[1645]: time="2026-01-14T01:23:11.586101572Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 6.811213501s" Jan 14 01:23:11.586280 containerd[1645]: time="2026-01-14T01:23:11.586251899Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 14 01:23:15.738156 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 01:23:15.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:15.739187 systemd[1]: kubelet.service: Consumed 254ms CPU time, 107.6M memory peak. Jan 14 01:23:15.746813 kernel: audit: type=1130 audit(1768353795.738:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:15.746966 kernel: audit: type=1131 audit(1768353795.738:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:15.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:15.750127 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:23:15.791711 systemd[1]: Reload requested from client PID 2465 ('systemctl') (unit session-12.scope)... Jan 14 01:23:15.791768 systemd[1]: Reloading... Jan 14 01:23:16.052833 zram_generator::config[2513]: No configuration found. Jan 14 01:23:16.436734 systemd[1]: Reloading finished in 644 ms. 
Jan 14 01:23:16.465000 audit: BPF prog-id=65 op=LOAD Jan 14 01:23:16.479106 kernel: audit: type=1334 audit(1768353796.465:291): prog-id=65 op=LOAD Jan 14 01:23:16.479194 kernel: audit: type=1334 audit(1768353796.465:292): prog-id=58 op=UNLOAD Jan 14 01:23:16.465000 audit: BPF prog-id=58 op=UNLOAD Jan 14 01:23:16.469000 audit: BPF prog-id=66 op=LOAD Jan 14 01:23:16.469000 audit: BPF prog-id=67 op=LOAD Jan 14 01:23:16.482327 kernel: audit: type=1334 audit(1768353796.469:293): prog-id=66 op=LOAD Jan 14 01:23:16.482401 kernel: audit: type=1334 audit(1768353796.469:294): prog-id=67 op=LOAD Jan 14 01:23:16.469000 audit: BPF prog-id=59 op=UNLOAD Jan 14 01:23:16.469000 audit: BPF prog-id=60 op=UNLOAD Jan 14 01:23:16.486856 kernel: audit: type=1334 audit(1768353796.469:295): prog-id=59 op=UNLOAD Jan 14 01:23:16.486936 kernel: audit: type=1334 audit(1768353796.469:296): prog-id=60 op=UNLOAD Jan 14 01:23:16.486994 kernel: audit: type=1334 audit(1768353796.470:297): prog-id=68 op=LOAD Jan 14 01:23:16.470000 audit: BPF prog-id=68 op=LOAD Jan 14 01:23:16.489192 kernel: audit: type=1334 audit(1768353796.470:298): prog-id=57 op=UNLOAD Jan 14 01:23:16.470000 audit: BPF prog-id=57 op=UNLOAD Jan 14 01:23:16.471000 audit: BPF prog-id=69 op=LOAD Jan 14 01:23:16.471000 audit: BPF prog-id=64 op=UNLOAD Jan 14 01:23:16.476000 audit: BPF prog-id=70 op=LOAD Jan 14 01:23:16.476000 audit: BPF prog-id=56 op=UNLOAD Jan 14 01:23:16.478000 audit: BPF prog-id=71 op=LOAD Jan 14 01:23:16.478000 audit: BPF prog-id=41 op=UNLOAD Jan 14 01:23:16.478000 audit: BPF prog-id=72 op=LOAD Jan 14 01:23:16.478000 audit: BPF prog-id=73 op=LOAD Jan 14 01:23:16.478000 audit: BPF prog-id=42 op=UNLOAD Jan 14 01:23:16.478000 audit: BPF prog-id=43 op=UNLOAD Jan 14 01:23:16.480000 audit: BPF prog-id=74 op=LOAD Jan 14 01:23:16.480000 audit: BPF prog-id=45 op=UNLOAD Jan 14 01:23:16.480000 audit: BPF prog-id=75 op=LOAD Jan 14 01:23:16.480000 audit: BPF prog-id=76 op=LOAD Jan 14 01:23:16.481000 audit: BPF prog-id=46 
op=UNLOAD Jan 14 01:23:16.481000 audit: BPF prog-id=47 op=UNLOAD Jan 14 01:23:16.483000 audit: BPF prog-id=77 op=LOAD Jan 14 01:23:16.483000 audit: BPF prog-id=44 op=UNLOAD Jan 14 01:23:16.490000 audit: BPF prog-id=78 op=LOAD Jan 14 01:23:16.490000 audit: BPF prog-id=51 op=UNLOAD Jan 14 01:23:16.491000 audit: BPF prog-id=79 op=LOAD Jan 14 01:23:16.491000 audit: BPF prog-id=80 op=LOAD Jan 14 01:23:16.491000 audit: BPF prog-id=52 op=UNLOAD Jan 14 01:23:16.491000 audit: BPF prog-id=53 op=UNLOAD Jan 14 01:23:16.502000 audit: BPF prog-id=81 op=LOAD Jan 14 01:23:16.502000 audit: BPF prog-id=82 op=LOAD Jan 14 01:23:16.502000 audit: BPF prog-id=54 op=UNLOAD Jan 14 01:23:16.502000 audit: BPF prog-id=55 op=UNLOAD Jan 14 01:23:16.503000 audit: BPF prog-id=83 op=LOAD Jan 14 01:23:16.503000 audit: BPF prog-id=48 op=UNLOAD Jan 14 01:23:16.503000 audit: BPF prog-id=84 op=LOAD Jan 14 01:23:16.503000 audit: BPF prog-id=85 op=LOAD Jan 14 01:23:16.503000 audit: BPF prog-id=49 op=UNLOAD Jan 14 01:23:16.503000 audit: BPF prog-id=50 op=UNLOAD Jan 14 01:23:16.529453 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 01:23:16.529593 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 01:23:16.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:23:16.530152 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:23:16.530274 systemd[1]: kubelet.service: Consumed 179ms CPU time, 98.4M memory peak. Jan 14 01:23:16.532849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:23:16.871710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 01:23:16.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:16.887690 (kubelet)[2580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:23:16.960212 kubelet[2580]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:23:16.960212 kubelet[2580]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 01:23:16.960212 kubelet[2580]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 14 01:23:16.963484 kubelet[2580]: I0114 01:23:16.963407 2580 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:23:17.832831 kubelet[2580]: I0114 01:23:17.832307 2580 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 01:23:17.832831 kubelet[2580]: I0114 01:23:17.832362 2580 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:23:17.833298 kubelet[2580]: I0114 01:23:17.833194 2580 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 01:23:17.873838 kubelet[2580]: E0114 01:23:17.873595 2580 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.32.214:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.32.214:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:23:17.874120 kubelet[2580]: I0114 01:23:17.874016 2580 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:23:17.904737 kubelet[2580]: I0114 01:23:17.904683 2580 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:23:17.919549 kubelet[2580]: I0114 01:23:17.918987 2580 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 01:23:17.926808 kubelet[2580]: I0114 01:23:17.926716 2580 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:23:17.927427 kubelet[2580]: I0114 01:23:17.926935 2580 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-aufav.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:23:17.929571 kubelet[2580]: I0114 01:23:17.929543 2580 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 14 01:23:17.930061 kubelet[2580]: I0114 01:23:17.929727 2580 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 01:23:17.931160 kubelet[2580]: I0114 01:23:17.931134 2580 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:23:17.935180 kubelet[2580]: I0114 01:23:17.935155 2580 kubelet.go:446] "Attempting to sync node with API server" Jan 14 01:23:17.935499 kubelet[2580]: I0114 01:23:17.935464 2580 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:23:17.937458 kubelet[2580]: I0114 01:23:17.937209 2580 kubelet.go:352] "Adding apiserver pod source" Jan 14 01:23:17.937458 kubelet[2580]: I0114 01:23:17.937257 2580 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:23:17.956591 kubelet[2580]: W0114 01:23:17.956510 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.32.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-aufav.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.32.214:6443: connect: connection refused Jan 14 01:23:17.957552 kubelet[2580]: E0114 01:23:17.957299 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.32.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-aufav.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.32.214:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:23:17.961659 kubelet[2580]: I0114 01:23:17.961628 2580 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:23:17.965809 kubelet[2580]: W0114 01:23:17.964900 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.32.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": 
dial tcp 10.230.32.214:6443: connect: connection refused Jan 14 01:23:17.965809 kubelet[2580]: E0114 01:23:17.964967 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.32.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.32.214:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:23:17.966376 kubelet[2580]: I0114 01:23:17.966351 2580 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 01:23:17.967291 kubelet[2580]: W0114 01:23:17.967266 2580 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 01:23:17.968610 kubelet[2580]: I0114 01:23:17.968585 2580 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:23:17.968815 kubelet[2580]: I0114 01:23:17.968778 2580 server.go:1287] "Started kubelet" Jan 14 01:23:17.975799 kubelet[2580]: I0114 01:23:17.975174 2580 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:23:17.976251 kubelet[2580]: I0114 01:23:17.976184 2580 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:23:17.976950 kubelet[2580]: I0114 01:23:17.976924 2580 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:23:17.978134 kubelet[2580]: I0114 01:23:17.978081 2580 server.go:479] "Adding debug handlers to kubelet server" Jan 14 01:23:17.981356 kubelet[2580]: E0114 01:23:17.978472 2580 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.32.214:6443/api/v1/namespaces/default/events\": dial tcp 10.230.32.214:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-aufav.gb1.brightbox.com.188a7468fd2390d2 default 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-aufav.gb1.brightbox.com,UID:srv-aufav.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-aufav.gb1.brightbox.com,},FirstTimestamp:2026-01-14 01:23:17.968728274 +0000 UTC m=+1.074149697,LastTimestamp:2026-01-14 01:23:17.968728274 +0000 UTC m=+1.074149697,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-aufav.gb1.brightbox.com,}" Jan 14 01:23:17.987484 kubelet[2580]: I0114 01:23:17.986685 2580 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:23:17.987772 kubelet[2580]: I0114 01:23:17.987742 2580 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:23:18.000119 kubelet[2580]: I0114 01:23:18.000043 2580 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:23:18.000000 audit[2591]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:18.001953 kubelet[2580]: I0114 01:23:18.001908 2580 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:23:18.002031 kubelet[2580]: I0114 01:23:18.002015 2580 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:23:18.000000 audit[2591]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff7fac4310 a2=0 a3=0 items=0 ppid=2580 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.000000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:23:18.003183 kubelet[2580]: W0114 01:23:18.002647 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.32.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.32.214:6443: connect: connection refused Jan 14 01:23:18.003183 kubelet[2580]: E0114 01:23:18.002711 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.32.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.32.214:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:23:18.003183 kubelet[2580]: E0114 01:23:18.002953 2580 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:23:18.004015 kubelet[2580]: I0114 01:23:18.003407 2580 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:23:18.004015 kubelet[2580]: E0114 01:23:18.003958 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-aufav.gb1.brightbox.com\" not found" Jan 14 01:23:18.004447 kubelet[2580]: E0114 01:23:18.004398 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.32.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-aufav.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.32.214:6443: connect: connection refused" interval="200ms" Jan 14 01:23:18.006000 audit[2592]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 
01:23:18.006000 audit[2592]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe77c0b020 a2=0 a3=0 items=0 ppid=2580 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.006000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:23:18.008201 kubelet[2580]: I0114 01:23:18.008132 2580 factory.go:221] Registration of the containerd container factory successfully Jan 14 01:23:18.008201 kubelet[2580]: I0114 01:23:18.008156 2580 factory.go:221] Registration of the systemd container factory successfully Jan 14 01:23:18.013000 audit[2596]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2596 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:18.013000 audit[2596]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffcb4a91070 a2=0 a3=0 items=0 ppid=2580 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.013000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:23:18.017000 audit[2598]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:18.017000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fffaf2ea370 a2=0 a3=0 items=0 ppid=2580 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.017000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:23:18.041000 audit[2603]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2603 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:18.041000 audit[2603]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffcf459d4e0 a2=0 a3=0 items=0 ppid=2580 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.041000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 01:23:18.044333 kubelet[2580]: I0114 01:23:18.044260 2580 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 14 01:23:18.045000 audit[2607]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2607 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:18.045000 audit[2607]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffd0339580 a2=0 a3=0 items=0 ppid=2580 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.045000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:23:18.047000 audit[2608]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2608 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:18.047000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc75af7190 a2=0 a3=0 items=0 ppid=2580 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.047000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:23:18.048776 kubelet[2580]: I0114 01:23:18.048642 2580 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 01:23:18.050117 kubelet[2580]: I0114 01:23:18.048953 2580 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 01:23:18.050117 kubelet[2580]: I0114 01:23:18.049012 2580 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 14 01:23:18.050117 kubelet[2580]: I0114 01:23:18.049030 2580 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 01:23:18.050117 kubelet[2580]: E0114 01:23:18.049128 2580 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:23:18.050117 kubelet[2580]: W0114 01:23:18.050040 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.32.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.32.214:6443: connect: connection refused Jan 14 01:23:18.050117 kubelet[2580]: E0114 01:23:18.050106 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.32.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.32.214:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:23:18.051293 kubelet[2580]: I0114 01:23:18.051260 2580 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:23:18.051293 kubelet[2580]: I0114 01:23:18.051283 2580 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:23:18.051462 kubelet[2580]: I0114 01:23:18.051439 2580 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:23:18.051000 audit[2609]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:18.051000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1cb0b5b0 a2=0 a3=0 items=0 ppid=2580 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.051000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:23:18.052000 audit[2610]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2610 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:18.052000 audit[2610]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea66982b0 a2=0 a3=0 items=0 ppid=2580 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.052000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:23:18.055643 kubelet[2580]: I0114 01:23:18.055602 2580 policy_none.go:49] "None policy: Start" Jan 14 01:23:18.055643 kubelet[2580]: I0114 01:23:18.055642 2580 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:23:18.055819 kubelet[2580]: I0114 01:23:18.055671 2580 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:23:18.055000 audit[2613]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2613 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:18.055000 audit[2613]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe92971bc0 a2=0 a3=0 items=0 ppid=2580 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.055000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:23:18.055000 audit[2611]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2611 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 
01:23:18.055000 audit[2611]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc5b72cf0 a2=0 a3=0 items=0 ppid=2580 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.055000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:23:18.059000 audit[2614]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2614 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:18.059000 audit[2614]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe1d15f830 a2=0 a3=0 items=0 ppid=2580 pid=2614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.059000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:23:18.068572 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 01:23:18.083273 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 01:23:18.093663 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 14 01:23:18.106317 kubelet[2580]: I0114 01:23:18.106280 2580 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 01:23:18.108378 kubelet[2580]: E0114 01:23:18.107547 2580 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-aufav.gb1.brightbox.com\" not found" Jan 14 01:23:18.108933 kubelet[2580]: I0114 01:23:18.108909 2580 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:23:18.109279 kubelet[2580]: I0114 01:23:18.109205 2580 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:23:18.109901 kubelet[2580]: I0114 01:23:18.109880 2580 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:23:18.114033 kubelet[2580]: E0114 01:23:18.113861 2580 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 01:23:18.114033 kubelet[2580]: E0114 01:23:18.113999 2580 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-aufav.gb1.brightbox.com\" not found" Jan 14 01:23:18.179671 systemd[1]: Created slice kubepods-burstable-pod6637f0d529935bd1d2549d56ce0fc202.slice - libcontainer container kubepods-burstable-pod6637f0d529935bd1d2549d56ce0fc202.slice. 
Jan 14 01:23:18.199425 kubelet[2580]: E0114 01:23:18.198998 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aufav.gb1.brightbox.com\" not found" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:18.202661 kubelet[2580]: I0114 01:23:18.202631 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61ba814d75050e41b9d61d2ffb14513c-kubeconfig\") pod \"kube-controller-manager-srv-aufav.gb1.brightbox.com\" (UID: \"61ba814d75050e41b9d61d2ffb14513c\") " pod="kube-system/kube-controller-manager-srv-aufav.gb1.brightbox.com" Jan 14 01:23:18.202991 kubelet[2580]: I0114 01:23:18.202959 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61ba814d75050e41b9d61d2ffb14513c-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-aufav.gb1.brightbox.com\" (UID: \"61ba814d75050e41b9d61d2ffb14513c\") " pod="kube-system/kube-controller-manager-srv-aufav.gb1.brightbox.com" Jan 14 01:23:18.203188 kubelet[2580]: I0114 01:23:18.203137 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6637f0d529935bd1d2549d56ce0fc202-ca-certs\") pod \"kube-apiserver-srv-aufav.gb1.brightbox.com\" (UID: \"6637f0d529935bd1d2549d56ce0fc202\") " pod="kube-system/kube-apiserver-srv-aufav.gb1.brightbox.com" Jan 14 01:23:18.203330 kubelet[2580]: I0114 01:23:18.203305 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61ba814d75050e41b9d61d2ffb14513c-ca-certs\") pod \"kube-controller-manager-srv-aufav.gb1.brightbox.com\" (UID: \"61ba814d75050e41b9d61d2ffb14513c\") " 
pod="kube-system/kube-controller-manager-srv-aufav.gb1.brightbox.com" Jan 14 01:23:18.203514 kubelet[2580]: I0114 01:23:18.203483 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/61ba814d75050e41b9d61d2ffb14513c-flexvolume-dir\") pod \"kube-controller-manager-srv-aufav.gb1.brightbox.com\" (UID: \"61ba814d75050e41b9d61d2ffb14513c\") " pod="kube-system/kube-controller-manager-srv-aufav.gb1.brightbox.com" Jan 14 01:23:18.203806 kubelet[2580]: I0114 01:23:18.203620 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61ba814d75050e41b9d61d2ffb14513c-k8s-certs\") pod \"kube-controller-manager-srv-aufav.gb1.brightbox.com\" (UID: \"61ba814d75050e41b9d61d2ffb14513c\") " pod="kube-system/kube-controller-manager-srv-aufav.gb1.brightbox.com" Jan 14 01:23:18.203806 kubelet[2580]: I0114 01:23:18.203664 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6637f0d529935bd1d2549d56ce0fc202-k8s-certs\") pod \"kube-apiserver-srv-aufav.gb1.brightbox.com\" (UID: \"6637f0d529935bd1d2549d56ce0fc202\") " pod="kube-system/kube-apiserver-srv-aufav.gb1.brightbox.com" Jan 14 01:23:18.203806 kubelet[2580]: I0114 01:23:18.203693 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6637f0d529935bd1d2549d56ce0fc202-usr-share-ca-certificates\") pod \"kube-apiserver-srv-aufav.gb1.brightbox.com\" (UID: \"6637f0d529935bd1d2549d56ce0fc202\") " pod="kube-system/kube-apiserver-srv-aufav.gb1.brightbox.com" Jan 14 01:23:18.203806 kubelet[2580]: I0114 01:23:18.203722 2580 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/4eb397f3cff0f102d4a23e8cff3b21de-kubeconfig\") pod \"kube-scheduler-srv-aufav.gb1.brightbox.com\" (UID: \"4eb397f3cff0f102d4a23e8cff3b21de\") " pod="kube-system/kube-scheduler-srv-aufav.gb1.brightbox.com" Jan 14 01:23:18.204254 systemd[1]: Created slice kubepods-burstable-pod4eb397f3cff0f102d4a23e8cff3b21de.slice - libcontainer container kubepods-burstable-pod4eb397f3cff0f102d4a23e8cff3b21de.slice. Jan 14 01:23:18.206635 kubelet[2580]: E0114 01:23:18.206432 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.32.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-aufav.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.32.214:6443: connect: connection refused" interval="400ms" Jan 14 01:23:18.208713 kubelet[2580]: E0114 01:23:18.208441 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aufav.gb1.brightbox.com\" not found" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:18.213422 kubelet[2580]: I0114 01:23:18.213336 2580 kubelet_node_status.go:75] "Attempting to register node" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:18.215423 kubelet[2580]: E0114 01:23:18.215308 2580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.32.214:6443/api/v1/nodes\": dial tcp 10.230.32.214:6443: connect: connection refused" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:18.219547 systemd[1]: Created slice kubepods-burstable-pod61ba814d75050e41b9d61d2ffb14513c.slice - libcontainer container kubepods-burstable-pod61ba814d75050e41b9d61d2ffb14513c.slice. 
Jan 14 01:23:18.223314 kubelet[2580]: E0114 01:23:18.223266 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aufav.gb1.brightbox.com\" not found" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:18.418822 kubelet[2580]: I0114 01:23:18.418769 2580 kubelet_node_status.go:75] "Attempting to register node" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:18.419680 kubelet[2580]: E0114 01:23:18.419641 2580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.32.214:6443/api/v1/nodes\": dial tcp 10.230.32.214:6443: connect: connection refused" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:18.503321 containerd[1645]: time="2026-01-14T01:23:18.503169296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-aufav.gb1.brightbox.com,Uid:6637f0d529935bd1d2549d56ce0fc202,Namespace:kube-system,Attempt:0,}" Jan 14 01:23:18.510694 containerd[1645]: time="2026-01-14T01:23:18.510458805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-aufav.gb1.brightbox.com,Uid:4eb397f3cff0f102d4a23e8cff3b21de,Namespace:kube-system,Attempt:0,}" Jan 14 01:23:18.526988 containerd[1645]: time="2026-01-14T01:23:18.526918257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-aufav.gb1.brightbox.com,Uid:61ba814d75050e41b9d61d2ffb14513c,Namespace:kube-system,Attempt:0,}" Jan 14 01:23:18.608107 kubelet[2580]: E0114 01:23:18.608027 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.32.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-aufav.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.32.214:6443: connect: connection refused" interval="800ms" Jan 14 01:23:18.707215 containerd[1645]: time="2026-01-14T01:23:18.706760447Z" level=info msg="connecting to shim 1b2e99aa7f88c96b4ecae23e9ca237a1d92dd5fc6696e5a3ac76bc2e943379d0" 
address="unix:///run/containerd/s/bf9094febd57e5e54d3e6c9a62bd97284d49c8ac0264f46041e83de98c860f81" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:23:18.716812 containerd[1645]: time="2026-01-14T01:23:18.716587512Z" level=info msg="connecting to shim c9a935d092e4de44b19cbc50a318167b0da9883a0a5d53bcf22df0eee6b4b9df" address="unix:///run/containerd/s/3d770f9cd3adf36edba01a323182546a12be654d8f89f94d86fa10725b290d32" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:23:18.727355 containerd[1645]: time="2026-01-14T01:23:18.727153247Z" level=info msg="connecting to shim 78e924684078232bf6e7fe51eb1b8e6becb0e7a71fef7a0113b1e01da92f2e72" address="unix:///run/containerd/s/9318edb1c0b860fb4ee09a2d7b564da9472f7394d1052041c4fb38f8bd8e9b3d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:23:18.825254 kubelet[2580]: I0114 01:23:18.825190 2580 kubelet_node_status.go:75] "Attempting to register node" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:18.826711 kubelet[2580]: E0114 01:23:18.826632 2580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.32.214:6443/api/v1/nodes\": dial tcp 10.230.32.214:6443: connect: connection refused" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:18.847172 systemd[1]: Started cri-containerd-1b2e99aa7f88c96b4ecae23e9ca237a1d92dd5fc6696e5a3ac76bc2e943379d0.scope - libcontainer container 1b2e99aa7f88c96b4ecae23e9ca237a1d92dd5fc6696e5a3ac76bc2e943379d0. Jan 14 01:23:18.851458 systemd[1]: Started cri-containerd-78e924684078232bf6e7fe51eb1b8e6becb0e7a71fef7a0113b1e01da92f2e72.scope - libcontainer container 78e924684078232bf6e7fe51eb1b8e6becb0e7a71fef7a0113b1e01da92f2e72. Jan 14 01:23:18.855026 systemd[1]: Started cri-containerd-c9a935d092e4de44b19cbc50a318167b0da9883a0a5d53bcf22df0eee6b4b9df.scope - libcontainer container c9a935d092e4de44b19cbc50a318167b0da9883a0a5d53bcf22df0eee6b4b9df. 
Jan 14 01:23:18.895248 kubelet[2580]: W0114 01:23:18.895088 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.32.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.32.214:6443: connect: connection refused Jan 14 01:23:18.895248 kubelet[2580]: E0114 01:23:18.895181 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.32.214:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.32.214:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:23:18.898000 audit: BPF prog-id=86 op=LOAD Jan 14 01:23:18.901000 audit: BPF prog-id=87 op=LOAD Jan 14 01:23:18.901000 audit[2668]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=2644 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339613933356430393265346465343462313963626335306133313831 Jan 14 01:23:18.902000 audit: BPF prog-id=87 op=UNLOAD Jan 14 01:23:18.902000 audit[2668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2644 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.902000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339613933356430393265346465343462313963626335306133313831 Jan 14 01:23:18.902000 audit: BPF prog-id=88 op=LOAD Jan 14 01:23:18.902000 audit[2668]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=2644 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339613933356430393265346465343462313963626335306133313831 Jan 14 01:23:18.903000 audit: BPF prog-id=89 op=LOAD Jan 14 01:23:18.903000 audit[2668]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=2644 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339613933356430393265346465343462313963626335306133313831 Jan 14 01:23:18.903000 audit: BPF prog-id=89 op=UNLOAD Jan 14 01:23:18.903000 audit[2668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2644 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 14 01:23:18.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339613933356430393265346465343462313963626335306133313831 Jan 14 01:23:18.903000 audit: BPF prog-id=88 op=UNLOAD Jan 14 01:23:18.903000 audit[2668]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2644 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339613933356430393265346465343462313963626335306133313831 Jan 14 01:23:18.903000 audit: BPF prog-id=90 op=LOAD Jan 14 01:23:18.903000 audit[2668]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=2644 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339613933356430393265346465343462313963626335306133313831 Jan 14 01:23:18.906000 audit: BPF prog-id=91 op=LOAD Jan 14 01:23:18.907000 audit: BPF prog-id=92 op=LOAD Jan 14 01:23:18.907000 audit[2675]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=2655 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738653932343638343037383233326266366537666535316562316238 Jan 14 01:23:18.907000 audit: BPF prog-id=92 op=UNLOAD Jan 14 01:23:18.907000 audit[2675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2655 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.907000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738653932343638343037383233326266366537666535316562316238 Jan 14 01:23:18.908000 audit: BPF prog-id=93 op=LOAD Jan 14 01:23:18.908000 audit[2675]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=2655 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738653932343638343037383233326266366537666535316562316238 Jan 14 01:23:18.908000 audit: BPF prog-id=94 op=LOAD Jan 14 01:23:18.908000 audit[2675]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=2655 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738653932343638343037383233326266366537666535316562316238 Jan 14 01:23:18.908000 audit: BPF prog-id=94 op=UNLOAD Jan 14 01:23:18.908000 audit[2675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2655 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738653932343638343037383233326266366537666535316562316238 Jan 14 01:23:18.908000 audit: BPF prog-id=93 op=UNLOAD Jan 14 01:23:18.908000 audit[2675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2655 pid=2675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738653932343638343037383233326266366537666535316562316238 Jan 14 01:23:18.908000 audit: BPF prog-id=95 op=LOAD Jan 14 01:23:18.908000 audit[2675]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=2655 pid=2675 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.908000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738653932343638343037383233326266366537666535316562316238 Jan 14 01:23:18.911000 audit: BPF prog-id=96 op=LOAD Jan 14 01:23:18.912000 audit: BPF prog-id=97 op=LOAD Jan 14 01:23:18.912000 audit[2657]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2630 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162326539396161376638386339366234656361653233653963613233 Jan 14 01:23:18.912000 audit: BPF prog-id=97 op=UNLOAD Jan 14 01:23:18.912000 audit[2657]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2630 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.912000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162326539396161376638386339366234656361653233653963613233 Jan 14 01:23:18.913000 audit: BPF prog-id=98 op=LOAD Jan 14 01:23:18.913000 audit[2657]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2630 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162326539396161376638386339366234656361653233653963613233 Jan 14 01:23:18.914000 audit: BPF prog-id=99 op=LOAD Jan 14 01:23:18.914000 audit[2657]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2630 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162326539396161376638386339366234656361653233653963613233 Jan 14 01:23:18.914000 audit: BPF prog-id=99 op=UNLOAD Jan 14 01:23:18.914000 audit[2657]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2630 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162326539396161376638386339366234656361653233653963613233 Jan 14 01:23:18.914000 audit: BPF prog-id=98 op=UNLOAD 
Jan 14 01:23:18.914000 audit[2657]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2630 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162326539396161376638386339366234656361653233653963613233 Jan 14 01:23:18.914000 audit: BPF prog-id=100 op=LOAD Jan 14 01:23:18.914000 audit[2657]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2630 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:18.914000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162326539396161376638386339366234656361653233653963613233 Jan 14 01:23:19.013155 containerd[1645]: time="2026-01-14T01:23:19.013010637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-aufav.gb1.brightbox.com,Uid:6637f0d529935bd1d2549d56ce0fc202,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b2e99aa7f88c96b4ecae23e9ca237a1d92dd5fc6696e5a3ac76bc2e943379d0\"" Jan 14 01:23:19.018820 containerd[1645]: time="2026-01-14T01:23:19.018638590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-aufav.gb1.brightbox.com,Uid:4eb397f3cff0f102d4a23e8cff3b21de,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9a935d092e4de44b19cbc50a318167b0da9883a0a5d53bcf22df0eee6b4b9df\"" Jan 14 
01:23:19.026183 containerd[1645]: time="2026-01-14T01:23:19.026098532Z" level=info msg="CreateContainer within sandbox \"c9a935d092e4de44b19cbc50a318167b0da9883a0a5d53bcf22df0eee6b4b9df\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 01:23:19.026772 containerd[1645]: time="2026-01-14T01:23:19.026110523Z" level=info msg="CreateContainer within sandbox \"1b2e99aa7f88c96b4ecae23e9ca237a1d92dd5fc6696e5a3ac76bc2e943379d0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 01:23:19.028645 containerd[1645]: time="2026-01-14T01:23:19.028605911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-aufav.gb1.brightbox.com,Uid:61ba814d75050e41b9d61d2ffb14513c,Namespace:kube-system,Attempt:0,} returns sandbox id \"78e924684078232bf6e7fe51eb1b8e6becb0e7a71fef7a0113b1e01da92f2e72\"" Jan 14 01:23:19.032431 containerd[1645]: time="2026-01-14T01:23:19.032374453Z" level=info msg="CreateContainer within sandbox \"78e924684078232bf6e7fe51eb1b8e6becb0e7a71fef7a0113b1e01da92f2e72\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 01:23:19.043390 containerd[1645]: time="2026-01-14T01:23:19.043342819Z" level=info msg="Container 3f93b5a05eeb54d9572b60e3048b5ab65e105282293fcbdb08f35c2b374d15dc: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:23:19.045395 containerd[1645]: time="2026-01-14T01:23:19.045350270Z" level=info msg="Container 736d305fd45e2d4495407a2b46e409388cd676ed557959e987c80b5b16a0ea7c: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:23:19.049247 containerd[1645]: time="2026-01-14T01:23:19.049199904Z" level=info msg="Container 527449b801852819b870616264d556abfbb8b60b9ccd8fd6b3ed845fc9e53ccc: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:23:19.054779 containerd[1645]: time="2026-01-14T01:23:19.054681529Z" level=info msg="CreateContainer within sandbox \"1b2e99aa7f88c96b4ecae23e9ca237a1d92dd5fc6696e5a3ac76bc2e943379d0\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3f93b5a05eeb54d9572b60e3048b5ab65e105282293fcbdb08f35c2b374d15dc\"" Jan 14 01:23:19.055932 containerd[1645]: time="2026-01-14T01:23:19.055865960Z" level=info msg="StartContainer for \"3f93b5a05eeb54d9572b60e3048b5ab65e105282293fcbdb08f35c2b374d15dc\"" Jan 14 01:23:19.058359 containerd[1645]: time="2026-01-14T01:23:19.058314882Z" level=info msg="CreateContainer within sandbox \"c9a935d092e4de44b19cbc50a318167b0da9883a0a5d53bcf22df0eee6b4b9df\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"736d305fd45e2d4495407a2b46e409388cd676ed557959e987c80b5b16a0ea7c\"" Jan 14 01:23:19.061472 containerd[1645]: time="2026-01-14T01:23:19.061438003Z" level=info msg="StartContainer for \"736d305fd45e2d4495407a2b46e409388cd676ed557959e987c80b5b16a0ea7c\"" Jan 14 01:23:19.061756 containerd[1645]: time="2026-01-14T01:23:19.061714449Z" level=info msg="connecting to shim 3f93b5a05eeb54d9572b60e3048b5ab65e105282293fcbdb08f35c2b374d15dc" address="unix:///run/containerd/s/bf9094febd57e5e54d3e6c9a62bd97284d49c8ac0264f46041e83de98c860f81" protocol=ttrpc version=3 Jan 14 01:23:19.067881 containerd[1645]: time="2026-01-14T01:23:19.067844066Z" level=info msg="connecting to shim 736d305fd45e2d4495407a2b46e409388cd676ed557959e987c80b5b16a0ea7c" address="unix:///run/containerd/s/3d770f9cd3adf36edba01a323182546a12be654d8f89f94d86fa10725b290d32" protocol=ttrpc version=3 Jan 14 01:23:19.070636 containerd[1645]: time="2026-01-14T01:23:19.070590055Z" level=info msg="CreateContainer within sandbox \"78e924684078232bf6e7fe51eb1b8e6becb0e7a71fef7a0113b1e01da92f2e72\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"527449b801852819b870616264d556abfbb8b60b9ccd8fd6b3ed845fc9e53ccc\"" Jan 14 01:23:19.072947 containerd[1645]: time="2026-01-14T01:23:19.072915482Z" level=info msg="StartContainer for \"527449b801852819b870616264d556abfbb8b60b9ccd8fd6b3ed845fc9e53ccc\"" Jan 
14 01:23:19.080444 containerd[1645]: time="2026-01-14T01:23:19.080232386Z" level=info msg="connecting to shim 527449b801852819b870616264d556abfbb8b60b9ccd8fd6b3ed845fc9e53ccc" address="unix:///run/containerd/s/9318edb1c0b860fb4ee09a2d7b564da9472f7394d1052041c4fb38f8bd8e9b3d" protocol=ttrpc version=3 Jan 14 01:23:19.109046 systemd[1]: Started cri-containerd-736d305fd45e2d4495407a2b46e409388cd676ed557959e987c80b5b16a0ea7c.scope - libcontainer container 736d305fd45e2d4495407a2b46e409388cd676ed557959e987c80b5b16a0ea7c. Jan 14 01:23:19.123110 systemd[1]: Started cri-containerd-3f93b5a05eeb54d9572b60e3048b5ab65e105282293fcbdb08f35c2b374d15dc.scope - libcontainer container 3f93b5a05eeb54d9572b60e3048b5ab65e105282293fcbdb08f35c2b374d15dc. Jan 14 01:23:19.139019 systemd[1]: Started cri-containerd-527449b801852819b870616264d556abfbb8b60b9ccd8fd6b3ed845fc9e53ccc.scope - libcontainer container 527449b801852819b870616264d556abfbb8b60b9ccd8fd6b3ed845fc9e53ccc. Jan 14 01:23:19.147000 audit: BPF prog-id=101 op=LOAD Jan 14 01:23:19.147000 audit: BPF prog-id=102 op=LOAD Jan 14 01:23:19.147000 audit[2752]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=2644 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.147000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733366433303566643435653264343439353430376132623436653430 Jan 14 01:23:19.148000 audit: BPF prog-id=102 op=UNLOAD Jan 14 01:23:19.148000 audit[2752]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2644 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733366433303566643435653264343439353430376132623436653430 Jan 14 01:23:19.148000 audit: BPF prog-id=103 op=LOAD Jan 14 01:23:19.148000 audit[2752]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=2644 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733366433303566643435653264343439353430376132623436653430 Jan 14 01:23:19.148000 audit: BPF prog-id=104 op=LOAD Jan 14 01:23:19.148000 audit[2752]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=2644 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733366433303566643435653264343439353430376132623436653430 Jan 14 01:23:19.148000 audit: BPF prog-id=104 op=UNLOAD Jan 14 01:23:19.148000 audit[2752]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2644 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733366433303566643435653264343439353430376132623436653430 Jan 14 01:23:19.148000 audit: BPF prog-id=103 op=UNLOAD Jan 14 01:23:19.148000 audit[2752]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2644 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733366433303566643435653264343439353430376132623436653430 Jan 14 01:23:19.148000 audit: BPF prog-id=105 op=LOAD Jan 14 01:23:19.148000 audit[2752]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=2644 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.148000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733366433303566643435653264343439353430376132623436653430 Jan 14 01:23:19.166000 audit: BPF prog-id=106 op=LOAD Jan 14 01:23:19.169000 audit: BPF prog-id=107 op=LOAD Jan 14 01:23:19.169000 audit[2751]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 
a3=0 items=0 ppid=2630 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366393362356130356565623534643935373262363065333034386235 Jan 14 01:23:19.169000 audit: BPF prog-id=107 op=UNLOAD Jan 14 01:23:19.169000 audit[2751]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2630 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366393362356130356565623534643935373262363065333034386235 Jan 14 01:23:19.169000 audit: BPF prog-id=108 op=LOAD Jan 14 01:23:19.169000 audit[2751]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2630 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366393362356130356565623534643935373262363065333034386235 Jan 14 01:23:19.169000 audit: BPF prog-id=109 op=LOAD Jan 14 01:23:19.169000 audit[2751]: SYSCALL arch=c000003e syscall=321 
success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2630 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.169000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366393362356130356565623534643935373262363065333034386235 Jan 14 01:23:19.171000 audit: BPF prog-id=109 op=UNLOAD Jan 14 01:23:19.171000 audit[2751]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2630 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366393362356130356565623534643935373262363065333034386235 Jan 14 01:23:19.171000 audit: BPF prog-id=108 op=UNLOAD Jan 14 01:23:19.171000 audit[2751]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2630 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366393362356130356565623534643935373262363065333034386235 Jan 14 01:23:19.171000 audit: BPF prog-id=110 op=LOAD Jan 14 01:23:19.171000 audit[2751]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2630 pid=2751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366393362356130356565623534643935373262363065333034386235 Jan 14 01:23:19.181470 kubelet[2580]: W0114 01:23:19.179686 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.32.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.32.214:6443: connect: connection refused Jan 14 01:23:19.182218 kubelet[2580]: E0114 01:23:19.181507 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.32.214:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.32.214:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:23:19.202000 audit: BPF prog-id=111 op=LOAD Jan 14 01:23:19.202000 audit: BPF prog-id=112 op=LOAD Jan 14 01:23:19.202000 audit[2759]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206238 a2=98 a3=0 items=0 ppid=2655 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.202000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532373434396238303138353238313962383730363136323634643535 Jan 14 01:23:19.203000 audit: BPF prog-id=112 op=UNLOAD Jan 14 01:23:19.203000 audit[2759]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2655 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532373434396238303138353238313962383730363136323634643535 Jan 14 01:23:19.203000 audit: BPF prog-id=113 op=LOAD Jan 14 01:23:19.203000 audit[2759]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000206488 a2=98 a3=0 items=0 ppid=2655 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532373434396238303138353238313962383730363136323634643535 Jan 14 01:23:19.203000 audit: BPF prog-id=114 op=LOAD Jan 14 01:23:19.203000 audit[2759]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000206218 a2=98 a3=0 items=0 ppid=2655 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:23:19.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532373434396238303138353238313962383730363136323634643535 Jan 14 01:23:19.203000 audit: BPF prog-id=114 op=UNLOAD Jan 14 01:23:19.203000 audit[2759]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2655 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532373434396238303138353238313962383730363136323634643535 Jan 14 01:23:19.203000 audit: BPF prog-id=113 op=UNLOAD Jan 14 01:23:19.203000 audit[2759]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2655 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532373434396238303138353238313962383730363136323634643535 Jan 14 01:23:19.204000 audit: BPF prog-id=115 op=LOAD Jan 14 01:23:19.204000 audit[2759]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0002066e8 a2=98 a3=0 items=0 ppid=2655 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:19.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532373434396238303138353238313962383730363136323634643535 Jan 14 01:23:19.263949 containerd[1645]: time="2026-01-14T01:23:19.263328815Z" level=info msg="StartContainer for \"736d305fd45e2d4495407a2b46e409388cd676ed557959e987c80b5b16a0ea7c\" returns successfully" Jan 14 01:23:19.275551 containerd[1645]: time="2026-01-14T01:23:19.275341794Z" level=info msg="StartContainer for \"3f93b5a05eeb54d9572b60e3048b5ab65e105282293fcbdb08f35c2b374d15dc\" returns successfully" Jan 14 01:23:19.303889 containerd[1645]: time="2026-01-14T01:23:19.303840478Z" level=info msg="StartContainer for \"527449b801852819b870616264d556abfbb8b60b9ccd8fd6b3ed845fc9e53ccc\" returns successfully" Jan 14 01:23:19.368057 kubelet[2580]: W0114 01:23:19.367950 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.32.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-aufav.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.32.214:6443: connect: connection refused Jan 14 01:23:19.368277 kubelet[2580]: E0114 01:23:19.368087 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.32.214:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-aufav.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.32.214:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:23:19.408987 kubelet[2580]: E0114 01:23:19.408930 2580 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.32.214:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-aufav.gb1.brightbox.com?timeout=10s\": 
dial tcp 10.230.32.214:6443: connect: connection refused" interval="1.6s" Jan 14 01:23:19.435895 kubelet[2580]: W0114 01:23:19.435747 2580 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.32.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.32.214:6443: connect: connection refused Jan 14 01:23:19.435895 kubelet[2580]: E0114 01:23:19.435883 2580 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.32.214:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.32.214:6443: connect: connection refused" logger="UnhandledError" Jan 14 01:23:19.632007 kubelet[2580]: I0114 01:23:19.631013 2580 kubelet_node_status.go:75] "Attempting to register node" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:19.632720 kubelet[2580]: E0114 01:23:19.632679 2580 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.32.214:6443/api/v1/nodes\": dial tcp 10.230.32.214:6443: connect: connection refused" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:20.098421 kubelet[2580]: E0114 01:23:20.098104 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aufav.gb1.brightbox.com\" not found" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:20.105087 kubelet[2580]: E0114 01:23:20.105046 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aufav.gb1.brightbox.com\" not found" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:20.108387 kubelet[2580]: E0114 01:23:20.108361 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aufav.gb1.brightbox.com\" not found" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:21.110955 
kubelet[2580]: E0114 01:23:21.110447 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aufav.gb1.brightbox.com\" not found" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:21.112754 kubelet[2580]: E0114 01:23:21.112002 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aufav.gb1.brightbox.com\" not found" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:21.113879 kubelet[2580]: E0114 01:23:21.113656 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aufav.gb1.brightbox.com\" not found" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:21.240080 kubelet[2580]: I0114 01:23:21.239028 2580 kubelet_node_status.go:75] "Attempting to register node" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:22.113828 kubelet[2580]: E0114 01:23:22.112484 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aufav.gb1.brightbox.com\" not found" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:22.113828 kubelet[2580]: E0114 01:23:22.112550 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aufav.gb1.brightbox.com\" not found" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:22.160005 kubelet[2580]: E0114 01:23:22.158943 2580 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-aufav.gb1.brightbox.com\" not found" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:22.453763 kubelet[2580]: E0114 01:23:22.453707 2580 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-aufav.gb1.brightbox.com\" not found" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:22.487779 kubelet[2580]: E0114 01:23:22.487563 2580 event.go:359] "Server rejected event (will not retry!)" 
err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-aufav.gb1.brightbox.com.188a7468fd2390d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-aufav.gb1.brightbox.com,UID:srv-aufav.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-aufav.gb1.brightbox.com,},FirstTimestamp:2026-01-14 01:23:17.968728274 +0000 UTC m=+1.074149697,LastTimestamp:2026-01-14 01:23:17.968728274 +0000 UTC m=+1.074149697,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-aufav.gb1.brightbox.com,}" Jan 14 01:23:22.541505 kubelet[2580]: I0114 01:23:22.541071 2580 kubelet_node_status.go:78] "Successfully registered node" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:22.541505 kubelet[2580]: E0114 01:23:22.541141 2580 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-aufav.gb1.brightbox.com\": node \"srv-aufav.gb1.brightbox.com\" not found" Jan 14 01:23:22.546929 kubelet[2580]: E0114 01:23:22.546765 2580 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-aufav.gb1.brightbox.com.188a7468ff2d92d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-aufav.gb1.brightbox.com,UID:srv-aufav.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:srv-aufav.gb1.brightbox.com,},FirstTimestamp:2026-01-14 01:23:18.002938582 +0000 UTC m=+1.108360004,LastTimestamp:2026-01-14 01:23:18.002938582 +0000 UTC m=+1.108360004,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-aufav.gb1.brightbox.com,}" Jan 14 01:23:22.604696 kubelet[2580]: I0114 01:23:22.604645 2580 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-aufav.gb1.brightbox.com" Jan 14 01:23:22.610721 kubelet[2580]: E0114 01:23:22.610672 2580 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-aufav.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-aufav.gb1.brightbox.com" Jan 14 01:23:22.610721 kubelet[2580]: I0114 01:23:22.610708 2580 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-aufav.gb1.brightbox.com" Jan 14 01:23:22.613805 kubelet[2580]: E0114 01:23:22.613646 2580 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-aufav.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-aufav.gb1.brightbox.com" Jan 14 01:23:22.613805 kubelet[2580]: I0114 01:23:22.613679 2580 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-aufav.gb1.brightbox.com" Jan 14 01:23:22.616534 kubelet[2580]: E0114 01:23:22.616468 2580 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-aufav.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-aufav.gb1.brightbox.com" Jan 14 01:23:22.963441 kubelet[2580]: I0114 01:23:22.963373 2580 apiserver.go:52] "Watching apiserver" Jan 14 01:23:23.002801 kubelet[2580]: I0114 01:23:23.002709 2580 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:23:23.110333 kubelet[2580]: I0114 01:23:23.110271 2580 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-srv-aufav.gb1.brightbox.com" Jan 14 01:23:23.114266 kubelet[2580]: E0114 01:23:23.114209 2580 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-aufav.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-aufav.gb1.brightbox.com" Jan 14 01:23:24.393102 systemd[1]: Reload requested from client PID 2850 ('systemctl') (unit session-12.scope)... Jan 14 01:23:24.393651 systemd[1]: Reloading... Jan 14 01:23:24.539816 zram_generator::config[2904]: No configuration found. Jan 14 01:23:24.926516 systemd[1]: Reloading finished in 532 ms. Jan 14 01:23:24.968051 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:23:24.981407 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 01:23:24.982108 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:23:24.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:24.988953 kernel: kauditd_printk_skb: 204 callbacks suppressed Jan 14 01:23:24.989057 kernel: audit: type=1131 audit(1768353804.980:395): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:24.985350 systemd[1]: kubelet.service: Consumed 1.694s CPU time, 128.5M memory peak. 
Jan 14 01:23:24.994062 kernel: audit: type=1334 audit(1768353804.988:396): prog-id=116 op=LOAD Jan 14 01:23:24.994148 kernel: audit: type=1334 audit(1768353804.989:397): prog-id=70 op=UNLOAD Jan 14 01:23:24.988000 audit: BPF prog-id=116 op=LOAD Jan 14 01:23:24.989000 audit: BPF prog-id=70 op=UNLOAD Jan 14 01:23:24.989744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:23:24.996000 audit: BPF prog-id=117 op=LOAD Jan 14 01:23:25.000808 kernel: audit: type=1334 audit(1768353804.996:398): prog-id=117 op=LOAD Jan 14 01:23:24.997000 audit: BPF prog-id=65 op=UNLOAD Jan 14 01:23:24.997000 audit: BPF prog-id=118 op=LOAD Jan 14 01:23:25.004153 kernel: audit: type=1334 audit(1768353804.997:399): prog-id=65 op=UNLOAD Jan 14 01:23:25.004223 kernel: audit: type=1334 audit(1768353804.997:400): prog-id=118 op=LOAD Jan 14 01:23:24.997000 audit: BPF prog-id=119 op=LOAD Jan 14 01:23:25.006518 kernel: audit: type=1334 audit(1768353804.997:401): prog-id=119 op=LOAD Jan 14 01:23:25.006596 kernel: audit: type=1334 audit(1768353804.997:402): prog-id=66 op=UNLOAD Jan 14 01:23:24.997000 audit: BPF prog-id=66 op=UNLOAD Jan 14 01:23:24.997000 audit: BPF prog-id=67 op=UNLOAD Jan 14 01:23:25.009233 kernel: audit: type=1334 audit(1768353804.997:403): prog-id=67 op=UNLOAD Jan 14 01:23:25.009328 kernel: audit: type=1334 audit(1768353804.998:404): prog-id=120 op=LOAD Jan 14 01:23:24.998000 audit: BPF prog-id=120 op=LOAD Jan 14 01:23:24.998000 audit: BPF prog-id=83 op=UNLOAD Jan 14 01:23:24.998000 audit: BPF prog-id=121 op=LOAD Jan 14 01:23:24.998000 audit: BPF prog-id=122 op=LOAD Jan 14 01:23:24.998000 audit: BPF prog-id=84 op=UNLOAD Jan 14 01:23:24.998000 audit: BPF prog-id=85 op=UNLOAD Jan 14 01:23:24.999000 audit: BPF prog-id=123 op=LOAD Jan 14 01:23:24.999000 audit: BPF prog-id=68 op=UNLOAD Jan 14 01:23:25.001000 audit: BPF prog-id=124 op=LOAD Jan 14 01:23:25.001000 audit: BPF prog-id=71 op=UNLOAD Jan 14 01:23:25.001000 audit: BPF prog-id=125 op=LOAD Jan 
14 01:23:25.001000 audit: BPF prog-id=126 op=LOAD Jan 14 01:23:25.001000 audit: BPF prog-id=72 op=UNLOAD Jan 14 01:23:25.001000 audit: BPF prog-id=73 op=UNLOAD Jan 14 01:23:25.002000 audit: BPF prog-id=127 op=LOAD Jan 14 01:23:25.002000 audit: BPF prog-id=77 op=UNLOAD Jan 14 01:23:25.003000 audit: BPF prog-id=128 op=LOAD Jan 14 01:23:25.010000 audit: BPF prog-id=78 op=UNLOAD Jan 14 01:23:25.010000 audit: BPF prog-id=129 op=LOAD Jan 14 01:23:25.010000 audit: BPF prog-id=130 op=LOAD Jan 14 01:23:25.010000 audit: BPF prog-id=79 op=UNLOAD Jan 14 01:23:25.010000 audit: BPF prog-id=80 op=UNLOAD Jan 14 01:23:25.011000 audit: BPF prog-id=131 op=LOAD Jan 14 01:23:25.011000 audit: BPF prog-id=132 op=LOAD Jan 14 01:23:25.011000 audit: BPF prog-id=81 op=UNLOAD Jan 14 01:23:25.011000 audit: BPF prog-id=82 op=UNLOAD Jan 14 01:23:25.012000 audit: BPF prog-id=133 op=LOAD Jan 14 01:23:25.012000 audit: BPF prog-id=74 op=UNLOAD Jan 14 01:23:25.012000 audit: BPF prog-id=134 op=LOAD Jan 14 01:23:25.013000 audit: BPF prog-id=135 op=LOAD Jan 14 01:23:25.013000 audit: BPF prog-id=75 op=UNLOAD Jan 14 01:23:25.013000 audit: BPF prog-id=76 op=UNLOAD Jan 14 01:23:25.013000 audit: BPF prog-id=136 op=LOAD Jan 14 01:23:25.014000 audit: BPF prog-id=69 op=UNLOAD Jan 14 01:23:25.397924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:23:25.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:25.414943 (kubelet)[2962]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:23:25.570018 kubelet[2962]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:23:25.570018 kubelet[2962]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 01:23:25.570018 kubelet[2962]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:23:25.571203 kubelet[2962]: I0114 01:23:25.570105 2962 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:23:25.596818 kubelet[2962]: I0114 01:23:25.596588 2962 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 01:23:25.596818 kubelet[2962]: I0114 01:23:25.596630 2962 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:23:25.597339 kubelet[2962]: I0114 01:23:25.597314 2962 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 01:23:25.601545 kubelet[2962]: I0114 01:23:25.601516 2962 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 14 01:23:25.611494 kubelet[2962]: I0114 01:23:25.611466 2962 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:23:25.632400 kubelet[2962]: I0114 01:23:25.632003 2962 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:23:25.642294 kubelet[2962]: I0114 01:23:25.642258 2962 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 01:23:25.643099 kubelet[2962]: I0114 01:23:25.643044 2962 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:23:25.643779 kubelet[2962]: I0114 01:23:25.643278 2962 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-aufav.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:23:25.643779 kubelet[2962]: I0114 01:23:25.643587 2962 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 14 01:23:25.643779 kubelet[2962]: I0114 01:23:25.643605 2962 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 01:23:25.652465 kubelet[2962]: I0114 01:23:25.652128 2962 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:23:25.659855 kubelet[2962]: I0114 01:23:25.659688 2962 kubelet.go:446] "Attempting to sync node with API server" Jan 14 01:23:25.659855 kubelet[2962]: I0114 01:23:25.659742 2962 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:23:25.659855 kubelet[2962]: I0114 01:23:25.659808 2962 kubelet.go:352] "Adding apiserver pod source" Jan 14 01:23:25.660811 kubelet[2962]: I0114 01:23:25.660114 2962 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:23:25.674131 kubelet[2962]: I0114 01:23:25.673992 2962 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:23:25.681562 kubelet[2962]: I0114 01:23:25.681536 2962 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 01:23:25.690140 kubelet[2962]: I0114 01:23:25.690115 2962 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:23:25.690408 kubelet[2962]: I0114 01:23:25.690389 2962 server.go:1287] "Started kubelet" Jan 14 01:23:25.697838 kubelet[2962]: I0114 01:23:25.691172 2962 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:23:25.697838 kubelet[2962]: I0114 01:23:25.697174 2962 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:23:25.697838 kubelet[2962]: I0114 01:23:25.694440 2962 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:23:25.703531 kubelet[2962]: I0114 01:23:25.703507 2962 server.go:479] "Adding debug handlers to kubelet server" Jan 14 01:23:25.708569 kubelet[2962]: I0114 
01:23:25.694389 2962 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:23:25.711433 kubelet[2962]: I0114 01:23:25.694570 2962 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:23:25.714204 kubelet[2962]: I0114 01:23:25.712371 2962 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:23:25.714775 kubelet[2962]: I0114 01:23:25.712384 2962 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:23:25.715348 kubelet[2962]: I0114 01:23:25.715327 2962 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:23:25.716960 kubelet[2962]: I0114 01:23:25.716220 2962 factory.go:221] Registration of the systemd container factory successfully Jan 14 01:23:25.724843 kubelet[2962]: I0114 01:23:25.724589 2962 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:23:25.725038 kubelet[2962]: E0114 01:23:25.716324 2962 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:23:25.732882 kubelet[2962]: I0114 01:23:25.732851 2962 factory.go:221] Registration of the containerd container factory successfully Jan 14 01:23:25.778413 kubelet[2962]: I0114 01:23:25.778076 2962 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 01:23:25.790320 kubelet[2962]: I0114 01:23:25.790273 2962 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 14 01:23:25.792827 kubelet[2962]: I0114 01:23:25.792736 2962 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 01:23:25.792910 kubelet[2962]: I0114 01:23:25.792832 2962 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 01:23:25.792910 kubelet[2962]: I0114 01:23:25.792848 2962 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 01:23:25.797392 kubelet[2962]: E0114 01:23:25.795397 2962 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:23:25.838354 kubelet[2962]: I0114 01:23:25.838297 2962 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:23:25.838354 kubelet[2962]: I0114 01:23:25.838326 2962 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:23:25.838615 kubelet[2962]: I0114 01:23:25.838407 2962 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:23:25.839130 kubelet[2962]: I0114 01:23:25.838804 2962 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 01:23:25.839130 kubelet[2962]: I0114 01:23:25.838830 2962 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 01:23:25.839130 kubelet[2962]: I0114 01:23:25.838915 2962 policy_none.go:49] "None policy: Start" Jan 14 01:23:25.839130 kubelet[2962]: I0114 01:23:25.838943 2962 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:23:25.839130 kubelet[2962]: I0114 01:23:25.839005 2962 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:23:25.839353 kubelet[2962]: I0114 01:23:25.839261 2962 state_mem.go:75] "Updated machine memory state" Jan 14 01:23:25.851367 kubelet[2962]: I0114 01:23:25.851271 2962 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 01:23:25.851655 kubelet[2962]: I0114 
01:23:25.851594 2962 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:23:25.851655 kubelet[2962]: I0114 01:23:25.851618 2962 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:23:25.852915 kubelet[2962]: I0114 01:23:25.852720 2962 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:23:25.861009 kubelet[2962]: E0114 01:23:25.859148 2962 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 01:23:25.897846 kubelet[2962]: I0114 01:23:25.896865 2962 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-aufav.gb1.brightbox.com" Jan 14 01:23:25.903318 kubelet[2962]: I0114 01:23:25.900140 2962 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-aufav.gb1.brightbox.com" Jan 14 01:23:25.903318 kubelet[2962]: I0114 01:23:25.900704 2962 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-aufav.gb1.brightbox.com" Jan 14 01:23:25.920055 kubelet[2962]: I0114 01:23:25.918912 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6637f0d529935bd1d2549d56ce0fc202-k8s-certs\") pod \"kube-apiserver-srv-aufav.gb1.brightbox.com\" (UID: \"6637f0d529935bd1d2549d56ce0fc202\") " pod="kube-system/kube-apiserver-srv-aufav.gb1.brightbox.com" Jan 14 01:23:25.920055 kubelet[2962]: I0114 01:23:25.918997 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6637f0d529935bd1d2549d56ce0fc202-usr-share-ca-certificates\") pod \"kube-apiserver-srv-aufav.gb1.brightbox.com\" (UID: \"6637f0d529935bd1d2549d56ce0fc202\") " 
pod="kube-system/kube-apiserver-srv-aufav.gb1.brightbox.com" Jan 14 01:23:25.920055 kubelet[2962]: I0114 01:23:25.919037 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61ba814d75050e41b9d61d2ffb14513c-ca-certs\") pod \"kube-controller-manager-srv-aufav.gb1.brightbox.com\" (UID: \"61ba814d75050e41b9d61d2ffb14513c\") " pod="kube-system/kube-controller-manager-srv-aufav.gb1.brightbox.com" Jan 14 01:23:25.920055 kubelet[2962]: I0114 01:23:25.919106 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/61ba814d75050e41b9d61d2ffb14513c-flexvolume-dir\") pod \"kube-controller-manager-srv-aufav.gb1.brightbox.com\" (UID: \"61ba814d75050e41b9d61d2ffb14513c\") " pod="kube-system/kube-controller-manager-srv-aufav.gb1.brightbox.com" Jan 14 01:23:25.920055 kubelet[2962]: I0114 01:23:25.919165 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61ba814d75050e41b9d61d2ffb14513c-kubeconfig\") pod \"kube-controller-manager-srv-aufav.gb1.brightbox.com\" (UID: \"61ba814d75050e41b9d61d2ffb14513c\") " pod="kube-system/kube-controller-manager-srv-aufav.gb1.brightbox.com" Jan 14 01:23:25.920354 kubelet[2962]: I0114 01:23:25.919198 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61ba814d75050e41b9d61d2ffb14513c-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-aufav.gb1.brightbox.com\" (UID: \"61ba814d75050e41b9d61d2ffb14513c\") " pod="kube-system/kube-controller-manager-srv-aufav.gb1.brightbox.com" Jan 14 01:23:25.920354 kubelet[2962]: I0114 01:23:25.919256 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4eb397f3cff0f102d4a23e8cff3b21de-kubeconfig\") pod \"kube-scheduler-srv-aufav.gb1.brightbox.com\" (UID: \"4eb397f3cff0f102d4a23e8cff3b21de\") " pod="kube-system/kube-scheduler-srv-aufav.gb1.brightbox.com" Jan 14 01:23:25.920354 kubelet[2962]: I0114 01:23:25.919285 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6637f0d529935bd1d2549d56ce0fc202-ca-certs\") pod \"kube-apiserver-srv-aufav.gb1.brightbox.com\" (UID: \"6637f0d529935bd1d2549d56ce0fc202\") " pod="kube-system/kube-apiserver-srv-aufav.gb1.brightbox.com" Jan 14 01:23:25.920354 kubelet[2962]: I0114 01:23:25.919352 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61ba814d75050e41b9d61d2ffb14513c-k8s-certs\") pod \"kube-controller-manager-srv-aufav.gb1.brightbox.com\" (UID: \"61ba814d75050e41b9d61d2ffb14513c\") " pod="kube-system/kube-controller-manager-srv-aufav.gb1.brightbox.com" Jan 14 01:23:25.921954 kubelet[2962]: W0114 01:23:25.920847 2962 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 01:23:25.924363 kubelet[2962]: W0114 01:23:25.920851 2962 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 01:23:25.924363 kubelet[2962]: W0114 01:23:25.920885 2962 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 01:23:25.988214 kubelet[2962]: I0114 01:23:25.988106 2962 kubelet_node_status.go:75] "Attempting to register node" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:26.002807 kubelet[2962]: I0114 
01:23:26.002756 2962 kubelet_node_status.go:124] "Node was previously registered" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:26.003376 kubelet[2962]: I0114 01:23:26.003259 2962 kubelet_node_status.go:78] "Successfully registered node" node="srv-aufav.gb1.brightbox.com" Jan 14 01:23:26.673551 kubelet[2962]: I0114 01:23:26.673226 2962 apiserver.go:52] "Watching apiserver" Jan 14 01:23:26.715533 kubelet[2962]: I0114 01:23:26.715460 2962 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:23:26.837171 kubelet[2962]: I0114 01:23:26.837087 2962 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-aufav.gb1.brightbox.com" Jan 14 01:23:26.844774 kubelet[2962]: W0114 01:23:26.844685 2962 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 01:23:26.845319 kubelet[2962]: E0114 01:23:26.845085 2962 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-aufav.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-aufav.gb1.brightbox.com" Jan 14 01:23:26.891886 kubelet[2962]: I0114 01:23:26.891804 2962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-aufav.gb1.brightbox.com" podStartSLOduration=1.8917574209999999 podStartE2EDuration="1.891757421s" podCreationTimestamp="2026-01-14 01:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:23:26.891398409 +0000 UTC m=+1.431256423" watchObservedRunningTime="2026-01-14 01:23:26.891757421 +0000 UTC m=+1.431615411" Jan 14 01:23:26.912323 kubelet[2962]: I0114 01:23:26.912252 2962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-aufav.gb1.brightbox.com" 
podStartSLOduration=1.912212115 podStartE2EDuration="1.912212115s" podCreationTimestamp="2026-01-14 01:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:23:26.9110288 +0000 UTC m=+1.450886808" watchObservedRunningTime="2026-01-14 01:23:26.912212115 +0000 UTC m=+1.452070106" Jan 14 01:23:26.912597 kubelet[2962]: I0114 01:23:26.912371 2962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-aufav.gb1.brightbox.com" podStartSLOduration=1.9123597399999999 podStartE2EDuration="1.91235974s" podCreationTimestamp="2026-01-14 01:23:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:23:26.901917703 +0000 UTC m=+1.441775712" watchObservedRunningTime="2026-01-14 01:23:26.91235974 +0000 UTC m=+1.452217743" Jan 14 01:23:29.798103 kubelet[2962]: I0114 01:23:29.797906 2962 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 01:23:29.799172 containerd[1645]: time="2026-01-14T01:23:29.799024494Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 01:23:29.800150 kubelet[2962]: I0114 01:23:29.799259 2962 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 01:23:30.725601 systemd[1]: Created slice kubepods-besteffort-podd67a5b8b_fb6d_4d22_8cdd_06446932981f.slice - libcontainer container kubepods-besteffort-podd67a5b8b_fb6d_4d22_8cdd_06446932981f.slice. 
Jan 14 01:23:30.748699 kubelet[2962]: I0114 01:23:30.748631 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvmv5\" (UniqueName: \"kubernetes.io/projected/d67a5b8b-fb6d-4d22-8cdd-06446932981f-kube-api-access-bvmv5\") pod \"kube-proxy-kbgsx\" (UID: \"d67a5b8b-fb6d-4d22-8cdd-06446932981f\") " pod="kube-system/kube-proxy-kbgsx" Jan 14 01:23:30.748857 kubelet[2962]: I0114 01:23:30.748710 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d67a5b8b-fb6d-4d22-8cdd-06446932981f-kube-proxy\") pod \"kube-proxy-kbgsx\" (UID: \"d67a5b8b-fb6d-4d22-8cdd-06446932981f\") " pod="kube-system/kube-proxy-kbgsx" Jan 14 01:23:30.748857 kubelet[2962]: I0114 01:23:30.748754 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d67a5b8b-fb6d-4d22-8cdd-06446932981f-xtables-lock\") pod \"kube-proxy-kbgsx\" (UID: \"d67a5b8b-fb6d-4d22-8cdd-06446932981f\") " pod="kube-system/kube-proxy-kbgsx" Jan 14 01:23:30.749011 kubelet[2962]: I0114 01:23:30.748860 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d67a5b8b-fb6d-4d22-8cdd-06446932981f-lib-modules\") pod \"kube-proxy-kbgsx\" (UID: \"d67a5b8b-fb6d-4d22-8cdd-06446932981f\") " pod="kube-system/kube-proxy-kbgsx" Jan 14 01:23:30.867134 systemd[1]: Created slice kubepods-besteffort-podbbe62750_8893_4c55_ba42_c904527196f2.slice - libcontainer container kubepods-besteffort-podbbe62750_8893_4c55_ba42_c904527196f2.slice. 
Jan 14 01:23:30.950210 kubelet[2962]: I0114 01:23:30.950126 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ccm6\" (UniqueName: \"kubernetes.io/projected/bbe62750-8893-4c55-ba42-c904527196f2-kube-api-access-4ccm6\") pod \"tigera-operator-7dcd859c48-vgfvq\" (UID: \"bbe62750-8893-4c55-ba42-c904527196f2\") " pod="tigera-operator/tigera-operator-7dcd859c48-vgfvq" Jan 14 01:23:30.950210 kubelet[2962]: I0114 01:23:30.950184 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bbe62750-8893-4c55-ba42-c904527196f2-var-lib-calico\") pod \"tigera-operator-7dcd859c48-vgfvq\" (UID: \"bbe62750-8893-4c55-ba42-c904527196f2\") " pod="tigera-operator/tigera-operator-7dcd859c48-vgfvq" Jan 14 01:23:31.039815 containerd[1645]: time="2026-01-14T01:23:31.039590137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kbgsx,Uid:d67a5b8b-fb6d-4d22-8cdd-06446932981f,Namespace:kube-system,Attempt:0,}" Jan 14 01:23:31.090053 containerd[1645]: time="2026-01-14T01:23:31.089989071Z" level=info msg="connecting to shim 52201c0a705308999ee20328ad2a06b327e9f71454007e67b976140abc1f217c" address="unix:///run/containerd/s/4445b4d6c81621702ef398200ce23b77b53073efa3d35a5e65495564ab7d1ad7" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:23:31.140072 systemd[1]: Started cri-containerd-52201c0a705308999ee20328ad2a06b327e9f71454007e67b976140abc1f217c.scope - libcontainer container 52201c0a705308999ee20328ad2a06b327e9f71454007e67b976140abc1f217c. 
Jan 14 01:23:31.156000 audit: BPF prog-id=137 op=LOAD Jan 14 01:23:31.160862 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 14 01:23:31.160947 kernel: audit: type=1334 audit(1768353811.156:439): prog-id=137 op=LOAD Jan 14 01:23:31.164364 kernel: audit: type=1334 audit(1768353811.162:440): prog-id=138 op=LOAD Jan 14 01:23:31.162000 audit: BPF prog-id=138 op=LOAD Jan 14 01:23:31.162000 audit[3031]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3018 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532323031633061373035333038393939656532303332386164326130 Jan 14 01:23:31.173413 kernel: audit: type=1300 audit(1768353811.162:440): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=3018 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.173492 kernel: audit: type=1327 audit(1768353811.162:440): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532323031633061373035333038393939656532303332386164326130 Jan 14 01:23:31.163000 audit: BPF prog-id=138 op=UNLOAD Jan 14 01:23:31.177313 kernel: audit: type=1334 audit(1768353811.163:441): prog-id=138 op=UNLOAD Jan 14 01:23:31.163000 audit[3031]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3018 pid=3031 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.180171 kernel: audit: type=1300 audit(1768353811.163:441): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3018 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532323031633061373035333038393939656532303332386164326130 Jan 14 01:23:31.187866 containerd[1645]: time="2026-01-14T01:23:31.187523499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-vgfvq,Uid:bbe62750-8893-4c55-ba42-c904527196f2,Namespace:tigera-operator,Attempt:0,}" Jan 14 01:23:31.189807 kernel: audit: type=1327 audit(1768353811.163:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532323031633061373035333038393939656532303332386164326130 Jan 14 01:23:31.163000 audit: BPF prog-id=139 op=LOAD Jan 14 01:23:31.163000 audit[3031]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3018 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.193578 kernel: audit: type=1334 audit(1768353811.163:442): prog-id=139 op=LOAD Jan 14 01:23:31.193645 kernel: audit: type=1300 audit(1768353811.163:442): arch=c000003e syscall=321 
success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=3018 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.200121 kernel: audit: type=1327 audit(1768353811.163:442): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532323031633061373035333038393939656532303332386164326130 Jan 14 01:23:31.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532323031633061373035333038393939656532303332386164326130 Jan 14 01:23:31.163000 audit: BPF prog-id=140 op=LOAD Jan 14 01:23:31.163000 audit[3031]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=3018 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532323031633061373035333038393939656532303332386164326130 Jan 14 01:23:31.163000 audit: BPF prog-id=140 op=UNLOAD Jan 14 01:23:31.163000 audit[3031]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3018 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.163000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532323031633061373035333038393939656532303332386164326130 Jan 14 01:23:31.163000 audit: BPF prog-id=139 op=UNLOAD Jan 14 01:23:31.163000 audit[3031]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3018 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532323031633061373035333038393939656532303332386164326130 Jan 14 01:23:31.164000 audit: BPF prog-id=141 op=LOAD Jan 14 01:23:31.164000 audit[3031]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=3018 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532323031633061373035333038393939656532303332386164326130 Jan 14 01:23:31.229263 containerd[1645]: time="2026-01-14T01:23:31.228624814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kbgsx,Uid:d67a5b8b-fb6d-4d22-8cdd-06446932981f,Namespace:kube-system,Attempt:0,} returns sandbox id \"52201c0a705308999ee20328ad2a06b327e9f71454007e67b976140abc1f217c\"" Jan 14 01:23:31.234901 containerd[1645]: 
time="2026-01-14T01:23:31.234761290Z" level=info msg="CreateContainer within sandbox \"52201c0a705308999ee20328ad2a06b327e9f71454007e67b976140abc1f217c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 01:23:31.237098 containerd[1645]: time="2026-01-14T01:23:31.236526016Z" level=info msg="connecting to shim 27b3269286addb71c379350173c921d2e3ff62baed5ccafcd2231577c6409d05" address="unix:///run/containerd/s/527f1779cf2247a0197b57ca89e1f06dc89c5fef8dacac791f31e7a04dbc0a68" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:23:31.265702 containerd[1645]: time="2026-01-14T01:23:31.265633128Z" level=info msg="Container 7577baceb79415a2217ca1aaef2859cd04485ccd94aad3019c73eecbe23e77ed: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:23:31.281083 systemd[1]: Started cri-containerd-27b3269286addb71c379350173c921d2e3ff62baed5ccafcd2231577c6409d05.scope - libcontainer container 27b3269286addb71c379350173c921d2e3ff62baed5ccafcd2231577c6409d05. Jan 14 01:23:31.283814 containerd[1645]: time="2026-01-14T01:23:31.283736376Z" level=info msg="CreateContainer within sandbox \"52201c0a705308999ee20328ad2a06b327e9f71454007e67b976140abc1f217c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7577baceb79415a2217ca1aaef2859cd04485ccd94aad3019c73eecbe23e77ed\"" Jan 14 01:23:31.285156 containerd[1645]: time="2026-01-14T01:23:31.284535473Z" level=info msg="StartContainer for \"7577baceb79415a2217ca1aaef2859cd04485ccd94aad3019c73eecbe23e77ed\"" Jan 14 01:23:31.289340 containerd[1645]: time="2026-01-14T01:23:31.289306660Z" level=info msg="connecting to shim 7577baceb79415a2217ca1aaef2859cd04485ccd94aad3019c73eecbe23e77ed" address="unix:///run/containerd/s/4445b4d6c81621702ef398200ce23b77b53073efa3d35a5e65495564ab7d1ad7" protocol=ttrpc version=3 Jan 14 01:23:31.312000 audit: BPF prog-id=142 op=LOAD Jan 14 01:23:31.313000 audit: BPF prog-id=143 op=LOAD Jan 14 01:23:31.313000 audit[3076]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 
a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3064 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.313000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237623332363932383661646462373163333739333530313733633932 Jan 14 01:23:31.314000 audit: BPF prog-id=143 op=UNLOAD Jan 14 01:23:31.314000 audit[3076]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3064 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237623332363932383661646462373163333739333530313733633932 Jan 14 01:23:31.314000 audit: BPF prog-id=144 op=LOAD Jan 14 01:23:31.314000 audit[3076]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3064 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237623332363932383661646462373163333739333530313733633932 Jan 14 01:23:31.315000 audit: BPF prog-id=145 op=LOAD Jan 14 01:23:31.315000 audit[3076]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3064 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237623332363932383661646462373163333739333530313733633932 Jan 14 01:23:31.316000 audit: BPF prog-id=145 op=UNLOAD Jan 14 01:23:31.316000 audit[3076]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3064 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237623332363932383661646462373163333739333530313733633932 Jan 14 01:23:31.316000 audit: BPF prog-id=144 op=UNLOAD Jan 14 01:23:31.316000 audit[3076]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3064 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237623332363932383661646462373163333739333530313733633932 Jan 14 01:23:31.316000 audit: BPF prog-id=146 op=LOAD Jan 14 
01:23:31.316000 audit[3076]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3064 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237623332363932383661646462373163333739333530313733633932 Jan 14 01:23:31.331049 systemd[1]: Started cri-containerd-7577baceb79415a2217ca1aaef2859cd04485ccd94aad3019c73eecbe23e77ed.scope - libcontainer container 7577baceb79415a2217ca1aaef2859cd04485ccd94aad3019c73eecbe23e77ed. Jan 14 01:23:31.409000 audit: BPF prog-id=147 op=LOAD Jan 14 01:23:31.409000 audit[3095]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3018 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735373762616365623739343135613232313763613161616566323835 Jan 14 01:23:31.409000 audit: BPF prog-id=148 op=LOAD Jan 14 01:23:31.409000 audit[3095]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3018 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.409000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735373762616365623739343135613232313763613161616566323835 Jan 14 01:23:31.409000 audit: BPF prog-id=148 op=UNLOAD Jan 14 01:23:31.409000 audit[3095]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3018 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735373762616365623739343135613232313763613161616566323835 Jan 14 01:23:31.409000 audit: BPF prog-id=147 op=UNLOAD Jan 14 01:23:31.409000 audit[3095]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3018 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735373762616365623739343135613232313763613161616566323835 Jan 14 01:23:31.409000 audit: BPF prog-id=149 op=LOAD Jan 14 01:23:31.409000 audit[3095]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3018 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:23:31.409000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735373762616365623739343135613232313763613161616566323835 Jan 14 01:23:31.416834 containerd[1645]: time="2026-01-14T01:23:31.416747854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-vgfvq,Uid:bbe62750-8893-4c55-ba42-c904527196f2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"27b3269286addb71c379350173c921d2e3ff62baed5ccafcd2231577c6409d05\"" Jan 14 01:23:31.423895 containerd[1645]: time="2026-01-14T01:23:31.423818161Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 01:23:31.452431 containerd[1645]: time="2026-01-14T01:23:31.452301095Z" level=info msg="StartContainer for \"7577baceb79415a2217ca1aaef2859cd04485ccd94aad3019c73eecbe23e77ed\" returns successfully" Jan 14 01:23:31.943000 audit[3164]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:31.943000 audit[3164]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd8a716b50 a2=0 a3=7ffd8a716b3c items=0 ppid=3108 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.943000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:23:31.947000 audit[3166]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3166 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:31.947000 audit[3166]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce6eda8a0 a2=0 a3=7ffce6eda88c items=0 ppid=3108 pid=3166 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.947000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:23:31.947000 audit[3167]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3167 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:31.947000 audit[3167]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcedea6550 a2=0 a3=7ffcedea653c items=0 ppid=3108 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.947000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:23:31.950000 audit[3169]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3169 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:31.951000 audit[3168]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3168 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:31.951000 audit[3168]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff914d9f70 a2=0 a3=ea5c7c1f272a9ec items=0 ppid=3108 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.951000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:23:31.950000 audit[3169]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff95de9820 a2=0 
a3=7fff95de980c items=0 ppid=3108 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.950000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:23:31.953000 audit[3170]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:31.953000 audit[3170]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd3cb62230 a2=0 a3=7ffd3cb6221c items=0 ppid=3108 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:31.953000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:23:32.061000 audit[3171]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3171 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.061000 audit[3171]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd2338c680 a2=0 a3=7ffd2338c66c items=0 ppid=3108 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.061000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:23:32.067000 audit[3173]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.067000 audit[3173]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=752 a0=3 a1=7ffef4b25cc0 a2=0 a3=7ffef4b25cac items=0 ppid=3108 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.067000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 01:23:32.073000 audit[3176]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.073000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffc7cf2640 a2=0 a3=7fffc7cf262c items=0 ppid=3108 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.073000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 01:23:32.075000 audit[3177]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3177 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.075000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff72824180 a2=0 a3=7fff7282416c items=0 ppid=3108 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.075000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:23:32.079000 audit[3179]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3179 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.079000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc77d51cc0 a2=0 a3=7ffc77d51cac items=0 ppid=3108 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.079000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:23:32.081000 audit[3180]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.081000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc63c0a550 a2=0 a3=7ffc63c0a53c items=0 ppid=3108 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.081000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:23:32.086000 audit[3182]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.086000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe468adc80 a2=0 a3=7ffe468adc6c items=0 ppid=3108 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.086000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:23:32.092000 audit[3185]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.092000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcc38fd550 a2=0 a3=7ffcc38fd53c items=0 ppid=3108 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.092000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 01:23:32.095000 audit[3186]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.095000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeaa10c230 a2=0 a3=7ffeaa10c21c items=0 ppid=3108 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.095000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:23:32.099000 audit[3188]: NETFILTER_CFG table=filter:69 family=2 entries=1 
op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.099000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff80c3d5c0 a2=0 a3=7fff80c3d5ac items=0 ppid=3108 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.099000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:23:32.101000 audit[3189]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.101000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3af9b100 a2=0 a3=7ffe3af9b0ec items=0 ppid=3108 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.101000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:23:32.106000 audit[3191]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.106000 audit[3191]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffecda318a0 a2=0 a3=7ffecda3188c items=0 ppid=3108 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.106000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:23:32.112000 audit[3194]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3194 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.112000 audit[3194]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffe6a368e0 a2=0 a3=7fffe6a368cc items=0 ppid=3108 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.112000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:23:32.118000 audit[3197]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3197 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.118000 audit[3197]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe66159920 a2=0 a3=7ffe6615990c items=0 ppid=3108 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.118000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:23:32.120000 audit[3198]: NETFILTER_CFG table=nat:74 family=2 entries=1 
op=nft_register_chain pid=3198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.120000 audit[3198]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffca2571080 a2=0 a3=7ffca257106c items=0 ppid=3108 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.120000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:23:32.127000 audit[3200]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.127000 audit[3200]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe8f879580 a2=0 a3=7ffe8f87956c items=0 ppid=3108 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.127000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:23:32.133000 audit[3203]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.133000 audit[3203]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff174712a0 a2=0 a3=7fff1747128c items=0 ppid=3108 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.133000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:23:32.136000 audit[3204]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.136000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffffa584d30 a2=0 a3=7ffffa584d1c items=0 ppid=3108 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.136000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:23:32.140000 audit[3206]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3206 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:23:32.140000 audit[3206]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fffdbb8aa70 a2=0 a3=7fffdbb8aa5c items=0 ppid=3108 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.140000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:23:32.172000 audit[3212]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:32.172000 audit[3212]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdf2ebe470 a2=0 a3=7ffdf2ebe45c 
items=0 ppid=3108 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.172000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:32.181000 audit[3212]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3212 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:32.181000 audit[3212]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffdf2ebe470 a2=0 a3=7ffdf2ebe45c items=0 ppid=3108 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.181000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:32.184000 audit[3217]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3217 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.184000 audit[3217]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd5c1894f0 a2=0 a3=7ffd5c1894dc items=0 ppid=3108 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.184000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:23:32.190000 audit[3219]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3219 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.190000 audit[3219]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=836 a0=3 a1=7fff998122d0 a2=0 a3=7fff998122bc items=0 ppid=3108 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.190000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 01:23:32.196000 audit[3222]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.196000 audit[3222]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffbc4823e0 a2=0 a3=7fffbc4823cc items=0 ppid=3108 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.196000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 01:23:32.199000 audit[3223]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3223 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.199000 audit[3223]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe21345420 a2=0 a3=7ffe2134540c items=0 ppid=3108 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.199000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:23:32.204000 audit[3225]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.204000 audit[3225]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffff9455db0 a2=0 a3=7ffff9455d9c items=0 ppid=3108 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.204000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:23:32.206000 audit[3226]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3226 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.206000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc650c7780 a2=0 a3=7ffc650c776c items=0 ppid=3108 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.206000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:23:32.210000 audit[3228]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.210000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc9af65490 a2=0 a3=7ffc9af6547c items=0 ppid=3108 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.210000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 01:23:32.216000 audit[3231]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3231 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.216000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff8120f710 a2=0 a3=7fff8120f6fc items=0 ppid=3108 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.216000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:23:32.218000 audit[3232]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.218000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7c82dcc0 a2=0 a3=7ffd7c82dcac items=0 ppid=3108 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.218000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:23:32.222000 audit[3234]: NETFILTER_CFG 
table=filter:90 family=10 entries=1 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.222000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff45fa0690 a2=0 a3=7fff45fa067c items=0 ppid=3108 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.222000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:23:32.224000 audit[3235]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3235 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.224000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd129adcf0 a2=0 a3=7ffd129adcdc items=0 ppid=3108 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.224000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:23:32.229000 audit[3237]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.229000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffecd427ca0 a2=0 a3=7ffecd427c8c items=0 ppid=3108 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.229000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:23:32.237000 audit[3240]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.237000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff5ec59d10 a2=0 a3=7fff5ec59cfc items=0 ppid=3108 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.237000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:23:32.245000 audit[3243]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3243 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.245000 audit[3243]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff73b74690 a2=0 a3=7fff73b7467c items=0 ppid=3108 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.245000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 01:23:32.247000 audit[3244]: NETFILTER_CFG table=nat:95 family=10 
entries=1 op=nft_register_chain pid=3244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.247000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffc732b870 a2=0 a3=7fffc732b85c items=0 ppid=3108 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.247000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:23:32.251000 audit[3246]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3246 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.251000 audit[3246]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff6e4fae90 a2=0 a3=7fff6e4fae7c items=0 ppid=3108 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.251000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:23:32.257000 audit[3249]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.257000 audit[3249]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff080ef680 a2=0 a3=7fff080ef66c items=0 ppid=3108 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.257000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:23:32.259000 audit[3250]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.259000 audit[3250]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff22af9010 a2=0 a3=7fff22af8ffc items=0 ppid=3108 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.259000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:23:32.263000 audit[3252]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3252 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.263000 audit[3252]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff6e353b40 a2=0 a3=7fff6e353b2c items=0 ppid=3108 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.263000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:23:32.265000 audit[3253]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3253 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.265000 audit[3253]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca943ddf0 a2=0 
a3=7ffca943dddc items=0 ppid=3108 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.265000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:23:32.269000 audit[3255]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3255 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.269000 audit[3255]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffedddc78d0 a2=0 a3=7ffedddc78bc items=0 ppid=3108 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.269000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:23:32.275000 audit[3258]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3258 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:23:32.275000 audit[3258]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd01f6d710 a2=0 a3=7ffd01f6d6fc items=0 ppid=3108 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.275000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:23:32.285000 audit[3260]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3260 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:23:32.285000 audit[3260]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffeabc51720 a2=0 a3=7ffeabc5170c items=0 ppid=3108 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.285000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:32.286000 audit[3260]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3260 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:23:32.286000 audit[3260]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffeabc51720 a2=0 a3=7ffeabc5170c items=0 ppid=3108 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:32.286000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:33.064677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1021390869.mount: Deactivated successfully. 
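The `PROCTITLE` fields in the audit records above are the process argv, hex-encoded with NUL byte separators (the kernel truncates the record at 128 bytes, which is why several entries end mid-word). A minimal decoder sketch — the function name is illustrative, not part of any audit tooling:

```python
def decode_proctitle(hex_string: str) -> list[str]:
    """Turn a kauditd PROCTITLE hex blob back into an argv list.

    Arguments in the original command line are separated by NUL
    bytes; invalid UTF-8 (possible after kernel truncation) is
    replaced rather than raised.
    """
    return bytes.fromhex(hex_string).decode("utf-8", "replace").split("\x00")


# Example: the iptables-restore PROCTITLE repeated in the log above.
argv = decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
)
print(argv)
# → ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```

This is the same command line `ausearch -i` would reconstruct; here it shows kube-proxy invoking `iptables-restore` with a 5-second lock wait (`-w 5 -W 100000`) while preserving existing rules and counters (`--noflush --counters`).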
Jan 14 01:23:35.028856 containerd[1645]: time="2026-01-14T01:23:35.027755971Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:23:35.030898 containerd[1645]: time="2026-01-14T01:23:35.030866896Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25055587" Jan 14 01:23:35.032847 containerd[1645]: time="2026-01-14T01:23:35.032730607Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:23:35.049135 containerd[1645]: time="2026-01-14T01:23:35.047748961Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:23:35.049135 containerd[1645]: time="2026-01-14T01:23:35.048922457Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.625031879s" Jan 14 01:23:35.049135 containerd[1645]: time="2026-01-14T01:23:35.048973543Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 01:23:35.054737 containerd[1645]: time="2026-01-14T01:23:35.054675695Z" level=info msg="CreateContainer within sandbox \"27b3269286addb71c379350173c921d2e3ff62baed5ccafcd2231577c6409d05\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 01:23:35.079357 containerd[1645]: time="2026-01-14T01:23:35.078154968Z" level=info msg="Container 
8b89ef4dcf43559fdbc20ade92fae65d0a7627ae0c9f2162223fff31eb9c7591: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:23:35.086679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3736310586.mount: Deactivated successfully. Jan 14 01:23:35.115075 containerd[1645]: time="2026-01-14T01:23:35.114974010Z" level=info msg="CreateContainer within sandbox \"27b3269286addb71c379350173c921d2e3ff62baed5ccafcd2231577c6409d05\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8b89ef4dcf43559fdbc20ade92fae65d0a7627ae0c9f2162223fff31eb9c7591\"" Jan 14 01:23:35.116858 containerd[1645]: time="2026-01-14T01:23:35.116097954Z" level=info msg="StartContainer for \"8b89ef4dcf43559fdbc20ade92fae65d0a7627ae0c9f2162223fff31eb9c7591\"" Jan 14 01:23:35.120287 containerd[1645]: time="2026-01-14T01:23:35.120183087Z" level=info msg="connecting to shim 8b89ef4dcf43559fdbc20ade92fae65d0a7627ae0c9f2162223fff31eb9c7591" address="unix:///run/containerd/s/527f1779cf2247a0197b57ca89e1f06dc89c5fef8dacac791f31e7a04dbc0a68" protocol=ttrpc version=3 Jan 14 01:23:35.161160 systemd[1]: Started cri-containerd-8b89ef4dcf43559fdbc20ade92fae65d0a7627ae0c9f2162223fff31eb9c7591.scope - libcontainer container 8b89ef4dcf43559fdbc20ade92fae65d0a7627ae0c9f2162223fff31eb9c7591. 
Jan 14 01:23:35.191282 kubelet[2962]: I0114 01:23:35.190958 2962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kbgsx" podStartSLOduration=5.190929162 podStartE2EDuration="5.190929162s" podCreationTimestamp="2026-01-14 01:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:23:31.864922124 +0000 UTC m=+6.404780134" watchObservedRunningTime="2026-01-14 01:23:35.190929162 +0000 UTC m=+9.730787164" Jan 14 01:23:35.202000 audit: BPF prog-id=150 op=LOAD Jan 14 01:23:35.204000 audit: BPF prog-id=151 op=LOAD Jan 14 01:23:35.204000 audit[3269]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3064 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:35.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383965663464636634333535396664626332306164653932666165 Jan 14 01:23:35.204000 audit: BPF prog-id=151 op=UNLOAD Jan 14 01:23:35.204000 audit[3269]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3064 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:35.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383965663464636634333535396664626332306164653932666165 Jan 14 01:23:35.204000 audit: BPF 
prog-id=152 op=LOAD Jan 14 01:23:35.204000 audit[3269]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3064 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:35.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383965663464636634333535396664626332306164653932666165 Jan 14 01:23:35.204000 audit: BPF prog-id=153 op=LOAD Jan 14 01:23:35.204000 audit[3269]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3064 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:35.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383965663464636634333535396664626332306164653932666165 Jan 14 01:23:35.204000 audit: BPF prog-id=153 op=UNLOAD Jan 14 01:23:35.204000 audit[3269]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3064 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:35.204000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383965663464636634333535396664626332306164653932666165 Jan 14 01:23:35.204000 audit: BPF prog-id=152 op=UNLOAD Jan 14 01:23:35.204000 audit[3269]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3064 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:35.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383965663464636634333535396664626332306164653932666165 Jan 14 01:23:35.204000 audit: BPF prog-id=154 op=LOAD Jan 14 01:23:35.204000 audit[3269]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3064 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:35.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862383965663464636634333535396664626332306164653932666165 Jan 14 01:23:35.250584 containerd[1645]: time="2026-01-14T01:23:35.250428006Z" level=info msg="StartContainer for \"8b89ef4dcf43559fdbc20ade92fae65d0a7627ae0c9f2162223fff31eb9c7591\" returns successfully" Jan 14 01:23:35.894652 kubelet[2962]: I0114 01:23:35.894441 2962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="tigera-operator/tigera-operator-7dcd859c48-vgfvq" podStartSLOduration=2.264091976 podStartE2EDuration="5.894409722s" podCreationTimestamp="2026-01-14 01:23:30 +0000 UTC" firstStartedPulling="2026-01-14 01:23:31.420357594 +0000 UTC m=+5.960215571" lastFinishedPulling="2026-01-14 01:23:35.05067533 +0000 UTC m=+9.590533317" observedRunningTime="2026-01-14 01:23:35.878357543 +0000 UTC m=+10.418215552" watchObservedRunningTime="2026-01-14 01:23:35.894409722 +0000 UTC m=+10.434267714" Jan 14 01:23:40.718932 sudo[1951]: pam_unix(sudo:session): session closed for user root Jan 14 01:23:40.726854 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 14 01:23:40.727062 kernel: audit: type=1106 audit(1768353820.717:519): pid=1951 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:23:40.717000 audit[1951]: USER_END pid=1951 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:23:40.717000 audit[1951]: CRED_DISP pid=1951 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:23:40.736820 kernel: audit: type=1104 audit(1768353820.717:520): pid=1951 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:23:40.831388 sshd[1950]: Connection closed by 68.220.241.50 port 35452 Jan 14 01:23:40.834273 sshd-session[1946]: pam_unix(sshd:session): session closed for user core Jan 14 01:23:40.835000 audit[1946]: USER_END pid=1946 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:23:40.844812 kernel: audit: type=1106 audit(1768353820.835:521): pid=1946 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:23:40.847586 systemd[1]: sshd@8-10.230.32.214:22-68.220.241.50:35452.service: Deactivated successfully. Jan 14 01:23:40.835000 audit[1946]: CRED_DISP pid=1946 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:23:40.861810 kernel: audit: type=1104 audit(1768353820.835:522): pid=1946 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:23:40.863053 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 01:23:40.864282 systemd[1]: session-12.scope: Consumed 6.656s CPU time, 150.9M memory peak. 
Jan 14 01:23:40.872365 kernel: audit: type=1131 audit(1768353820.846:523): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.32.214:22-68.220.241.50:35452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:40.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.32.214:22-68.220.241.50:35452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:23:40.871299 systemd-logind[1619]: Session 12 logged out. Waiting for processes to exit. Jan 14 01:23:40.876097 systemd-logind[1619]: Removed session 12. Jan 14 01:23:41.748000 audit[3350]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3350 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:41.758773 kernel: audit: type=1325 audit(1768353821.748:524): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3350 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:41.748000 audit[3350]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffd9212c00 a2=0 a3=7fffd9212bec items=0 ppid=3108 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:41.774805 kernel: audit: type=1300 audit(1768353821.748:524): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffd9212c00 a2=0 a3=7fffd9212bec items=0 ppid=3108 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:41.748000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:41.782374 kernel: audit: type=1327 audit(1768353821.748:524): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:41.782454 kernel: audit: type=1325 audit(1768353821.775:525): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3350 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:41.775000 audit[3350]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3350 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:41.775000 audit[3350]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffd9212c00 a2=0 a3=0 items=0 ppid=3108 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:41.791276 kernel: audit: type=1300 audit(1768353821.775:525): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffd9212c00 a2=0 a3=0 items=0 ppid=3108 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:41.775000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:41.803000 audit[3352]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:41.803000 audit[3352]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff3c755830 a2=0 a3=7fff3c75581c items=0 ppid=3108 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:41.803000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:41.808000 audit[3352]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3352 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:41.808000 audit[3352]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff3c755830 a2=0 a3=0 items=0 ppid=3108 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:41.808000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:45.715000 audit[3355]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:45.715000 audit[3355]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd6353bd50 a2=0 a3=7ffd6353bd3c items=0 ppid=3108 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:45.715000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:45.730455 kernel: kauditd_printk_skb: 10 callbacks suppressed Jan 14 01:23:45.730628 kernel: audit: type=1325 audit(1768353825.721:529): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:45.721000 audit[3355]: NETFILTER_CFG table=nat:110 family=2 entries=12 
op=nft_register_rule pid=3355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:45.721000 audit[3355]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd6353bd50 a2=0 a3=0 items=0 ppid=3108 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:45.740821 kernel: audit: type=1300 audit(1768353825.721:529): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd6353bd50 a2=0 a3=0 items=0 ppid=3108 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:45.721000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:45.750804 kernel: audit: type=1327 audit(1768353825.721:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:45.790000 audit[3357]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:45.795804 kernel: audit: type=1325 audit(1768353825.790:530): table=filter:111 family=2 entries=19 op=nft_register_rule pid=3357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:45.790000 audit[3357]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff006a0f90 a2=0 a3=7fff006a0f7c items=0 ppid=3108 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:45.806805 kernel: audit: type=1300 audit(1768353825.790:530): arch=c000003e syscall=46 
success=yes exit=7480 a0=3 a1=7fff006a0f90 a2=0 a3=7fff006a0f7c items=0 ppid=3108 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:45.790000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:45.813418 kernel: audit: type=1327 audit(1768353825.790:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:45.813520 kernel: audit: type=1325 audit(1768353825.809:531): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:45.809000 audit[3357]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:45.809000 audit[3357]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff006a0f90 a2=0 a3=0 items=0 ppid=3108 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:45.824469 kernel: audit: type=1300 audit(1768353825.809:531): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff006a0f90 a2=0 a3=0 items=0 ppid=3108 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:45.824560 kernel: audit: type=1327 audit(1768353825.809:531): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:45.809000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:48.321771 systemd[1]: Created slice kubepods-besteffort-pode3154996_4894_4cd6_9a6e_d6e57530ab55.slice - libcontainer container kubepods-besteffort-pode3154996_4894_4cd6_9a6e_d6e57530ab55.slice. Jan 14 01:23:48.310000 audit[3360]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:48.352820 kernel: audit: type=1325 audit(1768353828.310:532): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:48.310000 audit[3360]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc64946890 a2=0 a3=7ffc6494687c items=0 ppid=3108 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:48.310000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:48.349000 audit[3360]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3360 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:48.349000 audit[3360]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc64946890 a2=0 a3=0 items=0 ppid=3108 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:48.349000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:48.383939 kubelet[2962]: I0114 01:23:48.383706 2962 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3154996-4894-4cd6-9a6e-d6e57530ab55-tigera-ca-bundle\") pod \"calico-typha-56f4576974-tqjts\" (UID: \"e3154996-4894-4cd6-9a6e-d6e57530ab55\") " pod="calico-system/calico-typha-56f4576974-tqjts" Jan 14 01:23:48.383939 kubelet[2962]: I0114 01:23:48.383834 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e3154996-4894-4cd6-9a6e-d6e57530ab55-typha-certs\") pod \"calico-typha-56f4576974-tqjts\" (UID: \"e3154996-4894-4cd6-9a6e-d6e57530ab55\") " pod="calico-system/calico-typha-56f4576974-tqjts" Jan 14 01:23:48.383939 kubelet[2962]: I0114 01:23:48.383881 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxmjg\" (UniqueName: \"kubernetes.io/projected/e3154996-4894-4cd6-9a6e-d6e57530ab55-kube-api-access-fxmjg\") pod \"calico-typha-56f4576974-tqjts\" (UID: \"e3154996-4894-4cd6-9a6e-d6e57530ab55\") " pod="calico-system/calico-typha-56f4576974-tqjts" Jan 14 01:23:48.406000 audit[3362]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:48.406000 audit[3362]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffe48a51d70 a2=0 a3=7ffe48a51d5c items=0 ppid=3108 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:48.406000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:48.411000 audit[3362]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3362 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 14 01:23:48.411000 audit[3362]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe48a51d70 a2=0 a3=0 items=0 ppid=3108 pid=3362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:48.411000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:48.475846 systemd[1]: Created slice kubepods-besteffort-podca6ddf59_6611_4f81_8268_7c908d7aae27.slice - libcontainer container kubepods-besteffort-podca6ddf59_6611_4f81_8268_7c908d7aae27.slice. Jan 14 01:23:48.585124 kubelet[2962]: I0114 01:23:48.584963 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lst7r\" (UniqueName: \"kubernetes.io/projected/ca6ddf59-6611-4f81-8268-7c908d7aae27-kube-api-access-lst7r\") pod \"calico-node-52jzt\" (UID: \"ca6ddf59-6611-4f81-8268-7c908d7aae27\") " pod="calico-system/calico-node-52jzt" Jan 14 01:23:48.585124 kubelet[2962]: I0114 01:23:48.585030 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca6ddf59-6611-4f81-8268-7c908d7aae27-tigera-ca-bundle\") pod \"calico-node-52jzt\" (UID: \"ca6ddf59-6611-4f81-8268-7c908d7aae27\") " pod="calico-system/calico-node-52jzt" Jan 14 01:23:48.585124 kubelet[2962]: I0114 01:23:48.585060 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ca6ddf59-6611-4f81-8268-7c908d7aae27-var-lib-calico\") pod \"calico-node-52jzt\" (UID: \"ca6ddf59-6611-4f81-8268-7c908d7aae27\") " pod="calico-system/calico-node-52jzt" Jan 14 01:23:48.585124 kubelet[2962]: I0114 01:23:48.585087 2962 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ca6ddf59-6611-4f81-8268-7c908d7aae27-var-run-calico\") pod \"calico-node-52jzt\" (UID: \"ca6ddf59-6611-4f81-8268-7c908d7aae27\") " pod="calico-system/calico-node-52jzt" Jan 14 01:23:48.585482 kubelet[2962]: I0114 01:23:48.585129 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca6ddf59-6611-4f81-8268-7c908d7aae27-xtables-lock\") pod \"calico-node-52jzt\" (UID: \"ca6ddf59-6611-4f81-8268-7c908d7aae27\") " pod="calico-system/calico-node-52jzt" Jan 14 01:23:48.585482 kubelet[2962]: I0114 01:23:48.585161 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ca6ddf59-6611-4f81-8268-7c908d7aae27-cni-net-dir\") pod \"calico-node-52jzt\" (UID: \"ca6ddf59-6611-4f81-8268-7c908d7aae27\") " pod="calico-system/calico-node-52jzt" Jan 14 01:23:48.585482 kubelet[2962]: I0114 01:23:48.585191 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ca6ddf59-6611-4f81-8268-7c908d7aae27-cni-log-dir\") pod \"calico-node-52jzt\" (UID: \"ca6ddf59-6611-4f81-8268-7c908d7aae27\") " pod="calico-system/calico-node-52jzt" Jan 14 01:23:48.585482 kubelet[2962]: I0114 01:23:48.585218 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ca6ddf59-6611-4f81-8268-7c908d7aae27-flexvol-driver-host\") pod \"calico-node-52jzt\" (UID: \"ca6ddf59-6611-4f81-8268-7c908d7aae27\") " pod="calico-system/calico-node-52jzt" Jan 14 01:23:48.585482 kubelet[2962]: I0114 01:23:48.585244 2962 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ca6ddf59-6611-4f81-8268-7c908d7aae27-node-certs\") pod \"calico-node-52jzt\" (UID: \"ca6ddf59-6611-4f81-8268-7c908d7aae27\") " pod="calico-system/calico-node-52jzt" Jan 14 01:23:48.585747 kubelet[2962]: I0114 01:23:48.585278 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ca6ddf59-6611-4f81-8268-7c908d7aae27-policysync\") pod \"calico-node-52jzt\" (UID: \"ca6ddf59-6611-4f81-8268-7c908d7aae27\") " pod="calico-system/calico-node-52jzt" Jan 14 01:23:48.585747 kubelet[2962]: I0114 01:23:48.585308 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ca6ddf59-6611-4f81-8268-7c908d7aae27-cni-bin-dir\") pod \"calico-node-52jzt\" (UID: \"ca6ddf59-6611-4f81-8268-7c908d7aae27\") " pod="calico-system/calico-node-52jzt" Jan 14 01:23:48.585747 kubelet[2962]: I0114 01:23:48.585335 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca6ddf59-6611-4f81-8268-7c908d7aae27-lib-modules\") pod \"calico-node-52jzt\" (UID: \"ca6ddf59-6611-4f81-8268-7c908d7aae27\") " pod="calico-system/calico-node-52jzt" Jan 14 01:23:48.587438 kubelet[2962]: E0114 01:23:48.586762 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" Jan 14 01:23:48.589554 kubelet[2962]: I0114 01:23:48.589511 2962 status_manager.go:890] "Failed to get status for pod" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" 
pod="calico-system/csi-node-driver-vqp7q" err="pods \"csi-node-driver-vqp7q\" is forbidden: User \"system:node:srv-aufav.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-aufav.gb1.brightbox.com' and this object" Jan 14 01:23:48.639272 containerd[1645]: time="2026-01-14T01:23:48.638861110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56f4576974-tqjts,Uid:e3154996-4894-4cd6-9a6e-d6e57530ab55,Namespace:calico-system,Attempt:0,}" Jan 14 01:23:48.682078 containerd[1645]: time="2026-01-14T01:23:48.681950025Z" level=info msg="connecting to shim 6791c209de1b3c4593e807212c066a9b40778168021624be978278038bb1e239" address="unix:///run/containerd/s/bb7e5c5ad88e64f788379f5305be1820673f88f2641b30e0fb0b2b6050a5f549" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:23:48.687019 kubelet[2962]: I0114 01:23:48.686438 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2-kubelet-dir\") pod \"csi-node-driver-vqp7q\" (UID: \"1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2\") " pod="calico-system/csi-node-driver-vqp7q" Jan 14 01:23:48.687533 kubelet[2962]: I0114 01:23:48.687197 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8nhm\" (UniqueName: \"kubernetes.io/projected/1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2-kube-api-access-b8nhm\") pod \"csi-node-driver-vqp7q\" (UID: \"1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2\") " pod="calico-system/csi-node-driver-vqp7q" Jan 14 01:23:48.688881 kubelet[2962]: I0114 01:23:48.688836 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2-registration-dir\") pod \"csi-node-driver-vqp7q\" (UID: 
\"1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2\") " pod="calico-system/csi-node-driver-vqp7q" Jan 14 01:23:48.688972 kubelet[2962]: I0114 01:23:48.688901 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2-socket-dir\") pod \"csi-node-driver-vqp7q\" (UID: \"1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2\") " pod="calico-system/csi-node-driver-vqp7q" Jan 14 01:23:48.688972 kubelet[2962]: I0114 01:23:48.688936 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2-varrun\") pod \"csi-node-driver-vqp7q\" (UID: \"1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2\") " pod="calico-system/csi-node-driver-vqp7q" Jan 14 01:23:48.708911 kubelet[2962]: E0114 01:23:48.708095 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.708911 kubelet[2962]: W0114 01:23:48.708275 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.709873 kubelet[2962]: E0114 01:23:48.709839 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:48.720104 kubelet[2962]: E0114 01:23:48.720068 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.720568 kubelet[2962]: W0114 01:23:48.720539 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.720687 kubelet[2962]: E0114 01:23:48.720651 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:48.722962 kubelet[2962]: E0114 01:23:48.722936 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.722962 kubelet[2962]: W0114 01:23:48.722958 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.723831 kubelet[2962]: E0114 01:23:48.722975 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:48.748119 systemd[1]: Started cri-containerd-6791c209de1b3c4593e807212c066a9b40778168021624be978278038bb1e239.scope - libcontainer container 6791c209de1b3c4593e807212c066a9b40778168021624be978278038bb1e239. 
Jan 14 01:23:48.769000 audit: BPF prog-id=155 op=LOAD Jan 14 01:23:48.770000 audit: BPF prog-id=156 op=LOAD Jan 14 01:23:48.770000 audit[3386]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3373 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:48.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637393163323039646531623363343539336538303732313263303636 Jan 14 01:23:48.770000 audit: BPF prog-id=156 op=UNLOAD Jan 14 01:23:48.770000 audit[3386]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3373 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:48.770000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637393163323039646531623363343539336538303732313263303636 Jan 14 01:23:48.771000 audit: BPF prog-id=157 op=LOAD Jan 14 01:23:48.771000 audit[3386]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3373 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:48.771000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637393163323039646531623363343539336538303732313263303636 Jan 14 01:23:48.771000 audit: BPF prog-id=158 op=LOAD Jan 14 01:23:48.771000 audit[3386]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3373 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:48.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637393163323039646531623363343539336538303732313263303636 Jan 14 01:23:48.771000 audit: BPF prog-id=158 op=UNLOAD Jan 14 01:23:48.771000 audit[3386]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3373 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:48.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637393163323039646531623363343539336538303732313263303636 Jan 14 01:23:48.771000 audit: BPF prog-id=157 op=UNLOAD Jan 14 01:23:48.771000 audit[3386]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3373 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:23:48.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637393163323039646531623363343539336538303732313263303636 Jan 14 01:23:48.771000 audit: BPF prog-id=159 op=LOAD Jan 14 01:23:48.771000 audit[3386]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3373 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:48.771000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637393163323039646531623363343539336538303732313263303636 Jan 14 01:23:48.784090 containerd[1645]: time="2026-01-14T01:23:48.784029165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-52jzt,Uid:ca6ddf59-6611-4f81-8268-7c908d7aae27,Namespace:calico-system,Attempt:0,}" Jan 14 01:23:48.790287 kubelet[2962]: E0114 01:23:48.790255 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.790436 kubelet[2962]: W0114 01:23:48.790406 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.790518 kubelet[2962]: E0114 01:23:48.790444 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:48.791252 kubelet[2962]: E0114 01:23:48.791227 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.791252 kubelet[2962]: W0114 01:23:48.791248 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.791385 kubelet[2962]: E0114 01:23:48.791272 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:48.791858 kubelet[2962]: E0114 01:23:48.791831 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.791858 kubelet[2962]: W0114 01:23:48.791856 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.792767 kubelet[2962]: E0114 01:23:48.792099 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:48.792767 kubelet[2962]: E0114 01:23:48.792290 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.792767 kubelet[2962]: W0114 01:23:48.792304 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.792767 kubelet[2962]: E0114 01:23:48.792320 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:48.793175 kubelet[2962]: E0114 01:23:48.793150 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.793175 kubelet[2962]: W0114 01:23:48.793171 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.793307 kubelet[2962]: E0114 01:23:48.793210 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:48.793612 kubelet[2962]: E0114 01:23:48.793582 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.793686 kubelet[2962]: W0114 01:23:48.793620 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.793686 kubelet[2962]: E0114 01:23:48.793651 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:48.793998 kubelet[2962]: E0114 01:23:48.793973 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.793998 kubelet[2962]: W0114 01:23:48.793994 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.794103 kubelet[2962]: E0114 01:23:48.794011 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:48.794432 kubelet[2962]: E0114 01:23:48.794407 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.794432 kubelet[2962]: W0114 01:23:48.794427 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.794540 kubelet[2962]: E0114 01:23:48.794489 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:48.801558 kubelet[2962]: E0114 01:23:48.801518 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.801558 kubelet[2962]: W0114 01:23:48.801542 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.801727 kubelet[2962]: E0114 01:23:48.801568 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:48.802023 kubelet[2962]: E0114 01:23:48.801976 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.802023 kubelet[2962]: W0114 01:23:48.801997 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.802170 kubelet[2962]: E0114 01:23:48.802031 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:48.802403 kubelet[2962]: E0114 01:23:48.802379 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.802476 kubelet[2962]: W0114 01:23:48.802400 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.802575 kubelet[2962]: E0114 01:23:48.802549 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:48.802933 kubelet[2962]: E0114 01:23:48.802909 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.802933 kubelet[2962]: W0114 01:23:48.802930 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.803928 kubelet[2962]: E0114 01:23:48.803898 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:48.804148 kubelet[2962]: E0114 01:23:48.804122 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.804148 kubelet[2962]: W0114 01:23:48.804144 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.804294 kubelet[2962]: E0114 01:23:48.804271 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:48.804535 kubelet[2962]: E0114 01:23:48.804510 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.804535 kubelet[2962]: W0114 01:23:48.804529 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.804656 kubelet[2962]: E0114 01:23:48.804621 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:48.804928 kubelet[2962]: E0114 01:23:48.804894 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.804928 kubelet[2962]: W0114 01:23:48.804915 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.805093 kubelet[2962]: E0114 01:23:48.805065 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:48.805525 kubelet[2962]: E0114 01:23:48.805500 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.805525 kubelet[2962]: W0114 01:23:48.805522 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.805941 kubelet[2962]: E0114 01:23:48.805833 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:48.806052 kubelet[2962]: E0114 01:23:48.806026 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.806052 kubelet[2962]: W0114 01:23:48.806048 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.806834 kubelet[2962]: E0114 01:23:48.806737 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:48.807107 kubelet[2962]: E0114 01:23:48.807052 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.807107 kubelet[2962]: W0114 01:23:48.807104 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.807228 kubelet[2962]: E0114 01:23:48.807214 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:48.807518 kubelet[2962]: E0114 01:23:48.807492 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.807518 kubelet[2962]: W0114 01:23:48.807513 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.807647 kubelet[2962]: E0114 01:23:48.807618 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:48.807913 kubelet[2962]: E0114 01:23:48.807888 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.807913 kubelet[2962]: W0114 01:23:48.807908 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.808069 kubelet[2962]: E0114 01:23:48.808046 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:48.808274 kubelet[2962]: E0114 01:23:48.808249 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.808274 kubelet[2962]: W0114 01:23:48.808270 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.808627 kubelet[2962]: E0114 01:23:48.808598 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:48.808999 kubelet[2962]: E0114 01:23:48.808973 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.808999 kubelet[2962]: W0114 01:23:48.808995 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.809141 kubelet[2962]: E0114 01:23:48.809118 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:48.809917 kubelet[2962]: E0114 01:23:48.809893 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.809917 kubelet[2962]: W0114 01:23:48.809914 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.810030 kubelet[2962]: E0114 01:23:48.809938 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:48.810719 kubelet[2962]: E0114 01:23:48.810693 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.810719 kubelet[2962]: W0114 01:23:48.810715 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.811397 kubelet[2962]: E0114 01:23:48.810861 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:48.811817 kubelet[2962]: E0114 01:23:48.811730 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.811897 kubelet[2962]: W0114 01:23:48.811843 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.811897 kubelet[2962]: E0114 01:23:48.811876 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:48.844222 kubelet[2962]: E0114 01:23:48.844031 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:48.844222 kubelet[2962]: W0114 01:23:48.844061 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:48.844222 kubelet[2962]: E0114 01:23:48.844090 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:48.848694 containerd[1645]: time="2026-01-14T01:23:48.848628126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56f4576974-tqjts,Uid:e3154996-4894-4cd6-9a6e-d6e57530ab55,Namespace:calico-system,Attempt:0,} returns sandbox id \"6791c209de1b3c4593e807212c066a9b40778168021624be978278038bb1e239\"" Jan 14 01:23:48.854535 containerd[1645]: time="2026-01-14T01:23:48.854441348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 01:23:48.867517 containerd[1645]: time="2026-01-14T01:23:48.867073683Z" level=info msg="connecting to shim 0cf8103e732ca07eddd795de4e0fc806c87cefc92204f475383e5eeef85de621" address="unix:///run/containerd/s/491f016c9b97042d45b65a6466ca6a19927257dfca953cb1aeb5137ce6ecd354" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:23:48.896078 systemd[1]: Started cri-containerd-0cf8103e732ca07eddd795de4e0fc806c87cefc92204f475383e5eeef85de621.scope - libcontainer container 0cf8103e732ca07eddd795de4e0fc806c87cefc92204f475383e5eeef85de621. 
Jan 14 01:23:48.921000 audit: BPF prog-id=160 op=LOAD Jan 14 01:23:48.923000 audit: BPF prog-id=161 op=LOAD Jan 14 01:23:48.923000 audit[3463]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=3452 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:48.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663831303365373332636130376564646437393564653465306663 Jan 14 01:23:48.923000 audit: BPF prog-id=161 op=UNLOAD Jan 14 01:23:48.923000 audit[3463]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3452 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:48.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663831303365373332636130376564646437393564653465306663 Jan 14 01:23:48.923000 audit: BPF prog-id=162 op=LOAD Jan 14 01:23:48.923000 audit[3463]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=3452 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:48.923000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663831303365373332636130376564646437393564653465306663 Jan 14 01:23:48.923000 audit: BPF prog-id=163 op=LOAD Jan 14 01:23:48.923000 audit[3463]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=3452 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:48.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663831303365373332636130376564646437393564653465306663 Jan 14 01:23:48.923000 audit: BPF prog-id=163 op=UNLOAD Jan 14 01:23:48.923000 audit[3463]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3452 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:48.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663831303365373332636130376564646437393564653465306663 Jan 14 01:23:48.923000 audit: BPF prog-id=162 op=UNLOAD Jan 14 01:23:48.923000 audit[3463]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3452 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:23:48.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663831303365373332636130376564646437393564653465306663 Jan 14 01:23:48.923000 audit: BPF prog-id=164 op=LOAD Jan 14 01:23:48.923000 audit[3463]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=3452 pid=3463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:48.923000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063663831303365373332636130376564646437393564653465306663 Jan 14 01:23:48.957065 containerd[1645]: time="2026-01-14T01:23:48.957014229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-52jzt,Uid:ca6ddf59-6611-4f81-8268-7c908d7aae27,Namespace:calico-system,Attempt:0,} returns sandbox id \"0cf8103e732ca07eddd795de4e0fc806c87cefc92204f475383e5eeef85de621\"" Jan 14 01:23:49.426000 audit[3489]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3489 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:49.426000 audit[3489]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc0b3d9430 a2=0 a3=7ffc0b3d941c items=0 ppid=3108 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:49.426000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
Jan 14 01:23:49.433000 audit[3489]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3489 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:23:49.433000 audit[3489]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc0b3d9430 a2=0 a3=0 items=0 ppid=3108 pid=3489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:49.433000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:23:49.794963 kubelet[2962]: E0114 01:23:49.794282 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" Jan 14 01:23:50.398598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2462039603.mount: Deactivated successfully. 
Jan 14 01:23:51.795808 kubelet[2962]: E0114 01:23:51.795235 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" Jan 14 01:23:52.372894 containerd[1645]: time="2026-01-14T01:23:52.372771211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:23:52.375210 containerd[1645]: time="2026-01-14T01:23:52.375148707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 14 01:23:52.383827 containerd[1645]: time="2026-01-14T01:23:52.383764726Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:23:52.391489 containerd[1645]: time="2026-01-14T01:23:52.390762515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:23:52.391812 containerd[1645]: time="2026-01-14T01:23:52.391449340Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.536932206s" Jan 14 01:23:52.391985 containerd[1645]: time="2026-01-14T01:23:52.391949443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 01:23:52.394140 containerd[1645]: time="2026-01-14T01:23:52.394081444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 01:23:52.423625 containerd[1645]: time="2026-01-14T01:23:52.423147521Z" level=info msg="CreateContainer within sandbox \"6791c209de1b3c4593e807212c066a9b40778168021624be978278038bb1e239\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 01:23:52.447849 containerd[1645]: time="2026-01-14T01:23:52.446915665Z" level=info msg="Container 19ab3f7253505f6878a2d616e74f71e2cb3ca77facc1145793021da4df3b4f10: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:23:52.458647 containerd[1645]: time="2026-01-14T01:23:52.458457716Z" level=info msg="CreateContainer within sandbox \"6791c209de1b3c4593e807212c066a9b40778168021624be978278038bb1e239\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"19ab3f7253505f6878a2d616e74f71e2cb3ca77facc1145793021da4df3b4f10\"" Jan 14 01:23:52.459703 containerd[1645]: time="2026-01-14T01:23:52.459671463Z" level=info msg="StartContainer for \"19ab3f7253505f6878a2d616e74f71e2cb3ca77facc1145793021da4df3b4f10\"" Jan 14 01:23:52.462632 containerd[1645]: time="2026-01-14T01:23:52.462580188Z" level=info msg="connecting to shim 19ab3f7253505f6878a2d616e74f71e2cb3ca77facc1145793021da4df3b4f10" address="unix:///run/containerd/s/bb7e5c5ad88e64f788379f5305be1820673f88f2641b30e0fb0b2b6050a5f549" protocol=ttrpc version=3 Jan 14 01:23:52.542377 systemd[1]: Started cri-containerd-19ab3f7253505f6878a2d616e74f71e2cb3ca77facc1145793021da4df3b4f10.scope - libcontainer container 19ab3f7253505f6878a2d616e74f71e2cb3ca77facc1145793021da4df3b4f10. 
Jan 14 01:23:52.583000 audit: BPF prog-id=165 op=LOAD Jan 14 01:23:52.589983 kernel: kauditd_printk_skb: 61 callbacks suppressed Jan 14 01:23:52.590123 kernel: audit: type=1334 audit(1768353832.583:554): prog-id=165 op=LOAD Jan 14 01:23:52.592000 audit: BPF prog-id=166 op=LOAD Jan 14 01:23:52.595813 kernel: audit: type=1334 audit(1768353832.592:555): prog-id=166 op=LOAD Jan 14 01:23:52.592000 audit[3500]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3373 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:52.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616233663732353335303566363837386132643631366537346637 Jan 14 01:23:52.603390 kernel: audit: type=1300 audit(1768353832.592:555): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3373 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:52.603596 kernel: audit: type=1327 audit(1768353832.592:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616233663732353335303566363837386132643631366537346637 Jan 14 01:23:52.592000 audit: BPF prog-id=166 op=UNLOAD Jan 14 01:23:52.607184 kernel: audit: type=1334 audit(1768353832.592:556): prog-id=166 op=UNLOAD Jan 14 01:23:52.592000 audit[3500]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3373 pid=3500 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:52.609916 kernel: audit: type=1300 audit(1768353832.592:556): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3373 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:52.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616233663732353335303566363837386132643631366537346637 Jan 14 01:23:52.615001 kernel: audit: type=1327 audit(1768353832.592:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616233663732353335303566363837386132643631366537346637 Jan 14 01:23:52.594000 audit: BPF prog-id=167 op=LOAD Jan 14 01:23:52.619597 kernel: audit: type=1334 audit(1768353832.594:557): prog-id=167 op=LOAD Jan 14 01:23:52.619686 kernel: audit: type=1300 audit(1768353832.594:557): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3373 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:52.594000 audit[3500]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3373 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:23:52.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616233663732353335303566363837386132643631366537346637 Jan 14 01:23:52.626507 kernel: audit: type=1327 audit(1768353832.594:557): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616233663732353335303566363837386132643631366537346637 Jan 14 01:23:52.594000 audit: BPF prog-id=168 op=LOAD Jan 14 01:23:52.594000 audit[3500]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3373 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:52.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616233663732353335303566363837386132643631366537346637 Jan 14 01:23:52.594000 audit: BPF prog-id=168 op=UNLOAD Jan 14 01:23:52.594000 audit[3500]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3373 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:52.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616233663732353335303566363837386132643631366537346637 
Jan 14 01:23:52.594000 audit: BPF prog-id=167 op=UNLOAD
Jan 14 01:23:52.594000 audit[3500]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3373 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:23:52.594000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616233663732353335303566363837386132643631366537346637
Jan 14 01:23:52.595000 audit: BPF prog-id=169 op=LOAD
Jan 14 01:23:52.595000 audit[3500]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3373 pid=3500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:23:52.595000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139616233663732353335303566363837386132643631366537346637
Jan 14 01:23:52.678873 containerd[1645]: time="2026-01-14T01:23:52.678210227Z" level=info msg="StartContainer for \"19ab3f7253505f6878a2d616e74f71e2cb3ca77facc1145793021da4df3b4f10\" returns successfully"
Jan 14 01:23:52.992967 kubelet[2962]: E0114 01:23:52.992885 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 01:23:52.992967 kubelet[2962]: W0114 01:23:52.992949 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 01:23:52.997297 kubelet[2962]: E0114 01:23:52.996968 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 01:23:53.794661 kubelet[2962]: E0114 01:23:53.794134 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2"
Jan 14 01:23:53.926832 kubelet[2962]: I0114 01:23:53.926632 2962 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 14 01:23:54.064189 kubelet[2962]: E0114 01:23:54.064160 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 14 01:23:54.064189 kubelet[2962]: W0114 01:23:54.064180 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 14 01:23:54.064364 kubelet[2962]: E0114 01:23:54.064307 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:54.065523 kubelet[2962]: E0114 01:23:54.065499 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:54.065523 kubelet[2962]: W0114 01:23:54.065519 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:54.065957 kubelet[2962]: E0114 01:23:54.065881 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:54.066069 kubelet[2962]: E0114 01:23:54.065998 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:54.066069 kubelet[2962]: W0114 01:23:54.066056 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:54.066324 kubelet[2962]: E0114 01:23:54.066109 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:54.066324 kubelet[2962]: E0114 01:23:54.066317 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:54.066411 kubelet[2962]: W0114 01:23:54.066331 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:54.066470 kubelet[2962]: E0114 01:23:54.066424 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:54.066634 kubelet[2962]: E0114 01:23:54.066614 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:54.066634 kubelet[2962]: W0114 01:23:54.066633 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:54.066754 kubelet[2962]: E0114 01:23:54.066666 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:54.067195 kubelet[2962]: E0114 01:23:54.067173 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:54.067195 kubelet[2962]: W0114 01:23:54.067193 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:54.067304 kubelet[2962]: E0114 01:23:54.067228 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:54.068025 kubelet[2962]: E0114 01:23:54.067779 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:54.068025 kubelet[2962]: W0114 01:23:54.067848 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:54.068025 kubelet[2962]: E0114 01:23:54.067871 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:54.068025 kubelet[2962]: E0114 01:23:54.068120 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:54.068025 kubelet[2962]: W0114 01:23:54.068133 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:54.068025 kubelet[2962]: E0114 01:23:54.068149 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:54.069807 kubelet[2962]: E0114 01:23:54.069327 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:54.069807 kubelet[2962]: W0114 01:23:54.069346 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:54.069807 kubelet[2962]: E0114 01:23:54.069440 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:54.072844 kubelet[2962]: E0114 01:23:54.071181 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:54.073149 kubelet[2962]: W0114 01:23:54.073031 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:54.073223 kubelet[2962]: E0114 01:23:54.073168 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:54.073674 kubelet[2962]: E0114 01:23:54.073579 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:54.073674 kubelet[2962]: W0114 01:23:54.073600 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:54.073674 kubelet[2962]: E0114 01:23:54.073646 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:54.074288 kubelet[2962]: E0114 01:23:54.074078 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:54.074288 kubelet[2962]: W0114 01:23:54.074126 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:54.074288 kubelet[2962]: E0114 01:23:54.074173 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:54.074570 kubelet[2962]: E0114 01:23:54.074550 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:54.074672 kubelet[2962]: W0114 01:23:54.074651 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:54.074802 kubelet[2962]: E0114 01:23:54.074766 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:23:54.075490 kubelet[2962]: E0114 01:23:54.075470 2962 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:23:54.075621 kubelet[2962]: W0114 01:23:54.075570 2962 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:23:54.075828 kubelet[2962]: E0114 01:23:54.075721 2962 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:23:54.138433 containerd[1645]: time="2026-01-14T01:23:54.138067629Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:23:54.139956 containerd[1645]: time="2026-01-14T01:23:54.139634306Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 14 01:23:54.141510 containerd[1645]: time="2026-01-14T01:23:54.141467990Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:23:54.145252 containerd[1645]: time="2026-01-14T01:23:54.145214183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:23:54.146494 containerd[1645]: time="2026-01-14T01:23:54.146450863Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.75231395s" Jan 14 01:23:54.146593 containerd[1645]: time="2026-01-14T01:23:54.146498463Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 01:23:54.150227 containerd[1645]: time="2026-01-14T01:23:54.150180086Z" level=info msg="CreateContainer within sandbox \"0cf8103e732ca07eddd795de4e0fc806c87cefc92204f475383e5eeef85de621\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 01:23:54.167217 containerd[1645]: time="2026-01-14T01:23:54.165041950Z" level=info msg="Container 6a4de690b602be9613eaec6c467c94703dbcbc452449232e5712629bf4032d0a: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:23:54.191691 containerd[1645]: time="2026-01-14T01:23:54.191390475Z" level=info msg="CreateContainer within sandbox \"0cf8103e732ca07eddd795de4e0fc806c87cefc92204f475383e5eeef85de621\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6a4de690b602be9613eaec6c467c94703dbcbc452449232e5712629bf4032d0a\"" Jan 14 01:23:54.194658 containerd[1645]: time="2026-01-14T01:23:54.194600942Z" level=info msg="StartContainer for \"6a4de690b602be9613eaec6c467c94703dbcbc452449232e5712629bf4032d0a\"" Jan 14 01:23:54.199680 containerd[1645]: time="2026-01-14T01:23:54.199601142Z" level=info msg="connecting to shim 6a4de690b602be9613eaec6c467c94703dbcbc452449232e5712629bf4032d0a" address="unix:///run/containerd/s/491f016c9b97042d45b65a6466ca6a19927257dfca953cb1aeb5137ce6ecd354" protocol=ttrpc version=3 Jan 14 01:23:54.247118 systemd[1]: Started cri-containerd-6a4de690b602be9613eaec6c467c94703dbcbc452449232e5712629bf4032d0a.scope - libcontainer container 
6a4de690b602be9613eaec6c467c94703dbcbc452449232e5712629bf4032d0a. Jan 14 01:23:54.329000 audit: BPF prog-id=170 op=LOAD Jan 14 01:23:54.329000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=3452 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:54.329000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661346465363930623630326265393631336561656336633436376339 Jan 14 01:23:54.330000 audit: BPF prog-id=171 op=LOAD Jan 14 01:23:54.330000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=3452 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:54.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661346465363930623630326265393631336561656336633436376339 Jan 14 01:23:54.330000 audit: BPF prog-id=171 op=UNLOAD Jan 14 01:23:54.330000 audit[3614]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3452 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:54.330000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661346465363930623630326265393631336561656336633436376339 Jan 14 01:23:54.330000 audit: BPF prog-id=170 op=UNLOAD Jan 14 01:23:54.330000 audit[3614]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3452 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:54.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661346465363930623630326265393631336561656336633436376339 Jan 14 01:23:54.330000 audit: BPF prog-id=172 op=LOAD Jan 14 01:23:54.330000 audit[3614]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=3452 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:23:54.330000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661346465363930623630326265393631336561656336633436376339 Jan 14 01:23:54.381457 containerd[1645]: time="2026-01-14T01:23:54.381348881Z" level=info msg="StartContainer for \"6a4de690b602be9613eaec6c467c94703dbcbc452449232e5712629bf4032d0a\" returns successfully" Jan 14 01:23:54.386852 systemd[1]: cri-containerd-6a4de690b602be9613eaec6c467c94703dbcbc452449232e5712629bf4032d0a.scope: Deactivated successfully. 
Jan 14 01:23:54.392000 audit: BPF prog-id=172 op=UNLOAD Jan 14 01:23:54.480876 containerd[1645]: time="2026-01-14T01:23:54.480738434Z" level=info msg="received container exit event container_id:\"6a4de690b602be9613eaec6c467c94703dbcbc452449232e5712629bf4032d0a\" id:\"6a4de690b602be9613eaec6c467c94703dbcbc452449232e5712629bf4032d0a\" pid:3627 exited_at:{seconds:1768353834 nanos:394513840}" Jan 14 01:23:54.528719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a4de690b602be9613eaec6c467c94703dbcbc452449232e5712629bf4032d0a-rootfs.mount: Deactivated successfully. Jan 14 01:23:54.943089 containerd[1645]: time="2026-01-14T01:23:54.943011548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 01:23:54.969347 kubelet[2962]: I0114 01:23:54.968414 2962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-56f4576974-tqjts" podStartSLOduration=3.425548462 podStartE2EDuration="6.967457184s" podCreationTimestamp="2026-01-14 01:23:48 +0000 UTC" firstStartedPulling="2026-01-14 01:23:48.851902028 +0000 UTC m=+23.391760013" lastFinishedPulling="2026-01-14 01:23:52.393810737 +0000 UTC m=+26.933668735" observedRunningTime="2026-01-14 01:23:53.087111904 +0000 UTC m=+27.626969902" watchObservedRunningTime="2026-01-14 01:23:54.967457184 +0000 UTC m=+29.507315186" Jan 14 01:23:55.795730 kubelet[2962]: E0114 01:23:55.795603 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" Jan 14 01:23:57.797065 kubelet[2962]: E0114 01:23:57.797001 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
cni plugin not initialized" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" Jan 14 01:23:59.793933 kubelet[2962]: E0114 01:23:59.793811 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" Jan 14 01:24:01.794120 kubelet[2962]: E0114 01:24:01.794027 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" Jan 14 01:24:02.852000 audit[3671]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3671 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:24:02.859065 kernel: kauditd_printk_skb: 28 callbacks suppressed Jan 14 01:24:02.859165 kernel: audit: type=1325 audit(1768353842.852:568): table=filter:119 family=2 entries=21 op=nft_register_rule pid=3671 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:24:02.852000 audit[3671]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc2df43900 a2=0 a3=7ffc2df438ec items=0 ppid=3108 pid=3671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:02.868812 kernel: audit: type=1300 audit(1768353842.852:568): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc2df43900 a2=0 a3=7ffc2df438ec items=0 ppid=3108 pid=3671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:02.852000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:24:02.872842 kernel: audit: type=1327 audit(1768353842.852:568): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:24:02.869000 audit[3671]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3671 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:24:02.877892 kernel: audit: type=1325 audit(1768353842.869:569): table=nat:120 family=2 entries=19 op=nft_register_chain pid=3671 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:24:02.869000 audit[3671]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc2df43900 a2=0 a3=7ffc2df438ec items=0 ppid=3108 pid=3671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:02.869000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:24:02.885461 kernel: audit: type=1300 audit(1768353842.869:569): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc2df43900 a2=0 a3=7ffc2df438ec items=0 ppid=3108 pid=3671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:02.885539 kernel: audit: type=1327 audit(1768353842.869:569): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:24:03.795221 kubelet[2962]: E0114 01:24:03.795103 2962 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" Jan 14 01:24:03.912806 containerd[1645]: time="2026-01-14T01:24:03.912700769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:03.914269 containerd[1645]: time="2026-01-14T01:24:03.914024129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 14 01:24:03.915051 containerd[1645]: time="2026-01-14T01:24:03.915008641Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:03.920858 containerd[1645]: time="2026-01-14T01:24:03.920801952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:03.922316 containerd[1645]: time="2026-01-14T01:24:03.922235772Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 8.97916159s" Jan 14 01:24:03.922316 containerd[1645]: time="2026-01-14T01:24:03.922280793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 01:24:03.927879 containerd[1645]: 
time="2026-01-14T01:24:03.927253008Z" level=info msg="CreateContainer within sandbox \"0cf8103e732ca07eddd795de4e0fc806c87cefc92204f475383e5eeef85de621\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 01:24:03.942055 containerd[1645]: time="2026-01-14T01:24:03.940339799Z" level=info msg="Container f6165b40c669a1930e229904864133d0993834403397e2c3bcf69a743a4d9a47: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:24:03.953573 containerd[1645]: time="2026-01-14T01:24:03.953477717Z" level=info msg="CreateContainer within sandbox \"0cf8103e732ca07eddd795de4e0fc806c87cefc92204f475383e5eeef85de621\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f6165b40c669a1930e229904864133d0993834403397e2c3bcf69a743a4d9a47\"" Jan 14 01:24:03.955001 containerd[1645]: time="2026-01-14T01:24:03.954741201Z" level=info msg="StartContainer for \"f6165b40c669a1930e229904864133d0993834403397e2c3bcf69a743a4d9a47\"" Jan 14 01:24:03.957874 containerd[1645]: time="2026-01-14T01:24:03.957837731Z" level=info msg="connecting to shim f6165b40c669a1930e229904864133d0993834403397e2c3bcf69a743a4d9a47" address="unix:///run/containerd/s/491f016c9b97042d45b65a6466ca6a19927257dfca953cb1aeb5137ce6ecd354" protocol=ttrpc version=3 Jan 14 01:24:04.040573 systemd[1]: Started cri-containerd-f6165b40c669a1930e229904864133d0993834403397e2c3bcf69a743a4d9a47.scope - libcontainer container f6165b40c669a1930e229904864133d0993834403397e2c3bcf69a743a4d9a47. 
Jan 14 01:24:04.113000 audit: BPF prog-id=173 op=LOAD Jan 14 01:24:04.120586 kernel: audit: type=1334 audit(1768353844.113:570): prog-id=173 op=LOAD Jan 14 01:24:04.113000 audit[3676]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3452 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:04.126823 kernel: audit: type=1300 audit(1768353844.113:570): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3452 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:04.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636313635623430633636396131393330653232393930343836343133 Jan 14 01:24:04.133804 kernel: audit: type=1327 audit(1768353844.113:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636313635623430633636396131393330653232393930343836343133 Jan 14 01:24:04.119000 audit: BPF prog-id=174 op=LOAD Jan 14 01:24:04.119000 audit[3676]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3452 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:04.119000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636313635623430633636396131393330653232393930343836343133 Jan 14 01:24:04.119000 audit: BPF prog-id=174 op=UNLOAD Jan 14 01:24:04.119000 audit[3676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3452 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:04.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636313635623430633636396131393330653232393930343836343133 Jan 14 01:24:04.119000 audit: BPF prog-id=173 op=UNLOAD Jan 14 01:24:04.119000 audit[3676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3452 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:04.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636313635623430633636396131393330653232393930343836343133 Jan 14 01:24:04.119000 audit: BPF prog-id=175 op=LOAD Jan 14 01:24:04.136939 kernel: audit: type=1334 audit(1768353844.119:571): prog-id=174 op=LOAD Jan 14 01:24:04.119000 audit[3676]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3452 pid=3676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:04.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636313635623430633636396131393330653232393930343836343133 Jan 14 01:24:04.167458 containerd[1645]: time="2026-01-14T01:24:04.167295106Z" level=info msg="StartContainer for \"f6165b40c669a1930e229904864133d0993834403397e2c3bcf69a743a4d9a47\" returns successfully" Jan 14 01:24:05.107015 systemd[1]: cri-containerd-f6165b40c669a1930e229904864133d0993834403397e2c3bcf69a743a4d9a47.scope: Deactivated successfully. Jan 14 01:24:05.107506 systemd[1]: cri-containerd-f6165b40c669a1930e229904864133d0993834403397e2c3bcf69a743a4d9a47.scope: Consumed 763ms CPU time, 160M memory peak, 7M read from disk, 171.3M written to disk. Jan 14 01:24:05.109000 audit: BPF prog-id=175 op=UNLOAD Jan 14 01:24:05.129484 containerd[1645]: time="2026-01-14T01:24:05.129415856Z" level=info msg="received container exit event container_id:\"f6165b40c669a1930e229904864133d0993834403397e2c3bcf69a743a4d9a47\" id:\"f6165b40c669a1930e229904864133d0993834403397e2c3bcf69a743a4d9a47\" pid:3689 exited_at:{seconds:1768353845 nanos:128088816}" Jan 14 01:24:05.169890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6165b40c669a1930e229904864133d0993834403397e2c3bcf69a743a4d9a47-rootfs.mount: Deactivated successfully. Jan 14 01:24:05.188999 kubelet[2962]: I0114 01:24:05.188958 2962 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 01:24:05.308262 systemd[1]: Created slice kubepods-burstable-pod26746a71_3f83_4a0a_9dda_605fbba4fc61.slice - libcontainer container kubepods-burstable-pod26746a71_3f83_4a0a_9dda_605fbba4fc61.slice. 
Jan 14 01:24:05.320805 kubelet[2962]: I0114 01:24:05.320579 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/611c348f-b209-4156-bf37-8d53c837267b-calico-apiserver-certs\") pod \"calico-apiserver-84b8b5c58c-hcr5n\" (UID: \"611c348f-b209-4156-bf37-8d53c837267b\") " pod="calico-apiserver/calico-apiserver-84b8b5c58c-hcr5n" Jan 14 01:24:05.326813 kubelet[2962]: I0114 01:24:05.326632 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xh6s\" (UniqueName: \"kubernetes.io/projected/611c348f-b209-4156-bf37-8d53c837267b-kube-api-access-7xh6s\") pod \"calico-apiserver-84b8b5c58c-hcr5n\" (UID: \"611c348f-b209-4156-bf37-8d53c837267b\") " pod="calico-apiserver/calico-apiserver-84b8b5c58c-hcr5n" Jan 14 01:24:05.329154 kubelet[2962]: I0114 01:24:05.328986 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26746a71-3f83-4a0a-9dda-605fbba4fc61-config-volume\") pod \"coredns-668d6bf9bc-p8d54\" (UID: \"26746a71-3f83-4a0a-9dda-605fbba4fc61\") " pod="kube-system/coredns-668d6bf9bc-p8d54" Jan 14 01:24:05.329513 kubelet[2962]: I0114 01:24:05.329340 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq28b\" (UniqueName: \"kubernetes.io/projected/26746a71-3f83-4a0a-9dda-605fbba4fc61-kube-api-access-qq28b\") pod \"coredns-668d6bf9bc-p8d54\" (UID: \"26746a71-3f83-4a0a-9dda-605fbba4fc61\") " pod="kube-system/coredns-668d6bf9bc-p8d54" Jan 14 01:24:05.336574 systemd[1]: Created slice kubepods-besteffort-pod611c348f_b209_4156_bf37_8d53c837267b.slice - libcontainer container kubepods-besteffort-pod611c348f_b209_4156_bf37_8d53c837267b.slice. 
Jan 14 01:24:05.364724 systemd[1]: Created slice kubepods-besteffort-pod3be20087_8a42_4c3d_9995_444849dfca4c.slice - libcontainer container kubepods-besteffort-pod3be20087_8a42_4c3d_9995_444849dfca4c.slice. Jan 14 01:24:05.384056 systemd[1]: Created slice kubepods-burstable-pod6e870825_3e39_4a0f_8778_2b70ab8b554e.slice - libcontainer container kubepods-burstable-pod6e870825_3e39_4a0f_8778_2b70ab8b554e.slice. Jan 14 01:24:05.397054 systemd[1]: Created slice kubepods-besteffort-pod4b48dc0c_ba9e_44a4_9e2b_58fe2a3b2fa1.slice - libcontainer container kubepods-besteffort-pod4b48dc0c_ba9e_44a4_9e2b_58fe2a3b2fa1.slice. Jan 14 01:24:05.411733 systemd[1]: Created slice kubepods-besteffort-pod6af88ae8_33db_47f6_8963_68a47a1d9783.slice - libcontainer container kubepods-besteffort-pod6af88ae8_33db_47f6_8963_68a47a1d9783.slice. Jan 14 01:24:05.421818 systemd[1]: Created slice kubepods-besteffort-pod56181817_3b69_45cb_ad6c_2ef729a912ab.slice - libcontainer container kubepods-besteffort-pod56181817_3b69_45cb_ad6c_2ef729a912ab.slice. 
Jan 14 01:24:05.430944 kubelet[2962]: I0114 01:24:05.430695 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3be20087-8a42-4c3d-9995-444849dfca4c-whisker-ca-bundle\") pod \"whisker-5d87db9b87-z6lnj\" (UID: \"3be20087-8a42-4c3d-9995-444849dfca4c\") " pod="calico-system/whisker-5d87db9b87-z6lnj" Jan 14 01:24:05.431321 kubelet[2962]: I0114 01:24:05.431167 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3be20087-8a42-4c3d-9995-444849dfca4c-whisker-backend-key-pair\") pod \"whisker-5d87db9b87-z6lnj\" (UID: \"3be20087-8a42-4c3d-9995-444849dfca4c\") " pod="calico-system/whisker-5d87db9b87-z6lnj" Jan 14 01:24:05.431749 kubelet[2962]: I0114 01:24:05.431445 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e870825-3e39-4a0f-8778-2b70ab8b554e-config-volume\") pod \"coredns-668d6bf9bc-55jfn\" (UID: \"6e870825-3e39-4a0f-8778-2b70ab8b554e\") " pod="kube-system/coredns-668d6bf9bc-55jfn" Jan 14 01:24:05.431749 kubelet[2962]: I0114 01:24:05.431708 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h99lc\" (UniqueName: \"kubernetes.io/projected/6e870825-3e39-4a0f-8778-2b70ab8b554e-kube-api-access-h99lc\") pod \"coredns-668d6bf9bc-55jfn\" (UID: \"6e870825-3e39-4a0f-8778-2b70ab8b554e\") " pod="kube-system/coredns-668d6bf9bc-55jfn" Jan 14 01:24:05.432105 kubelet[2962]: I0114 01:24:05.431966 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mvdd\" (UniqueName: \"kubernetes.io/projected/3be20087-8a42-4c3d-9995-444849dfca4c-kube-api-access-5mvdd\") pod \"whisker-5d87db9b87-z6lnj\" (UID: 
\"3be20087-8a42-4c3d-9995-444849dfca4c\") " pod="calico-system/whisker-5d87db9b87-z6lnj" Jan 14 01:24:05.432255 kubelet[2962]: I0114 01:24:05.432229 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96jrl\" (UniqueName: \"kubernetes.io/projected/6af88ae8-33db-47f6-8963-68a47a1d9783-kube-api-access-96jrl\") pod \"calico-kube-controllers-85cd4c88bf-rxm94\" (UID: \"6af88ae8-33db-47f6-8963-68a47a1d9783\") " pod="calico-system/calico-kube-controllers-85cd4c88bf-rxm94" Jan 14 01:24:05.433243 kubelet[2962]: I0114 01:24:05.432867 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sthtl\" (UniqueName: \"kubernetes.io/projected/4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1-kube-api-access-sthtl\") pod \"calico-apiserver-84b8b5c58c-zdm6z\" (UID: \"4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1\") " pod="calico-apiserver/calico-apiserver-84b8b5c58c-zdm6z" Jan 14 01:24:05.433243 kubelet[2962]: I0114 01:24:05.432916 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/56181817-3b69-45cb-ad6c-2ef729a912ab-goldmane-key-pair\") pod \"goldmane-666569f655-7pn2c\" (UID: \"56181817-3b69-45cb-ad6c-2ef729a912ab\") " pod="calico-system/goldmane-666569f655-7pn2c" Jan 14 01:24:05.433243 kubelet[2962]: I0114 01:24:05.432956 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6af88ae8-33db-47f6-8963-68a47a1d9783-tigera-ca-bundle\") pod \"calico-kube-controllers-85cd4c88bf-rxm94\" (UID: \"6af88ae8-33db-47f6-8963-68a47a1d9783\") " pod="calico-system/calico-kube-controllers-85cd4c88bf-rxm94" Jan 14 01:24:05.433243 kubelet[2962]: I0114 01:24:05.433013 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1-calico-apiserver-certs\") pod \"calico-apiserver-84b8b5c58c-zdm6z\" (UID: \"4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1\") " pod="calico-apiserver/calico-apiserver-84b8b5c58c-zdm6z" Jan 14 01:24:05.433243 kubelet[2962]: I0114 01:24:05.433062 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56181817-3b69-45cb-ad6c-2ef729a912ab-config\") pod \"goldmane-666569f655-7pn2c\" (UID: \"56181817-3b69-45cb-ad6c-2ef729a912ab\") " pod="calico-system/goldmane-666569f655-7pn2c" Jan 14 01:24:05.433613 kubelet[2962]: I0114 01:24:05.433105 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56181817-3b69-45cb-ad6c-2ef729a912ab-goldmane-ca-bundle\") pod \"goldmane-666569f655-7pn2c\" (UID: \"56181817-3b69-45cb-ad6c-2ef729a912ab\") " pod="calico-system/goldmane-666569f655-7pn2c" Jan 14 01:24:05.433613 kubelet[2962]: I0114 01:24:05.433144 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk45d\" (UniqueName: \"kubernetes.io/projected/56181817-3b69-45cb-ad6c-2ef729a912ab-kube-api-access-rk45d\") pod \"goldmane-666569f655-7pn2c\" (UID: \"56181817-3b69-45cb-ad6c-2ef729a912ab\") " pod="calico-system/goldmane-666569f655-7pn2c" Jan 14 01:24:05.625301 containerd[1645]: time="2026-01-14T01:24:05.625149396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p8d54,Uid:26746a71-3f83-4a0a-9dda-605fbba4fc61,Namespace:kube-system,Attempt:0,}" Jan 14 01:24:05.659329 containerd[1645]: time="2026-01-14T01:24:05.659273102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b8b5c58c-hcr5n,Uid:611c348f-b209-4156-bf37-8d53c837267b,Namespace:calico-apiserver,Attempt:0,}" Jan 14 
01:24:05.677643 containerd[1645]: time="2026-01-14T01:24:05.677555595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d87db9b87-z6lnj,Uid:3be20087-8a42-4c3d-9995-444849dfca4c,Namespace:calico-system,Attempt:0,}" Jan 14 01:24:05.691622 containerd[1645]: time="2026-01-14T01:24:05.691552342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-55jfn,Uid:6e870825-3e39-4a0f-8778-2b70ab8b554e,Namespace:kube-system,Attempt:0,}" Jan 14 01:24:05.711273 containerd[1645]: time="2026-01-14T01:24:05.710055568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b8b5c58c-zdm6z,Uid:4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:24:05.721387 containerd[1645]: time="2026-01-14T01:24:05.721314308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cd4c88bf-rxm94,Uid:6af88ae8-33db-47f6-8963-68a47a1d9783,Namespace:calico-system,Attempt:0,}" Jan 14 01:24:05.727822 containerd[1645]: time="2026-01-14T01:24:05.727757567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7pn2c,Uid:56181817-3b69-45cb-ad6c-2ef729a912ab,Namespace:calico-system,Attempt:0,}" Jan 14 01:24:05.819173 systemd[1]: Created slice kubepods-besteffort-pod1ef4f5be_e30b_4c3d_8994_2cd80d70e4b2.slice - libcontainer container kubepods-besteffort-pod1ef4f5be_e30b_4c3d_8994_2cd80d70e4b2.slice. 
Jan 14 01:24:05.826698 containerd[1645]: time="2026-01-14T01:24:05.826429962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vqp7q,Uid:1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2,Namespace:calico-system,Attempt:0,}" Jan 14 01:24:06.047905 containerd[1645]: time="2026-01-14T01:24:06.047818817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 01:24:06.157090 containerd[1645]: time="2026-01-14T01:24:06.157021248Z" level=error msg="Failed to destroy network for sandbox \"3ffa806bd3d2a1ca4457d59955db90f7e7d06361006aca9d659b08ce0afdb6d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.158179 containerd[1645]: time="2026-01-14T01:24:06.157298594Z" level=error msg="Failed to destroy network for sandbox \"c01a063ecb3d348e7ede7a7bcd786647e6ac60d522288509fa62c00ec30712fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.162601 containerd[1645]: time="2026-01-14T01:24:06.162448003Z" level=error msg="Failed to destroy network for sandbox \"0d387fa9e00d131f3efe9bd357d8297e0b86793e9b19982ff42d135db2bedcac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.186628 containerd[1645]: time="2026-01-14T01:24:06.186243293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-55jfn,Uid:6e870825-3e39-4a0f-8778-2b70ab8b554e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ffa806bd3d2a1ca4457d59955db90f7e7d06361006aca9d659b08ce0afdb6d8\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.199290 kubelet[2962]: E0114 01:24:06.199217 2962 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ffa806bd3d2a1ca4457d59955db90f7e7d06361006aca9d659b08ce0afdb6d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.201407 kubelet[2962]: E0114 01:24:06.199357 2962 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ffa806bd3d2a1ca4457d59955db90f7e7d06361006aca9d659b08ce0afdb6d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-55jfn" Jan 14 01:24:06.201407 kubelet[2962]: E0114 01:24:06.199407 2962 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ffa806bd3d2a1ca4457d59955db90f7e7d06361006aca9d659b08ce0afdb6d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-55jfn" Jan 14 01:24:06.202206 kubelet[2962]: E0114 01:24:06.201978 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-55jfn_kube-system(6e870825-3e39-4a0f-8778-2b70ab8b554e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-55jfn_kube-system(6e870825-3e39-4a0f-8778-2b70ab8b554e)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"3ffa806bd3d2a1ca4457d59955db90f7e7d06361006aca9d659b08ce0afdb6d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-55jfn" podUID="6e870825-3e39-4a0f-8778-2b70ab8b554e" Jan 14 01:24:06.205672 containerd[1645]: time="2026-01-14T01:24:06.205621947Z" level=error msg="Failed to destroy network for sandbox \"768810379b60fa1d716b1114dddd9c61818df3e88d1a619b56298002f09b58a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.215209 systemd[1]: run-netns-cni\x2d20b9093a\x2d684e\x2d366f\x2d9e0f\x2dcc5db139a68d.mount: Deactivated successfully. Jan 14 01:24:06.217061 containerd[1645]: time="2026-01-14T01:24:06.216923164Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d87db9b87-z6lnj,Uid:3be20087-8a42-4c3d-9995-444849dfca4c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d387fa9e00d131f3efe9bd357d8297e0b86793e9b19982ff42d135db2bedcac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.217281 kubelet[2962]: E0114 01:24:06.217231 2962 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d387fa9e00d131f3efe9bd357d8297e0b86793e9b19982ff42d135db2bedcac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.217435 kubelet[2962]: E0114 01:24:06.217305 2962 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d387fa9e00d131f3efe9bd357d8297e0b86793e9b19982ff42d135db2bedcac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d87db9b87-z6lnj" Jan 14 01:24:06.217435 kubelet[2962]: E0114 01:24:06.217335 2962 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d387fa9e00d131f3efe9bd357d8297e0b86793e9b19982ff42d135db2bedcac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d87db9b87-z6lnj" Jan 14 01:24:06.217435 kubelet[2962]: E0114 01:24:06.217390 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d87db9b87-z6lnj_calico-system(3be20087-8a42-4c3d-9995-444849dfca4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5d87db9b87-z6lnj_calico-system(3be20087-8a42-4c3d-9995-444849dfca4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d387fa9e00d131f3efe9bd357d8297e0b86793e9b19982ff42d135db2bedcac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d87db9b87-z6lnj" podUID="3be20087-8a42-4c3d-9995-444849dfca4c" Jan 14 01:24:06.227484 containerd[1645]: time="2026-01-14T01:24:06.227383982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cd4c88bf-rxm94,Uid:6af88ae8-33db-47f6-8963-68a47a1d9783,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"c01a063ecb3d348e7ede7a7bcd786647e6ac60d522288509fa62c00ec30712fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.228117 kubelet[2962]: E0114 01:24:06.227910 2962 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c01a063ecb3d348e7ede7a7bcd786647e6ac60d522288509fa62c00ec30712fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.228117 kubelet[2962]: E0114 01:24:06.227976 2962 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c01a063ecb3d348e7ede7a7bcd786647e6ac60d522288509fa62c00ec30712fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85cd4c88bf-rxm94" Jan 14 01:24:06.228117 kubelet[2962]: E0114 01:24:06.228054 2962 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c01a063ecb3d348e7ede7a7bcd786647e6ac60d522288509fa62c00ec30712fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85cd4c88bf-rxm94" Jan 14 01:24:06.230163 kubelet[2962]: E0114 01:24:06.228127 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-85cd4c88bf-rxm94_calico-system(6af88ae8-33db-47f6-8963-68a47a1d9783)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85cd4c88bf-rxm94_calico-system(6af88ae8-33db-47f6-8963-68a47a1d9783)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c01a063ecb3d348e7ede7a7bcd786647e6ac60d522288509fa62c00ec30712fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85cd4c88bf-rxm94" podUID="6af88ae8-33db-47f6-8963-68a47a1d9783" Jan 14 01:24:06.241027 containerd[1645]: time="2026-01-14T01:24:06.240860555Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p8d54,Uid:26746a71-3f83-4a0a-9dda-605fbba4fc61,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"768810379b60fa1d716b1114dddd9c61818df3e88d1a619b56298002f09b58a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.241637 kubelet[2962]: E0114 01:24:06.241574 2962 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"768810379b60fa1d716b1114dddd9c61818df3e88d1a619b56298002f09b58a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.241827 kubelet[2962]: E0114 01:24:06.241667 2962 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"768810379b60fa1d716b1114dddd9c61818df3e88d1a619b56298002f09b58a9\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p8d54" Jan 14 01:24:06.241827 kubelet[2962]: E0114 01:24:06.241699 2962 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"768810379b60fa1d716b1114dddd9c61818df3e88d1a619b56298002f09b58a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p8d54" Jan 14 01:24:06.242435 kubelet[2962]: E0114 01:24:06.241772 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p8d54_kube-system(26746a71-3f83-4a0a-9dda-605fbba4fc61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p8d54_kube-system(26746a71-3f83-4a0a-9dda-605fbba4fc61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"768810379b60fa1d716b1114dddd9c61818df3e88d1a619b56298002f09b58a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p8d54" podUID="26746a71-3f83-4a0a-9dda-605fbba4fc61" Jan 14 01:24:06.249954 containerd[1645]: time="2026-01-14T01:24:06.249916701Z" level=error msg="Failed to destroy network for sandbox \"6ebaab106dfb8780e6602b4d3914a2b773e2a9bce3de021deab2a95b9495cbbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.256519 containerd[1645]: time="2026-01-14T01:24:06.256465230Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b8b5c58c-zdm6z,Uid:4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ebaab106dfb8780e6602b4d3914a2b773e2a9bce3de021deab2a95b9495cbbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.258809 systemd[1]: run-netns-cni\x2d1baf67dc\x2de8d6\x2dcf56\x2d6918\x2de0a3d3bb5915.mount: Deactivated successfully. Jan 14 01:24:06.260420 kubelet[2962]: E0114 01:24:06.260064 2962 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ebaab106dfb8780e6602b4d3914a2b773e2a9bce3de021deab2a95b9495cbbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.260420 kubelet[2962]: E0114 01:24:06.260150 2962 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ebaab106dfb8780e6602b4d3914a2b773e2a9bce3de021deab2a95b9495cbbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84b8b5c58c-zdm6z" Jan 14 01:24:06.260420 kubelet[2962]: E0114 01:24:06.260183 2962 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ebaab106dfb8780e6602b4d3914a2b773e2a9bce3de021deab2a95b9495cbbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84b8b5c58c-zdm6z" Jan 14 01:24:06.260626 kubelet[2962]: E0114 01:24:06.260237 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84b8b5c58c-zdm6z_calico-apiserver(4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84b8b5c58c-zdm6z_calico-apiserver(4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ebaab106dfb8780e6602b4d3914a2b773e2a9bce3de021deab2a95b9495cbbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-zdm6z" podUID="4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1" Jan 14 01:24:06.273987 containerd[1645]: time="2026-01-14T01:24:06.273924639Z" level=error msg="Failed to destroy network for sandbox \"40b9f2c16c999e48fe09f53cd963fddedf3623ae2a24a36757d3de7d8af42241\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.279310 containerd[1645]: time="2026-01-14T01:24:06.273995200Z" level=error msg="Failed to destroy network for sandbox \"17248fcbc4fb7188cc701becf94596b194a6d75b96c7edfbf1a23cf734f2ef56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.280179 systemd[1]: run-netns-cni\x2d7f9dbaba\x2dd2a4\x2da51e\x2dc026\x2d8eef954c032c.mount: Deactivated successfully. 
Jan 14 01:24:06.284853 containerd[1645]: time="2026-01-14T01:24:06.284662761Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b8b5c58c-hcr5n,Uid:611c348f-b209-4156-bf37-8d53c837267b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40b9f2c16c999e48fe09f53cd963fddedf3623ae2a24a36757d3de7d8af42241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.286008 kubelet[2962]: E0114 01:24:06.285849 2962 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40b9f2c16c999e48fe09f53cd963fddedf3623ae2a24a36757d3de7d8af42241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.286441 kubelet[2962]: E0114 01:24:06.285965 2962 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40b9f2c16c999e48fe09f53cd963fddedf3623ae2a24a36757d3de7d8af42241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84b8b5c58c-hcr5n" Jan 14 01:24:06.286441 kubelet[2962]: E0114 01:24:06.286385 2962 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40b9f2c16c999e48fe09f53cd963fddedf3623ae2a24a36757d3de7d8af42241\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-84b8b5c58c-hcr5n" Jan 14 01:24:06.287101 kubelet[2962]: E0114 01:24:06.286738 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84b8b5c58c-hcr5n_calico-apiserver(611c348f-b209-4156-bf37-8d53c837267b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84b8b5c58c-hcr5n_calico-apiserver(611c348f-b209-4156-bf37-8d53c837267b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40b9f2c16c999e48fe09f53cd963fddedf3623ae2a24a36757d3de7d8af42241\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-hcr5n" podUID="611c348f-b209-4156-bf37-8d53c837267b" Jan 14 01:24:06.290676 systemd[1]: run-netns-cni\x2d86d90e69\x2d5939\x2d59fa\x2d5d08\x2d29e0766789e9.mount: Deactivated successfully. 
Jan 14 01:24:06.320844 containerd[1645]: time="2026-01-14T01:24:06.319365538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7pn2c,Uid:56181817-3b69-45cb-ad6c-2ef729a912ab,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"17248fcbc4fb7188cc701becf94596b194a6d75b96c7edfbf1a23cf734f2ef56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.321101 kubelet[2962]: E0114 01:24:06.319671 2962 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17248fcbc4fb7188cc701becf94596b194a6d75b96c7edfbf1a23cf734f2ef56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.321101 kubelet[2962]: E0114 01:24:06.319744 2962 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17248fcbc4fb7188cc701becf94596b194a6d75b96c7edfbf1a23cf734f2ef56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-7pn2c" Jan 14 01:24:06.321101 kubelet[2962]: E0114 01:24:06.319774 2962 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17248fcbc4fb7188cc701becf94596b194a6d75b96c7edfbf1a23cf734f2ef56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-7pn2c" Jan 14 01:24:06.321379 kubelet[2962]: E0114 01:24:06.320877 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-7pn2c_calico-system(56181817-3b69-45cb-ad6c-2ef729a912ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-7pn2c_calico-system(56181817-3b69-45cb-ad6c-2ef729a912ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17248fcbc4fb7188cc701becf94596b194a6d75b96c7edfbf1a23cf734f2ef56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-7pn2c" podUID="56181817-3b69-45cb-ad6c-2ef729a912ab" Jan 14 01:24:06.345301 containerd[1645]: time="2026-01-14T01:24:06.327051903Z" level=error msg="Failed to destroy network for sandbox \"f022757975419ad003ddae113d5545c68db552ac3977f2a8eca916143a50aa33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.348858 containerd[1645]: time="2026-01-14T01:24:06.348718112Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vqp7q,Uid:1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f022757975419ad003ddae113d5545c68db552ac3977f2a8eca916143a50aa33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.349678 kubelet[2962]: E0114 01:24:06.349076 2962 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"f022757975419ad003ddae113d5545c68db552ac3977f2a8eca916143a50aa33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:06.349678 kubelet[2962]: E0114 01:24:06.349149 2962 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f022757975419ad003ddae113d5545c68db552ac3977f2a8eca916143a50aa33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vqp7q" Jan 14 01:24:06.349678 kubelet[2962]: E0114 01:24:06.349188 2962 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f022757975419ad003ddae113d5545c68db552ac3977f2a8eca916143a50aa33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vqp7q" Jan 14 01:24:06.349901 kubelet[2962]: E0114 01:24:06.349265 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vqp7q_calico-system(1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vqp7q_calico-system(1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f022757975419ad003ddae113d5545c68db552ac3977f2a8eca916143a50aa33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vqp7q" 
podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" Jan 14 01:24:07.170943 systemd[1]: run-netns-cni\x2d8bb9bbaf\x2d90ab\x2d5801\x2db325\x2d102a34714771.mount: Deactivated successfully. Jan 14 01:24:18.862227 containerd[1645]: time="2026-01-14T01:24:18.854120010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-55jfn,Uid:6e870825-3e39-4a0f-8778-2b70ab8b554e,Namespace:kube-system,Attempt:0,}" Jan 14 01:24:18.868887 containerd[1645]: time="2026-01-14T01:24:18.855179391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p8d54,Uid:26746a71-3f83-4a0a-9dda-605fbba4fc61,Namespace:kube-system,Attempt:0,}" Jan 14 01:24:19.125373 containerd[1645]: time="2026-01-14T01:24:19.125197789Z" level=error msg="Failed to destroy network for sandbox \"c1b4cac2ce4478e6f0a01b9d150c6367fa41a03a97e5a0e04b61932cac02d9fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:19.142210 systemd[1]: run-netns-cni\x2d11d40177\x2d8e67\x2d8a7c\x2d0b4e\x2d4b9dca433267.mount: Deactivated successfully. 
Jan 14 01:24:19.152841 containerd[1645]: time="2026-01-14T01:24:19.148453840Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-55jfn,Uid:6e870825-3e39-4a0f-8778-2b70ab8b554e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1b4cac2ce4478e6f0a01b9d150c6367fa41a03a97e5a0e04b61932cac02d9fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:19.152841 containerd[1645]: time="2026-01-14T01:24:19.148699119Z" level=error msg="Failed to destroy network for sandbox \"85f8f934f2eb2dad8fe044010eac17bfbbe3952fd2206bdea344088a3fa592b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:19.153091 kubelet[2962]: E0114 01:24:19.148937 2962 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1b4cac2ce4478e6f0a01b9d150c6367fa41a03a97e5a0e04b61932cac02d9fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:19.153091 kubelet[2962]: E0114 01:24:19.149092 2962 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1b4cac2ce4478e6f0a01b9d150c6367fa41a03a97e5a0e04b61932cac02d9fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-55jfn" Jan 14 01:24:19.153091 kubelet[2962]: E0114 01:24:19.149135 2962 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1b4cac2ce4478e6f0a01b9d150c6367fa41a03a97e5a0e04b61932cac02d9fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-55jfn" Jan 14 01:24:19.156209 kubelet[2962]: E0114 01:24:19.152332 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-55jfn_kube-system(6e870825-3e39-4a0f-8778-2b70ab8b554e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-55jfn_kube-system(6e870825-3e39-4a0f-8778-2b70ab8b554e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1b4cac2ce4478e6f0a01b9d150c6367fa41a03a97e5a0e04b61932cac02d9fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-55jfn" podUID="6e870825-3e39-4a0f-8778-2b70ab8b554e" Jan 14 01:24:19.155067 systemd[1]: run-netns-cni\x2d9b5571c8\x2dc0d6\x2d5841\x2d46aa\x2dcee4909f332b.mount: Deactivated successfully. 
Jan 14 01:24:19.158643 containerd[1645]: time="2026-01-14T01:24:19.158497311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p8d54,Uid:26746a71-3f83-4a0a-9dda-605fbba4fc61,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f8f934f2eb2dad8fe044010eac17bfbbe3952fd2206bdea344088a3fa592b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:19.159290 kubelet[2962]: E0114 01:24:19.159227 2962 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f8f934f2eb2dad8fe044010eac17bfbbe3952fd2206bdea344088a3fa592b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:19.159365 kubelet[2962]: E0114 01:24:19.159301 2962 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f8f934f2eb2dad8fe044010eac17bfbbe3952fd2206bdea344088a3fa592b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p8d54" Jan 14 01:24:19.159365 kubelet[2962]: E0114 01:24:19.159330 2962 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85f8f934f2eb2dad8fe044010eac17bfbbe3952fd2206bdea344088a3fa592b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-p8d54" Jan 14 01:24:19.159882 kubelet[2962]: E0114 01:24:19.159380 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p8d54_kube-system(26746a71-3f83-4a0a-9dda-605fbba4fc61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p8d54_kube-system(26746a71-3f83-4a0a-9dda-605fbba4fc61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85f8f934f2eb2dad8fe044010eac17bfbbe3952fd2206bdea344088a3fa592b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p8d54" podUID="26746a71-3f83-4a0a-9dda-605fbba4fc61" Jan 14 01:24:19.738472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount287216814.mount: Deactivated successfully. Jan 14 01:24:19.787435 containerd[1645]: time="2026-01-14T01:24:19.787135890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 14 01:24:19.800353 containerd[1645]: time="2026-01-14T01:24:19.800279640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vqp7q,Uid:1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2,Namespace:calico-system,Attempt:0,}" Jan 14 01:24:19.800733 containerd[1645]: time="2026-01-14T01:24:19.800693275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b8b5c58c-hcr5n,Uid:611c348f-b209-4156-bf37-8d53c837267b,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:24:19.803150 containerd[1645]: time="2026-01-14T01:24:19.803033973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:19.804111 containerd[1645]: time="2026-01-14T01:24:19.804079162Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 13.755369795s" Jan 14 01:24:19.804257 containerd[1645]: time="2026-01-14T01:24:19.804229042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 01:24:19.805183 containerd[1645]: time="2026-01-14T01:24:19.804769647Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:19.807044 containerd[1645]: time="2026-01-14T01:24:19.807010311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:24:19.943501 containerd[1645]: time="2026-01-14T01:24:19.943411197Z" level=info msg="CreateContainer within sandbox \"0cf8103e732ca07eddd795de4e0fc806c87cefc92204f475383e5eeef85de621\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 01:24:19.961831 containerd[1645]: time="2026-01-14T01:24:19.961220837Z" level=error msg="Failed to destroy network for sandbox \"3b52d8c040b8330beabaf751d20e92b7544e33cd32c5d0b462adf2e4763c3837\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:19.964916 systemd[1]: run-netns-cni\x2d2ec2828b\x2d7599\x2d76b5\x2d5b9f\x2deb9eac057807.mount: Deactivated successfully. 
Jan 14 01:24:19.988241 containerd[1645]: time="2026-01-14T01:24:19.988136548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b8b5c58c-hcr5n,Uid:611c348f-b209-4156-bf37-8d53c837267b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b52d8c040b8330beabaf751d20e92b7544e33cd32c5d0b462adf2e4763c3837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:19.989049 kubelet[2962]: E0114 01:24:19.988904 2962 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b52d8c040b8330beabaf751d20e92b7544e33cd32c5d0b462adf2e4763c3837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:19.989243 kubelet[2962]: E0114 01:24:19.989208 2962 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b52d8c040b8330beabaf751d20e92b7544e33cd32c5d0b462adf2e4763c3837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84b8b5c58c-hcr5n" Jan 14 01:24:19.989403 kubelet[2962]: E0114 01:24:19.989353 2962 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b52d8c040b8330beabaf751d20e92b7544e33cd32c5d0b462adf2e4763c3837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-84b8b5c58c-hcr5n" Jan 14 01:24:19.989617 kubelet[2962]: E0114 01:24:19.989577 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84b8b5c58c-hcr5n_calico-apiserver(611c348f-b209-4156-bf37-8d53c837267b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84b8b5c58c-hcr5n_calico-apiserver(611c348f-b209-4156-bf37-8d53c837267b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b52d8c040b8330beabaf751d20e92b7544e33cd32c5d0b462adf2e4763c3837\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-hcr5n" podUID="611c348f-b209-4156-bf37-8d53c837267b" Jan 14 01:24:20.004382 containerd[1645]: time="2026-01-14T01:24:20.004304518Z" level=error msg="Failed to destroy network for sandbox \"9a3d57a8bb83ab3454e3b5af2ca06c6b7a25a76c3656a910a0ba1324ac2c6dc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:20.007622 systemd[1]: run-netns-cni\x2ddb5fc1d5\x2d1d4e\x2d85ec\x2db38b\x2dfa6f534adbab.mount: Deactivated successfully. 
Jan 14 01:24:20.018612 containerd[1645]: time="2026-01-14T01:24:20.018497927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vqp7q,Uid:1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a3d57a8bb83ab3454e3b5af2ca06c6b7a25a76c3656a910a0ba1324ac2c6dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:20.019763 kubelet[2962]: E0114 01:24:20.019167 2962 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a3d57a8bb83ab3454e3b5af2ca06c6b7a25a76c3656a910a0ba1324ac2c6dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:20.019763 kubelet[2962]: E0114 01:24:20.019264 2962 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a3d57a8bb83ab3454e3b5af2ca06c6b7a25a76c3656a910a0ba1324ac2c6dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vqp7q" Jan 14 01:24:20.019763 kubelet[2962]: E0114 01:24:20.019298 2962 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a3d57a8bb83ab3454e3b5af2ca06c6b7a25a76c3656a910a0ba1324ac2c6dc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vqp7q" 
Jan 14 01:24:20.020134 kubelet[2962]: E0114 01:24:20.019406 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vqp7q_calico-system(1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vqp7q_calico-system(1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a3d57a8bb83ab3454e3b5af2ca06c6b7a25a76c3656a910a0ba1324ac2c6dc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" Jan 14 01:24:20.043066 containerd[1645]: time="2026-01-14T01:24:20.043022013Z" level=info msg="Container 7596bb4a59d3b71bb408c89334779caa5ec4ee69d25868088f3f34e17524f65b: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:24:20.057825 containerd[1645]: time="2026-01-14T01:24:20.057120301Z" level=info msg="CreateContainer within sandbox \"0cf8103e732ca07eddd795de4e0fc806c87cefc92204f475383e5eeef85de621\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7596bb4a59d3b71bb408c89334779caa5ec4ee69d25868088f3f34e17524f65b\"" Jan 14 01:24:20.064817 containerd[1645]: time="2026-01-14T01:24:20.063261093Z" level=info msg="StartContainer for \"7596bb4a59d3b71bb408c89334779caa5ec4ee69d25868088f3f34e17524f65b\"" Jan 14 01:24:20.068634 containerd[1645]: time="2026-01-14T01:24:20.068581958Z" level=info msg="connecting to shim 7596bb4a59d3b71bb408c89334779caa5ec4ee69d25868088f3f34e17524f65b" address="unix:///run/containerd/s/491f016c9b97042d45b65a6466ca6a19927257dfca953cb1aeb5137ce6ecd354" protocol=ttrpc version=3 Jan 14 01:24:20.215202 systemd[1]: Started cri-containerd-7596bb4a59d3b71bb408c89334779caa5ec4ee69d25868088f3f34e17524f65b.scope - libcontainer container 
7596bb4a59d3b71bb408c89334779caa5ec4ee69d25868088f3f34e17524f65b. Jan 14 01:24:20.325000 audit: BPF prog-id=176 op=LOAD Jan 14 01:24:20.334513 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 14 01:24:20.335103 kernel: audit: type=1334 audit(1768353860.325:576): prog-id=176 op=LOAD Jan 14 01:24:20.325000 audit[4042]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3452 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:20.343891 kernel: audit: type=1300 audit(1768353860.325:576): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3452 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:20.344054 kernel: audit: type=1327 audit(1768353860.325:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735393662623461353964336237316262343038633839333334373739 Jan 14 01:24:20.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735393662623461353964336237316262343038633839333334373739 Jan 14 01:24:20.325000 audit: BPF prog-id=177 op=LOAD Jan 14 01:24:20.349343 kernel: audit: type=1334 audit(1768353860.325:577): prog-id=177 op=LOAD Jan 14 01:24:20.325000 audit[4042]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3452 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:20.359477 kernel: audit: type=1300 audit(1768353860.325:577): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3452 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:20.359578 kernel: audit: type=1327 audit(1768353860.325:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735393662623461353964336237316262343038633839333334373739 Jan 14 01:24:20.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735393662623461353964336237316262343038633839333334373739 Jan 14 01:24:20.325000 audit: BPF prog-id=177 op=UNLOAD Jan 14 01:24:20.370818 kernel: audit: type=1334 audit(1768353860.325:578): prog-id=177 op=UNLOAD Jan 14 01:24:20.325000 audit[4042]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3452 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:20.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735393662623461353964336237316262343038633839333334373739 Jan 14 01:24:20.378997 kernel: audit: type=1300 audit(1768353860.325:578): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 
a2=0 a3=0 items=0 ppid=3452 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:20.379078 kernel: audit: type=1327 audit(1768353860.325:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735393662623461353964336237316262343038633839333334373739 Jan 14 01:24:20.383842 kernel: audit: type=1334 audit(1768353860.325:579): prog-id=176 op=UNLOAD Jan 14 01:24:20.325000 audit: BPF prog-id=176 op=UNLOAD Jan 14 01:24:20.325000 audit[4042]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3452 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:20.325000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735393662623461353964336237316262343038633839333334373739 Jan 14 01:24:20.325000 audit: BPF prog-id=178 op=LOAD Jan 14 01:24:20.325000 audit[4042]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3452 pid=4042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:20.325000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735393662623461353964336237316262343038633839333334373739 Jan 14 01:24:20.409612 containerd[1645]: time="2026-01-14T01:24:20.409531906Z" level=info msg="StartContainer for \"7596bb4a59d3b71bb408c89334779caa5ec4ee69d25868088f3f34e17524f65b\" returns successfully" Jan 14 01:24:20.795230 containerd[1645]: time="2026-01-14T01:24:20.795149528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b8b5c58c-zdm6z,Uid:4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:24:20.796097 containerd[1645]: time="2026-01-14T01:24:20.795472187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7pn2c,Uid:56181817-3b69-45cb-ad6c-2ef729a912ab,Namespace:calico-system,Attempt:0,}" Jan 14 01:24:20.800944 containerd[1645]: time="2026-01-14T01:24:20.800876805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cd4c88bf-rxm94,Uid:6af88ae8-33db-47f6-8963-68a47a1d9783,Namespace:calico-system,Attempt:0,}" Jan 14 01:24:20.802742 containerd[1645]: time="2026-01-14T01:24:20.802486648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d87db9b87-z6lnj,Uid:3be20087-8a42-4c3d-9995-444849dfca4c,Namespace:calico-system,Attempt:0,}" Jan 14 01:24:20.893409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3225895548.mount: Deactivated successfully. Jan 14 01:24:21.063882 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 01:24:21.064325 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 14 01:24:21.173252 containerd[1645]: time="2026-01-14T01:24:21.173094675Z" level=error msg="Failed to destroy network for sandbox \"58c8957baa3aa16c6b675f24c519024dd2f1a4336de60b4cb6d7cb5f5af65852\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:21.178718 systemd[1]: run-netns-cni\x2d228f5a0b\x2d7a4e\x2d1a02\x2d66b2\x2d34872fbc2777.mount: Deactivated successfully. Jan 14 01:24:21.186847 containerd[1645]: time="2026-01-14T01:24:21.185850659Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7pn2c,Uid:56181817-3b69-45cb-ad6c-2ef729a912ab,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"58c8957baa3aa16c6b675f24c519024dd2f1a4336de60b4cb6d7cb5f5af65852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:21.199946 containerd[1645]: time="2026-01-14T01:24:21.198694090Z" level=error msg="Failed to destroy network for sandbox \"94e731b8d6332ed1e693809a4ad0ea6cac12989079a07f799071d9f5557b070c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:21.206271 systemd[1]: run-netns-cni\x2d3d4b523f\x2da04d\x2dd450\x2da079\x2de9e25db5c18c.mount: Deactivated successfully. 
Jan 14 01:24:21.217988 containerd[1645]: time="2026-01-14T01:24:21.217889466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cd4c88bf-rxm94,Uid:6af88ae8-33db-47f6-8963-68a47a1d9783,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e731b8d6332ed1e693809a4ad0ea6cac12989079a07f799071d9f5557b070c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:21.218229 kubelet[2962]: E0114 01:24:21.217916 2962 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58c8957baa3aa16c6b675f24c519024dd2f1a4336de60b4cb6d7cb5f5af65852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:21.218229 kubelet[2962]: E0114 01:24:21.218078 2962 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58c8957baa3aa16c6b675f24c519024dd2f1a4336de60b4cb6d7cb5f5af65852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-7pn2c" Jan 14 01:24:21.219842 kubelet[2962]: E0114 01:24:21.218124 2962 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58c8957baa3aa16c6b675f24c519024dd2f1a4336de60b4cb6d7cb5f5af65852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-7pn2c" Jan 14 01:24:21.222685 kubelet[2962]: E0114 01:24:21.221934 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-7pn2c_calico-system(56181817-3b69-45cb-ad6c-2ef729a912ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-7pn2c_calico-system(56181817-3b69-45cb-ad6c-2ef729a912ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58c8957baa3aa16c6b675f24c519024dd2f1a4336de60b4cb6d7cb5f5af65852\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-7pn2c" podUID="56181817-3b69-45cb-ad6c-2ef729a912ab" Jan 14 01:24:21.233473 kubelet[2962]: E0114 01:24:21.231878 2962 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e731b8d6332ed1e693809a4ad0ea6cac12989079a07f799071d9f5557b070c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:21.233473 kubelet[2962]: E0114 01:24:21.232325 2962 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e731b8d6332ed1e693809a4ad0ea6cac12989079a07f799071d9f5557b070c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85cd4c88bf-rxm94" Jan 14 01:24:21.233473 kubelet[2962]: E0114 01:24:21.232368 2962 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"94e731b8d6332ed1e693809a4ad0ea6cac12989079a07f799071d9f5557b070c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85cd4c88bf-rxm94" Jan 14 01:24:21.233711 kubelet[2962]: E0114 01:24:21.232445 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85cd4c88bf-rxm94_calico-system(6af88ae8-33db-47f6-8963-68a47a1d9783)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85cd4c88bf-rxm94_calico-system(6af88ae8-33db-47f6-8963-68a47a1d9783)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94e731b8d6332ed1e693809a4ad0ea6cac12989079a07f799071d9f5557b070c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85cd4c88bf-rxm94" podUID="6af88ae8-33db-47f6-8963-68a47a1d9783" Jan 14 01:24:21.366759 containerd[1645]: time="2026-01-14T01:24:21.366025260Z" level=error msg="Failed to destroy network for sandbox \"7823fae3d0cf119f6a1973d022c80873bcc6220ff4fa24eee351174ef1dfdcb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:21.370463 containerd[1645]: time="2026-01-14T01:24:21.370418933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d87db9b87-z6lnj,Uid:3be20087-8a42-4c3d-9995-444849dfca4c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7823fae3d0cf119f6a1973d022c80873bcc6220ff4fa24eee351174ef1dfdcb9\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:21.371768 systemd[1]: run-netns-cni\x2d0d518280\x2d530a\x2d0d61\x2dbdd6\x2d970199596e43.mount: Deactivated successfully. Jan 14 01:24:21.374579 kubelet[2962]: E0114 01:24:21.374526 2962 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7823fae3d0cf119f6a1973d022c80873bcc6220ff4fa24eee351174ef1dfdcb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:21.374899 kubelet[2962]: E0114 01:24:21.374773 2962 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7823fae3d0cf119f6a1973d022c80873bcc6220ff4fa24eee351174ef1dfdcb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d87db9b87-z6lnj" Jan 14 01:24:21.375140 kubelet[2962]: E0114 01:24:21.375015 2962 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7823fae3d0cf119f6a1973d022c80873bcc6220ff4fa24eee351174ef1dfdcb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5d87db9b87-z6lnj" Jan 14 01:24:21.375549 kubelet[2962]: E0114 01:24:21.375116 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5d87db9b87-z6lnj_calico-system(3be20087-8a42-4c3d-9995-444849dfca4c)\" with CreatePodSandboxError: \"Failed to create sandbox for 
pod \\\"whisker-5d87db9b87-z6lnj_calico-system(3be20087-8a42-4c3d-9995-444849dfca4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7823fae3d0cf119f6a1973d022c80873bcc6220ff4fa24eee351174ef1dfdcb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5d87db9b87-z6lnj" podUID="3be20087-8a42-4c3d-9995-444849dfca4c" Jan 14 01:24:21.443591 kubelet[2962]: I0114 01:24:21.442777 2962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-52jzt" podStartSLOduration=2.564336219 podStartE2EDuration="33.414204908s" podCreationTimestamp="2026-01-14 01:23:48 +0000 UTC" firstStartedPulling="2026-01-14 01:23:48.958370642 +0000 UTC m=+23.498228624" lastFinishedPulling="2026-01-14 01:24:19.808239327 +0000 UTC m=+54.348097313" observedRunningTime="2026-01-14 01:24:21.374251928 +0000 UTC m=+55.914109930" watchObservedRunningTime="2026-01-14 01:24:21.414204908 +0000 UTC m=+55.954062904" Jan 14 01:24:21.452040 containerd[1645]: time="2026-01-14T01:24:21.451943280Z" level=error msg="Failed to destroy network for sandbox \"44e2c124b3ac61c841282e413c71a9391fad8214bb8c7c3ee45dbe3ff0fbdcec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:21.456964 systemd[1]: run-netns-cni\x2d8684bfe2\x2d4ff7\x2db085\x2d1d59\x2d63777709a383.mount: Deactivated successfully. 
Jan 14 01:24:21.461907 containerd[1645]: time="2026-01-14T01:24:21.461818119Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b8b5c58c-zdm6z,Uid:4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"44e2c124b3ac61c841282e413c71a9391fad8214bb8c7c3ee45dbe3ff0fbdcec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:21.462887 kubelet[2962]: E0114 01:24:21.462393 2962 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44e2c124b3ac61c841282e413c71a9391fad8214bb8c7c3ee45dbe3ff0fbdcec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:24:21.463208 kubelet[2962]: E0114 01:24:21.463064 2962 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44e2c124b3ac61c841282e413c71a9391fad8214bb8c7c3ee45dbe3ff0fbdcec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84b8b5c58c-zdm6z" Jan 14 01:24:21.463766 kubelet[2962]: E0114 01:24:21.463102 2962 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44e2c124b3ac61c841282e413c71a9391fad8214bb8c7c3ee45dbe3ff0fbdcec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-84b8b5c58c-zdm6z" Jan 14 01:24:21.463766 kubelet[2962]: E0114 01:24:21.463395 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84b8b5c58c-zdm6z_calico-apiserver(4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84b8b5c58c-zdm6z_calico-apiserver(4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44e2c124b3ac61c841282e413c71a9391fad8214bb8c7c3ee45dbe3ff0fbdcec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-zdm6z" podUID="4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1" Jan 14 01:24:22.313893 kubelet[2962]: I0114 01:24:22.313123 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3be20087-8a42-4c3d-9995-444849dfca4c-whisker-backend-key-pair\") pod \"3be20087-8a42-4c3d-9995-444849dfca4c\" (UID: \"3be20087-8a42-4c3d-9995-444849dfca4c\") " Jan 14 01:24:22.314716 kubelet[2962]: I0114 01:24:22.314405 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3be20087-8a42-4c3d-9995-444849dfca4c-whisker-ca-bundle\") pod \"3be20087-8a42-4c3d-9995-444849dfca4c\" (UID: \"3be20087-8a42-4c3d-9995-444849dfca4c\") " Jan 14 01:24:22.314716 kubelet[2962]: I0114 01:24:22.314445 2962 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mvdd\" (UniqueName: \"kubernetes.io/projected/3be20087-8a42-4c3d-9995-444849dfca4c-kube-api-access-5mvdd\") pod \"3be20087-8a42-4c3d-9995-444849dfca4c\" (UID: \"3be20087-8a42-4c3d-9995-444849dfca4c\") " Jan 14 
01:24:22.337037 systemd[1]: var-lib-kubelet-pods-3be20087\x2d8a42\x2d4c3d\x2d9995\x2d444849dfca4c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 01:24:22.344341 systemd[1]: var-lib-kubelet-pods-3be20087\x2d8a42\x2d4c3d\x2d9995\x2d444849dfca4c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5mvdd.mount: Deactivated successfully. Jan 14 01:24:22.345090 kubelet[2962]: I0114 01:24:22.343414 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3be20087-8a42-4c3d-9995-444849dfca4c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3be20087-8a42-4c3d-9995-444849dfca4c" (UID: "3be20087-8a42-4c3d-9995-444849dfca4c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 01:24:22.345090 kubelet[2962]: I0114 01:24:22.344510 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3be20087-8a42-4c3d-9995-444849dfca4c-kube-api-access-5mvdd" (OuterVolumeSpecName: "kube-api-access-5mvdd") pod "3be20087-8a42-4c3d-9995-444849dfca4c" (UID: "3be20087-8a42-4c3d-9995-444849dfca4c"). InnerVolumeSpecName "kube-api-access-5mvdd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 01:24:22.345399 kubelet[2962]: I0114 01:24:22.340459 2962 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3be20087-8a42-4c3d-9995-444849dfca4c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3be20087-8a42-4c3d-9995-444849dfca4c" (UID: "3be20087-8a42-4c3d-9995-444849dfca4c"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 01:24:22.415553 kubelet[2962]: I0114 01:24:22.415481 2962 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5mvdd\" (UniqueName: \"kubernetes.io/projected/3be20087-8a42-4c3d-9995-444849dfca4c-kube-api-access-5mvdd\") on node \"srv-aufav.gb1.brightbox.com\" DevicePath \"\"" Jan 14 01:24:22.415553 kubelet[2962]: I0114 01:24:22.415539 2962 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3be20087-8a42-4c3d-9995-444849dfca4c-whisker-backend-key-pair\") on node \"srv-aufav.gb1.brightbox.com\" DevicePath \"\"" Jan 14 01:24:22.415553 kubelet[2962]: I0114 01:24:22.415558 2962 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3be20087-8a42-4c3d-9995-444849dfca4c-whisker-ca-bundle\") on node \"srv-aufav.gb1.brightbox.com\" DevicePath \"\"" Jan 14 01:24:23.185530 systemd[1]: Removed slice kubepods-besteffort-pod3be20087_8a42_4c3d_9995_444849dfca4c.slice - libcontainer container kubepods-besteffort-pod3be20087_8a42_4c3d_9995_444849dfca4c.slice. Jan 14 01:24:23.369992 systemd[1]: Created slice kubepods-besteffort-podabdf21ba_b557_4e34_8da0_caa287f29fb9.slice - libcontainer container kubepods-besteffort-podabdf21ba_b557_4e34_8da0_caa287f29fb9.slice. 
Jan 14 01:24:23.425004 kubelet[2962]: I0114 01:24:23.424320 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/abdf21ba-b557-4e34-8da0-caa287f29fb9-whisker-ca-bundle\") pod \"whisker-6f9465cb95-lvhtr\" (UID: \"abdf21ba-b557-4e34-8da0-caa287f29fb9\") " pod="calico-system/whisker-6f9465cb95-lvhtr" Jan 14 01:24:23.427587 kubelet[2962]: I0114 01:24:23.426735 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/abdf21ba-b557-4e34-8da0-caa287f29fb9-whisker-backend-key-pair\") pod \"whisker-6f9465cb95-lvhtr\" (UID: \"abdf21ba-b557-4e34-8da0-caa287f29fb9\") " pod="calico-system/whisker-6f9465cb95-lvhtr" Jan 14 01:24:23.427587 kubelet[2962]: I0114 01:24:23.427289 2962 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw54k\" (UniqueName: \"kubernetes.io/projected/abdf21ba-b557-4e34-8da0-caa287f29fb9-kube-api-access-gw54k\") pod \"whisker-6f9465cb95-lvhtr\" (UID: \"abdf21ba-b557-4e34-8da0-caa287f29fb9\") " pod="calico-system/whisker-6f9465cb95-lvhtr" Jan 14 01:24:23.681740 containerd[1645]: time="2026-01-14T01:24:23.680948627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f9465cb95-lvhtr,Uid:abdf21ba-b557-4e34-8da0-caa287f29fb9,Namespace:calico-system,Attempt:0,}" Jan 14 01:24:23.809257 kubelet[2962]: I0114 01:24:23.808612 2962 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3be20087-8a42-4c3d-9995-444849dfca4c" path="/var/lib/kubelet/pods/3be20087-8a42-4c3d-9995-444849dfca4c/volumes" Jan 14 01:24:24.154256 systemd-networkd[1555]: cali6cc8ab340aa: Link UP Jan 14 01:24:24.158758 systemd-networkd[1555]: cali6cc8ab340aa: Gained carrier Jan 14 01:24:24.217834 containerd[1645]: 2026-01-14 01:24:23.752 [INFO][4374] cni-plugin/utils.go 100: File 
/var/lib/calico/mtu does not exist Jan 14 01:24:24.217834 containerd[1645]: 2026-01-14 01:24:23.793 [INFO][4374] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--aufav.gb1.brightbox.com-k8s-whisker--6f9465cb95--lvhtr-eth0 whisker-6f9465cb95- calico-system abdf21ba-b557-4e34-8da0-caa287f29fb9 947 0 2026-01-14 01:24:23 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6f9465cb95 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-aufav.gb1.brightbox.com whisker-6f9465cb95-lvhtr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6cc8ab340aa [] [] }} ContainerID="360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" Namespace="calico-system" Pod="whisker-6f9465cb95-lvhtr" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-whisker--6f9465cb95--lvhtr-" Jan 14 01:24:24.217834 containerd[1645]: 2026-01-14 01:24:23.793 [INFO][4374] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" Namespace="calico-system" Pod="whisker-6f9465cb95-lvhtr" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-whisker--6f9465cb95--lvhtr-eth0" Jan 14 01:24:24.217834 containerd[1645]: 2026-01-14 01:24:24.039 [INFO][4393] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" HandleID="k8s-pod-network.360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" Workload="srv--aufav.gb1.brightbox.com-k8s-whisker--6f9465cb95--lvhtr-eth0" Jan 14 01:24:24.221236 containerd[1645]: 2026-01-14 01:24:24.042 [INFO][4393] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" 
HandleID="k8s-pod-network.360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" Workload="srv--aufav.gb1.brightbox.com-k8s-whisker--6f9465cb95--lvhtr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102230), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-aufav.gb1.brightbox.com", "pod":"whisker-6f9465cb95-lvhtr", "timestamp":"2026-01-14 01:24:24.039845082 +0000 UTC"}, Hostname:"srv-aufav.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:24:24.221236 containerd[1645]: 2026-01-14 01:24:24.042 [INFO][4393] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:24:24.221236 containerd[1645]: 2026-01-14 01:24:24.043 [INFO][4393] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:24:24.221236 containerd[1645]: 2026-01-14 01:24:24.043 [INFO][4393] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-aufav.gb1.brightbox.com' Jan 14 01:24:24.221236 containerd[1645]: 2026-01-14 01:24:24.062 [INFO][4393] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:24.221236 containerd[1645]: 2026-01-14 01:24:24.078 [INFO][4393] ipam/ipam.go 394: Looking up existing affinities for host host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:24.221236 containerd[1645]: 2026-01-14 01:24:24.086 [INFO][4393] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:24.221236 containerd[1645]: 2026-01-14 01:24:24.090 [INFO][4393] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:24.221236 containerd[1645]: 2026-01-14 01:24:24.094 [INFO][4393] ipam/ipam.go 235: Affinity is 
confirmed and block has been loaded cidr=192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:24.228140 containerd[1645]: 2026-01-14 01:24:24.097 [INFO][4393] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:24.228140 containerd[1645]: 2026-01-14 01:24:24.103 [INFO][4393] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34 Jan 14 01:24:24.228140 containerd[1645]: 2026-01-14 01:24:24.112 [INFO][4393] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:24.228140 containerd[1645]: 2026-01-14 01:24:24.122 [INFO][4393] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.65/26] block=192.168.126.64/26 handle="k8s-pod-network.360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:24.228140 containerd[1645]: 2026-01-14 01:24:24.123 [INFO][4393] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.65/26] handle="k8s-pod-network.360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:24.228140 containerd[1645]: 2026-01-14 01:24:24.123 [INFO][4393] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:24:24.228140 containerd[1645]: 2026-01-14 01:24:24.123 [INFO][4393] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.65/26] IPv6=[] ContainerID="360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" HandleID="k8s-pod-network.360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" Workload="srv--aufav.gb1.brightbox.com-k8s-whisker--6f9465cb95--lvhtr-eth0" Jan 14 01:24:24.233349 containerd[1645]: 2026-01-14 01:24:24.127 [INFO][4374] cni-plugin/k8s.go 418: Populated endpoint ContainerID="360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" Namespace="calico-system" Pod="whisker-6f9465cb95-lvhtr" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-whisker--6f9465cb95--lvhtr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aufav.gb1.brightbox.com-k8s-whisker--6f9465cb95--lvhtr-eth0", GenerateName:"whisker-6f9465cb95-", Namespace:"calico-system", SelfLink:"", UID:"abdf21ba-b557-4e34-8da0-caa287f29fb9", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f9465cb95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aufav.gb1.brightbox.com", ContainerID:"", Pod:"whisker-6f9465cb95-lvhtr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.126.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali6cc8ab340aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:24:24.233349 containerd[1645]: 2026-01-14 01:24:24.128 [INFO][4374] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.65/32] ContainerID="360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" Namespace="calico-system" Pod="whisker-6f9465cb95-lvhtr" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-whisker--6f9465cb95--lvhtr-eth0" Jan 14 01:24:24.233560 containerd[1645]: 2026-01-14 01:24:24.128 [INFO][4374] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6cc8ab340aa ContainerID="360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" Namespace="calico-system" Pod="whisker-6f9465cb95-lvhtr" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-whisker--6f9465cb95--lvhtr-eth0" Jan 14 01:24:24.233560 containerd[1645]: 2026-01-14 01:24:24.160 [INFO][4374] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" Namespace="calico-system" Pod="whisker-6f9465cb95-lvhtr" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-whisker--6f9465cb95--lvhtr-eth0" Jan 14 01:24:24.235514 containerd[1645]: 2026-01-14 01:24:24.164 [INFO][4374] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" Namespace="calico-system" Pod="whisker-6f9465cb95-lvhtr" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-whisker--6f9465cb95--lvhtr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aufav.gb1.brightbox.com-k8s-whisker--6f9465cb95--lvhtr-eth0", GenerateName:"whisker-6f9465cb95-", Namespace:"calico-system", SelfLink:"", 
UID:"abdf21ba-b557-4e34-8da0-caa287f29fb9", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f9465cb95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aufav.gb1.brightbox.com", ContainerID:"360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34", Pod:"whisker-6f9465cb95-lvhtr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.126.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6cc8ab340aa", MAC:"c6:e5:1a:01:49:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:24:24.235645 containerd[1645]: 2026-01-14 01:24:24.193 [INFO][4374] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" Namespace="calico-system" Pod="whisker-6f9465cb95-lvhtr" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-whisker--6f9465cb95--lvhtr-eth0" Jan 14 01:24:24.315000 audit: BPF prog-id=179 op=LOAD Jan 14 01:24:24.315000 audit[4441]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd66f34370 a2=98 a3=1fffffffffffffff items=0 ppid=4283 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.315000 
audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:24:24.315000 audit: BPF prog-id=179 op=UNLOAD Jan 14 01:24:24.315000 audit[4441]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd66f34340 a3=0 items=0 ppid=4283 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.315000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:24:24.316000 audit: BPF prog-id=180 op=LOAD Jan 14 01:24:24.316000 audit[4441]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd66f34250 a2=94 a3=3 items=0 ppid=4283 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.316000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:24:24.317000 audit: BPF prog-id=180 op=UNLOAD Jan 14 01:24:24.317000 audit[4441]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd66f34250 a2=94 a3=3 items=0 ppid=4283 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.317000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:24:24.317000 audit: BPF prog-id=181 op=LOAD Jan 14 01:24:24.317000 audit[4441]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd66f34290 a2=94 a3=7ffd66f34470 items=0 ppid=4283 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.317000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:24:24.317000 audit: BPF prog-id=181 op=UNLOAD Jan 14 01:24:24.317000 audit[4441]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffd66f34290 a2=94 a3=7ffd66f34470 items=0 ppid=4283 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.317000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:24:24.320000 audit: BPF prog-id=182 op=LOAD Jan 14 01:24:24.320000 audit[4442]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd8363c050 a2=98 a3=3 items=0 ppid=4283 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.320000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:24:24.320000 audit: BPF prog-id=182 op=UNLOAD Jan 14 01:24:24.320000 audit[4442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd8363c020 a3=0 items=0 ppid=4283 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.320000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:24:24.321000 audit: BPF prog-id=183 op=LOAD Jan 14 01:24:24.321000 audit[4442]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd8363be40 a2=94 a3=54428f items=0 ppid=4283 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.321000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:24:24.321000 audit: BPF prog-id=183 op=UNLOAD Jan 14 01:24:24.321000 audit[4442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd8363be40 a2=94 a3=54428f items=0 ppid=4283 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.321000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:24:24.321000 audit: BPF prog-id=184 op=LOAD Jan 14 01:24:24.321000 audit[4442]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd8363be70 a2=94 a3=2 items=0 ppid=4283 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.321000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:24:24.322000 audit: BPF prog-id=184 op=UNLOAD Jan 14 01:24:24.322000 audit[4442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd8363be70 a2=0 a3=2 items=0 ppid=4283 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.322000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:24:24.422838 containerd[1645]: time="2026-01-14T01:24:24.420159204Z" level=info msg="connecting to shim 360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34" address="unix:///run/containerd/s/f8477f8f41dc090517903d07108d123b7c594b6730e45991d09fc9d48dafc583" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:24:24.508120 systemd[1]: Started cri-containerd-360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34.scope - libcontainer container 360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34. 
Jan 14 01:24:24.534000 audit: BPF prog-id=185 op=LOAD Jan 14 01:24:24.535000 audit: BPF prog-id=186 op=LOAD Jan 14 01:24:24.535000 audit[4461]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4451 pid=4461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306336316232666362616262366435333039373666306235386436 Jan 14 01:24:24.535000 audit: BPF prog-id=186 op=UNLOAD Jan 14 01:24:24.535000 audit[4461]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4451 pid=4461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306336316232666362616262366435333039373666306235386436 Jan 14 01:24:24.536000 audit: BPF prog-id=187 op=LOAD Jan 14 01:24:24.536000 audit[4461]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4451 pid=4461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.536000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306336316232666362616262366435333039373666306235386436 Jan 14 01:24:24.536000 audit: BPF prog-id=188 op=LOAD Jan 14 01:24:24.536000 audit[4461]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4451 pid=4461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306336316232666362616262366435333039373666306235386436 Jan 14 01:24:24.536000 audit: BPF prog-id=188 op=UNLOAD Jan 14 01:24:24.536000 audit[4461]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4451 pid=4461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306336316232666362616262366435333039373666306235386436 Jan 14 01:24:24.536000 audit: BPF prog-id=187 op=UNLOAD Jan 14 01:24:24.536000 audit[4461]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4451 pid=4461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:24:24.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306336316232666362616262366435333039373666306235386436 Jan 14 01:24:24.536000 audit: BPF prog-id=189 op=LOAD Jan 14 01:24:24.536000 audit[4461]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4451 pid=4461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336306336316232666362616262366435333039373666306235386436 Jan 14 01:24:24.626158 containerd[1645]: time="2026-01-14T01:24:24.626040421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f9465cb95-lvhtr,Uid:abdf21ba-b557-4e34-8da0-caa287f29fb9,Namespace:calico-system,Attempt:0,} returns sandbox id \"360c61b2fcbabb6d530976f0b58d6a21de0157c4daf3946f297e581c39a07c34\"" Jan 14 01:24:24.639769 containerd[1645]: time="2026-01-14T01:24:24.639517547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:24:24.658000 audit: BPF prog-id=190 op=LOAD Jan 14 01:24:24.658000 audit[4442]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd8363bd30 a2=94 a3=1 items=0 ppid=4283 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.658000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:24:24.659000 audit: BPF prog-id=190 op=UNLOAD 
Jan 14 01:24:24.659000 audit[4442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd8363bd30 a2=94 a3=1 items=0 ppid=4283 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.659000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:24:24.675000 audit: BPF prog-id=191 op=LOAD Jan 14 01:24:24.675000 audit[4442]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd8363bd20 a2=94 a3=4 items=0 ppid=4283 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:24:24.675000 audit: BPF prog-id=191 op=UNLOAD Jan 14 01:24:24.675000 audit[4442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd8363bd20 a2=0 a3=4 items=0 ppid=4283 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:24:24.676000 audit: BPF prog-id=192 op=LOAD Jan 14 01:24:24.676000 audit[4442]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd8363bb80 a2=94 a3=5 items=0 ppid=4283 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:24:24.676000 audit: BPF prog-id=192 op=UNLOAD Jan 14 01:24:24.676000 audit[4442]: SYSCALL arch=c000003e syscall=3 
success=yes exit=0 a0=6 a1=7ffd8363bb80 a2=0 a3=5 items=0 ppid=4283 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:24:24.676000 audit: BPF prog-id=193 op=LOAD Jan 14 01:24:24.676000 audit[4442]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd8363bda0 a2=94 a3=6 items=0 ppid=4283 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:24:24.676000 audit: BPF prog-id=193 op=UNLOAD Jan 14 01:24:24.676000 audit[4442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd8363bda0 a2=0 a3=6 items=0 ppid=4283 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:24:24.676000 audit: BPF prog-id=194 op=LOAD Jan 14 01:24:24.676000 audit[4442]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd8363b550 a2=94 a3=88 items=0 ppid=4283 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:24:24.677000 audit: BPF prog-id=195 op=LOAD Jan 14 01:24:24.677000 audit[4442]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd8363b3d0 a2=94 a3=2 items=0 ppid=4283 
pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.677000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:24:24.677000 audit: BPF prog-id=195 op=UNLOAD Jan 14 01:24:24.677000 audit[4442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd8363b400 a2=0 a3=7ffd8363b500 items=0 ppid=4283 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.677000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:24:24.677000 audit: BPF prog-id=194 op=UNLOAD Jan 14 01:24:24.677000 audit[4442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=81d1d10 a2=0 a3=b79b552464346e89 items=0 ppid=4283 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.677000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:24:24.695000 audit: BPF prog-id=196 op=LOAD Jan 14 01:24:24.695000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff912ed600 a2=98 a3=1999999999999999 items=0 ppid=4283 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.695000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:24:24.695000 
audit: BPF prog-id=196 op=UNLOAD Jan 14 01:24:24.695000 audit[4488]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff912ed5d0 a3=0 items=0 ppid=4283 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.695000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:24:24.695000 audit: BPF prog-id=197 op=LOAD Jan 14 01:24:24.695000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff912ed4e0 a2=94 a3=ffff items=0 ppid=4283 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.695000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:24:24.695000 audit: BPF prog-id=197 op=UNLOAD Jan 14 01:24:24.695000 audit[4488]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff912ed4e0 a2=94 a3=ffff items=0 ppid=4283 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.695000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:24:24.695000 audit: BPF prog-id=198 op=LOAD Jan 14 01:24:24.695000 audit[4488]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff912ed520 a2=94 a3=7fff912ed700 items=0 ppid=4283 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.695000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:24:24.695000 audit: BPF prog-id=198 op=UNLOAD Jan 14 01:24:24.695000 audit[4488]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff912ed520 a2=94 a3=7fff912ed700 items=0 ppid=4283 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.695000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:24:24.794710 systemd-networkd[1555]: vxlan.calico: Link UP Jan 14 01:24:24.794735 systemd-networkd[1555]: vxlan.calico: Gained carrier Jan 14 01:24:24.829000 audit: BPF prog-id=199 op=LOAD Jan 14 01:24:24.829000 audit[4513]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe5a22ced0 a2=98 a3=20 items=0 
ppid=4283 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.829000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:24:24.829000 audit: BPF prog-id=199 op=UNLOAD Jan 14 01:24:24.829000 audit[4513]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe5a22cea0 a3=0 items=0 ppid=4283 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.829000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:24:24.830000 audit: BPF prog-id=200 op=LOAD Jan 14 01:24:24.830000 audit[4513]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe5a22cce0 a2=94 a3=54428f items=0 ppid=4283 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.830000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:24:24.830000 audit: BPF prog-id=200 op=UNLOAD Jan 14 01:24:24.830000 audit[4513]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe5a22cce0 a2=94 a3=54428f items=0 ppid=4283 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.830000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:24:24.830000 audit: BPF prog-id=201 op=LOAD Jan 14 01:24:24.830000 audit[4513]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe5a22cd10 a2=94 a3=2 items=0 ppid=4283 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.830000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:24:24.831000 audit: BPF prog-id=201 op=UNLOAD Jan 14 01:24:24.831000 audit[4513]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffe5a22cd10 a2=0 a3=2 items=0 ppid=4283 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.831000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:24:24.831000 audit: BPF prog-id=202 op=LOAD Jan 14 01:24:24.831000 audit[4513]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe5a22cac0 a2=94 a3=4 items=0 ppid=4283 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.831000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:24:24.831000 audit: BPF prog-id=202 op=UNLOAD Jan 14 01:24:24.831000 audit[4513]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe5a22cac0 a2=94 a3=4 items=0 ppid=4283 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.831000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:24:24.831000 audit: BPF prog-id=203 op=LOAD Jan 14 01:24:24.831000 audit[4513]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe5a22cbc0 a2=94 a3=7ffe5a22cd40 items=0 ppid=4283 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.831000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:24:24.831000 audit: BPF prog-id=203 op=UNLOAD Jan 14 01:24:24.831000 audit[4513]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe5a22cbc0 a2=0 a3=7ffe5a22cd40 items=0 ppid=4283 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.831000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:24:24.839000 audit: BPF prog-id=204 op=LOAD Jan 14 01:24:24.839000 audit[4513]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe5a22c2f0 a2=94 a3=2 items=0 ppid=4283 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:24:24.839000 audit: BPF prog-id=204 op=UNLOAD Jan 14 01:24:24.839000 audit[4513]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe5a22c2f0 a2=0 a3=2 items=0 ppid=4283 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:24:24.839000 audit: BPF prog-id=205 op=LOAD Jan 14 01:24:24.839000 audit[4513]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe5a22c3f0 a2=94 a3=30 items=0 ppid=4283 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.839000 
audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:24:24.856000 audit: BPF prog-id=206 op=LOAD Jan 14 01:24:24.856000 audit[4520]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd56b0d240 a2=98 a3=0 items=0 ppid=4283 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.856000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:24:24.856000 audit: BPF prog-id=206 op=UNLOAD Jan 14 01:24:24.856000 audit[4520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd56b0d210 a3=0 items=0 ppid=4283 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.856000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:24:24.856000 audit: BPF prog-id=207 op=LOAD Jan 14 01:24:24.856000 audit[4520]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd56b0d030 a2=94 a3=54428f items=0 ppid=4283 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.856000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:24:24.856000 audit: BPF prog-id=207 op=UNLOAD Jan 14 01:24:24.856000 audit[4520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd56b0d030 a2=94 a3=54428f items=0 ppid=4283 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.856000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:24:24.856000 audit: BPF prog-id=208 op=LOAD Jan 14 01:24:24.856000 audit[4520]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd56b0d060 a2=94 a3=2 items=0 ppid=4283 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.856000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:24:24.856000 audit: BPF prog-id=208 op=UNLOAD Jan 14 01:24:24.856000 audit[4520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd56b0d060 a2=0 a3=2 items=0 ppid=4283 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:24.856000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:24:24.967484 containerd[1645]: time="2026-01-14T01:24:24.967265609Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:24:24.973301 containerd[1645]: time="2026-01-14T01:24:24.973094201Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:24:24.973530 containerd[1645]: time="2026-01-14T01:24:24.973144549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:24:24.976298 kubelet[2962]: E0114 01:24:24.976225 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:24:24.977100 kubelet[2962]: E0114 01:24:24.977017 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:24:24.990137 kubelet[2962]: E0114 01:24:24.990042 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:419b9865beaf4f708afcaeed39d95459,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gw54k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f9465cb95-lvhtr_calico-system(abdf21ba-b557-4e34-8da0-caa287f29fb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:24:24.993400 containerd[1645]: time="2026-01-14T01:24:24.993295398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:24:25.130000 audit: BPF prog-id=209 op=LOAD Jan 14 
01:24:25.130000 audit[4520]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd56b0cf20 a2=94 a3=1 items=0 ppid=4283 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:25.130000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:24:25.131000 audit: BPF prog-id=209 op=UNLOAD Jan 14 01:24:25.131000 audit[4520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd56b0cf20 a2=94 a3=1 items=0 ppid=4283 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:25.131000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:24:25.146000 audit: BPF prog-id=210 op=LOAD Jan 14 01:24:25.146000 audit[4520]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd56b0cf10 a2=94 a3=4 items=0 ppid=4283 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:25.146000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:24:25.147000 audit: BPF prog-id=210 op=UNLOAD Jan 14 01:24:25.147000 audit[4520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd56b0cf10 a2=0 a3=4 items=0 ppid=4283 pid=4520 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:25.147000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:24:25.147000 audit: BPF prog-id=211 op=LOAD Jan 14 01:24:25.147000 audit[4520]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd56b0cd70 a2=94 a3=5 items=0 ppid=4283 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:25.147000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:24:25.148000 audit: BPF prog-id=211 op=UNLOAD Jan 14 01:24:25.148000 audit[4520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd56b0cd70 a2=0 a3=5 items=0 ppid=4283 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:25.148000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:24:25.148000 audit: BPF prog-id=212 op=LOAD Jan 14 01:24:25.148000 audit[4520]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd56b0cf90 a2=94 a3=6 items=0 ppid=4283 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:24:25.148000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:24:25.148000 audit: BPF prog-id=212 op=UNLOAD Jan 14 01:24:25.148000 audit[4520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd56b0cf90 a2=0 a3=6 items=0 ppid=4283 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:25.148000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:24:25.149000 audit: BPF prog-id=213 op=LOAD Jan 14 01:24:25.149000 audit[4520]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd56b0c740 a2=94 a3=88 items=0 ppid=4283 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:25.149000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:24:25.150000 audit: BPF prog-id=214 op=LOAD Jan 14 01:24:25.150000 audit[4520]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd56b0c5c0 a2=94 a3=2 items=0 ppid=4283 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:25.150000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:24:25.150000 audit: BPF prog-id=214 op=UNLOAD Jan 14 01:24:25.150000 audit[4520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd56b0c5f0 a2=0 a3=7ffd56b0c6f0 items=0 ppid=4283 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:25.150000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:24:25.151000 audit: BPF prog-id=213 op=UNLOAD Jan 14 01:24:25.151000 audit[4520]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=148eed10 a2=0 a3=3b9d53e8c435049f items=0 ppid=4283 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:25.151000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:24:25.164000 audit: BPF prog-id=205 op=UNLOAD Jan 14 01:24:25.164000 audit[4283]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0008f8440 a2=0 a3=0 items=0 ppid=4268 pid=4283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:25.164000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 01:24:25.253000 audit[4552]: NETFILTER_CFG table=mangle:121 
family=2 entries=16 op=nft_register_chain pid=4552 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:24:25.253000 audit[4552]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fffbd6fb3a0 a2=0 a3=7fffbd6fb38c items=0 ppid=4283 pid=4552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:25.253000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:24:25.256000 audit[4554]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=4554 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:24:25.256000 audit[4554]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff921e8390 a2=0 a3=7fff921e837c items=0 ppid=4283 pid=4554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:25.256000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:24:25.268000 audit[4553]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4553 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:24:25.268000 audit[4553]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc69c14d20 a2=0 a3=7ffc69c14d0c items=0 ppid=4283 pid=4553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:25.268000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:24:25.275000 audit[4557]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4557 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:24:25.275000 audit[4557]: SYSCALL arch=c000003e syscall=46 success=yes exit=53116 a0=3 a1=7ffff4d13770 a2=0 a3=7ffff4d1375c items=0 ppid=4283 pid=4557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:25.275000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:24:25.307574 containerd[1645]: time="2026-01-14T01:24:25.307407779Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:24:25.310025 containerd[1645]: time="2026-01-14T01:24:25.309960173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:24:25.310749 containerd[1645]: time="2026-01-14T01:24:25.310541224Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:24:25.312323 kubelet[2962]: E0114 01:24:25.312071 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 
01:24:25.312323 kubelet[2962]: E0114 01:24:25.312144 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:24:25.314821 kubelet[2962]: E0114 01:24:25.313409 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gw54k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,
SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f9465cb95-lvhtr_calico-system(abdf21ba-b557-4e34-8da0-caa287f29fb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:24:25.315262 kubelet[2962]: E0114 01:24:25.315204 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9465cb95-lvhtr" podUID="abdf21ba-b557-4e34-8da0-caa287f29fb9" Jan 14 01:24:26.164507 systemd-networkd[1555]: cali6cc8ab340aa: Gained IPv6LL Jan 14 01:24:26.187876 kubelet[2962]: E0114 01:24:26.187227 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9465cb95-lvhtr" podUID="abdf21ba-b557-4e34-8da0-caa287f29fb9" Jan 14 01:24:26.265164 kernel: kauditd_printk_skb: 225 callbacks suppressed Jan 14 01:24:26.265434 kernel: audit: type=1325 audit(1768353866.259:655): table=filter:125 family=2 entries=20 op=nft_register_rule pid=4571 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:24:26.259000 audit[4571]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4571 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:24:26.259000 audit[4571]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdea17c9a0 a2=0 a3=7ffdea17c98c items=0 ppid=3108 pid=4571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:26.259000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:24:26.276086 kernel: audit: type=1300 audit(1768353866.259:655): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdea17c9a0 a2=0 a3=7ffdea17c98c items=0 ppid=3108 pid=4571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:26.276162 kernel: audit: type=1327 audit(1768353866.259:655): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:24:26.278000 audit[4571]: 
NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4571 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:24:26.278000 audit[4571]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdea17c9a0 a2=0 a3=0 items=0 ppid=3108 pid=4571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:26.284183 kernel: audit: type=1325 audit(1768353866.278:656): table=nat:126 family=2 entries=14 op=nft_register_rule pid=4571 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:24:26.284300 kernel: audit: type=1300 audit(1768353866.278:656): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdea17c9a0 a2=0 a3=0 items=0 ppid=3108 pid=4571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:26.278000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:24:26.289110 kernel: audit: type=1327 audit(1768353866.278:656): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:24:26.548058 systemd-networkd[1555]: vxlan.calico: Gained IPv6LL Jan 14 01:24:31.796885 containerd[1645]: time="2026-01-14T01:24:31.796273366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-55jfn,Uid:6e870825-3e39-4a0f-8778-2b70ab8b554e,Namespace:kube-system,Attempt:0,}" Jan 14 01:24:32.010263 systemd-networkd[1555]: calib45600ad409: Link UP Jan 14 01:24:32.012044 systemd-networkd[1555]: calib45600ad409: Gained carrier Jan 14 01:24:32.036645 containerd[1645]: 2026-01-14 01:24:31.887 [INFO][4580] cni-plugin/plugin.go 340: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--55jfn-eth0 coredns-668d6bf9bc- kube-system 6e870825-3e39-4a0f-8778-2b70ab8b554e 856 0 2026-01-14 01:23:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-aufav.gb1.brightbox.com coredns-668d6bf9bc-55jfn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib45600ad409 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" Namespace="kube-system" Pod="coredns-668d6bf9bc-55jfn" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--55jfn-" Jan 14 01:24:32.036645 containerd[1645]: 2026-01-14 01:24:31.887 [INFO][4580] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" Namespace="kube-system" Pod="coredns-668d6bf9bc-55jfn" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--55jfn-eth0" Jan 14 01:24:32.036645 containerd[1645]: 2026-01-14 01:24:31.946 [INFO][4592] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" HandleID="k8s-pod-network.19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" Workload="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--55jfn-eth0" Jan 14 01:24:32.037744 containerd[1645]: 2026-01-14 01:24:31.946 [INFO][4592] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" HandleID="k8s-pod-network.19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" Workload="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--55jfn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002cbbf0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-aufav.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-55jfn", "timestamp":"2026-01-14 01:24:31.946218956 +0000 UTC"}, Hostname:"srv-aufav.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:24:32.037744 containerd[1645]: 2026-01-14 01:24:31.947 [INFO][4592] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:24:32.037744 containerd[1645]: 2026-01-14 01:24:31.947 [INFO][4592] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:24:32.037744 containerd[1645]: 2026-01-14 01:24:31.947 [INFO][4592] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-aufav.gb1.brightbox.com' Jan 14 01:24:32.037744 containerd[1645]: 2026-01-14 01:24:31.961 [INFO][4592] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:32.037744 containerd[1645]: 2026-01-14 01:24:31.968 [INFO][4592] ipam/ipam.go 394: Looking up existing affinities for host host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:32.037744 containerd[1645]: 2026-01-14 01:24:31.975 [INFO][4592] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:32.037744 containerd[1645]: 2026-01-14 01:24:31.978 [INFO][4592] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:32.037744 containerd[1645]: 2026-01-14 01:24:31.982 [INFO][4592] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:32.038737 containerd[1645]: 2026-01-14 01:24:31.982 [INFO][4592] ipam/ipam.go 1219: Attempting to assign 1 
addresses from block block=192.168.126.64/26 handle="k8s-pod-network.19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:32.038737 containerd[1645]: 2026-01-14 01:24:31.985 [INFO][4592] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34 Jan 14 01:24:32.038737 containerd[1645]: 2026-01-14 01:24:31.991 [INFO][4592] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:32.038737 containerd[1645]: 2026-01-14 01:24:31.998 [INFO][4592] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.66/26] block=192.168.126.64/26 handle="k8s-pod-network.19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:32.038737 containerd[1645]: 2026-01-14 01:24:31.999 [INFO][4592] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.66/26] handle="k8s-pod-network.19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:32.038737 containerd[1645]: 2026-01-14 01:24:31.999 [INFO][4592] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:24:32.038737 containerd[1645]: 2026-01-14 01:24:31.999 [INFO][4592] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.66/26] IPv6=[] ContainerID="19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" HandleID="k8s-pod-network.19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" Workload="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--55jfn-eth0" Jan 14 01:24:32.040489 containerd[1645]: 2026-01-14 01:24:32.004 [INFO][4580] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" Namespace="kube-system" Pod="coredns-668d6bf9bc-55jfn" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--55jfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--55jfn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6e870825-3e39-4a0f-8778-2b70ab8b554e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aufav.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-55jfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calib45600ad409", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:24:32.040489 containerd[1645]: 2026-01-14 01:24:32.005 [INFO][4580] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.66/32] ContainerID="19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" Namespace="kube-system" Pod="coredns-668d6bf9bc-55jfn" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--55jfn-eth0" Jan 14 01:24:32.040489 containerd[1645]: 2026-01-14 01:24:32.005 [INFO][4580] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib45600ad409 ContainerID="19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" Namespace="kube-system" Pod="coredns-668d6bf9bc-55jfn" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--55jfn-eth0" Jan 14 01:24:32.040489 containerd[1645]: 2026-01-14 01:24:32.012 [INFO][4580] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" Namespace="kube-system" Pod="coredns-668d6bf9bc-55jfn" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--55jfn-eth0" Jan 14 01:24:32.040489 containerd[1645]: 2026-01-14 01:24:32.015 [INFO][4580] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" Namespace="kube-system" Pod="coredns-668d6bf9bc-55jfn" 
WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--55jfn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--55jfn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"6e870825-3e39-4a0f-8778-2b70ab8b554e", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aufav.gb1.brightbox.com", ContainerID:"19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34", Pod:"coredns-668d6bf9bc-55jfn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib45600ad409", MAC:"36:cd:70:ec:a5:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:24:32.040489 
containerd[1645]: 2026-01-14 01:24:32.029 [INFO][4580] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" Namespace="kube-system" Pod="coredns-668d6bf9bc-55jfn" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--55jfn-eth0" Jan 14 01:24:32.082127 containerd[1645]: time="2026-01-14T01:24:32.081972040Z" level=info msg="connecting to shim 19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34" address="unix:///run/containerd/s/57f99e2fd364b4e4692672d2d5db1e57a62049f21e7bf3856b8e55f24f008376" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:24:32.100000 audit[4629]: NETFILTER_CFG table=filter:127 family=2 entries=42 op=nft_register_chain pid=4629 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:24:32.108821 kernel: audit: type=1325 audit(1768353872.100:657): table=filter:127 family=2 entries=42 op=nft_register_chain pid=4629 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:24:32.116179 kernel: audit: type=1300 audit(1768353872.100:657): arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffc8daf84b0 a2=0 a3=7ffc8daf849c items=0 ppid=4283 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:32.100000 audit[4629]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffc8daf84b0 a2=0 a3=7ffc8daf849c items=0 ppid=4283 pid=4629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:32.100000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 
01:24:32.122886 kernel: audit: type=1327 audit(1768353872.100:657): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:24:32.150130 systemd[1]: Started cri-containerd-19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34.scope - libcontainer container 19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34. Jan 14 01:24:32.171000 audit: BPF prog-id=215 op=LOAD Jan 14 01:24:32.174856 kernel: audit: type=1334 audit(1768353872.171:658): prog-id=215 op=LOAD Jan 14 01:24:32.174000 audit: BPF prog-id=216 op=LOAD Jan 14 01:24:32.174000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4622 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:32.178816 kernel: audit: type=1334 audit(1768353872.174:659): prog-id=216 op=LOAD Jan 14 01:24:32.178946 kernel: audit: type=1300 audit(1768353872.174:659): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4622 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:32.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139613438353432376230343630356638396438396331626566626566 Jan 14 01:24:32.189964 kernel: audit: type=1327 audit(1768353872.174:659): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139613438353432376230343630356638396438396331626566626566 Jan 14 01:24:32.174000 audit: BPF prog-id=216 op=UNLOAD Jan 14 01:24:32.194300 kernel: audit: type=1334 audit(1768353872.174:660): prog-id=216 op=UNLOAD Jan 14 01:24:32.174000 audit[4633]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4622 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:32.196604 kernel: audit: type=1300 audit(1768353872.174:660): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4622 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:32.174000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139613438353432376230343630356638396438396331626566626566 Jan 14 01:24:32.202219 kernel: audit: type=1327 audit(1768353872.174:660): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139613438353432376230343630356638396438396331626566626566 Jan 14 01:24:32.175000 audit: BPF prog-id=217 op=LOAD Jan 14 01:24:32.175000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4622 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:32.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139613438353432376230343630356638396438396331626566626566 Jan 14 01:24:32.175000 audit: BPF prog-id=218 op=LOAD Jan 14 01:24:32.175000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4622 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:32.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139613438353432376230343630356638396438396331626566626566 Jan 14 01:24:32.175000 audit: BPF prog-id=218 op=UNLOAD Jan 14 01:24:32.175000 audit[4633]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4622 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:32.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139613438353432376230343630356638396438396331626566626566 Jan 14 01:24:32.175000 audit: BPF prog-id=217 op=UNLOAD Jan 14 01:24:32.175000 audit[4633]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4622 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:32.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139613438353432376230343630356638396438396331626566626566 Jan 14 01:24:32.175000 audit: BPF prog-id=219 op=LOAD Jan 14 01:24:32.175000 audit[4633]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4622 pid=4633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:32.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139613438353432376230343630356638396438396331626566626566 Jan 14 01:24:32.260233 containerd[1645]: time="2026-01-14T01:24:32.260173718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-55jfn,Uid:6e870825-3e39-4a0f-8778-2b70ab8b554e,Namespace:kube-system,Attempt:0,} returns sandbox id \"19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34\"" Jan 14 01:24:32.266907 containerd[1645]: time="2026-01-14T01:24:32.266826622Z" level=info msg="CreateContainer within sandbox \"19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:24:32.289368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3222938463.mount: Deactivated successfully. 
Jan 14 01:24:32.291215 containerd[1645]: time="2026-01-14T01:24:32.289450803Z" level=info msg="Container 8574cfd26b2aa6cf480a568ab3515cf87cd36927559acd8fa1778e2963010739: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:24:32.298572 containerd[1645]: time="2026-01-14T01:24:32.298464389Z" level=info msg="CreateContainer within sandbox \"19a485427b04605f89d89c1befbef2be84d755e7b6ede264baf6db0ff80e5e34\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8574cfd26b2aa6cf480a568ab3515cf87cd36927559acd8fa1778e2963010739\"" Jan 14 01:24:32.300363 containerd[1645]: time="2026-01-14T01:24:32.300223174Z" level=info msg="StartContainer for \"8574cfd26b2aa6cf480a568ab3515cf87cd36927559acd8fa1778e2963010739\"" Jan 14 01:24:32.302625 containerd[1645]: time="2026-01-14T01:24:32.302591720Z" level=info msg="connecting to shim 8574cfd26b2aa6cf480a568ab3515cf87cd36927559acd8fa1778e2963010739" address="unix:///run/containerd/s/57f99e2fd364b4e4692672d2d5db1e57a62049f21e7bf3856b8e55f24f008376" protocol=ttrpc version=3 Jan 14 01:24:32.333118 systemd[1]: Started cri-containerd-8574cfd26b2aa6cf480a568ab3515cf87cd36927559acd8fa1778e2963010739.scope - libcontainer container 8574cfd26b2aa6cf480a568ab3515cf87cd36927559acd8fa1778e2963010739. 
Jan 14 01:24:32.359000 audit: BPF prog-id=220 op=LOAD Jan 14 01:24:32.360000 audit: BPF prog-id=221 op=LOAD Jan 14 01:24:32.360000 audit[4661]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0238 a2=98 a3=0 items=0 ppid=4622 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:32.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835373463666432366232616136636634383061353638616233353135 Jan 14 01:24:32.360000 audit: BPF prog-id=221 op=UNLOAD Jan 14 01:24:32.360000 audit[4661]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4622 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:32.360000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835373463666432366232616136636634383061353638616233353135 Jan 14 01:24:32.361000 audit: BPF prog-id=222 op=LOAD Jan 14 01:24:32.361000 audit[4661]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b0488 a2=98 a3=0 items=0 ppid=4622 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:32.361000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835373463666432366232616136636634383061353638616233353135 Jan 14 01:24:32.361000 audit: BPF prog-id=223 op=LOAD Jan 14 01:24:32.361000 audit[4661]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001b0218 a2=98 a3=0 items=0 ppid=4622 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:32.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835373463666432366232616136636634383061353638616233353135 Jan 14 01:24:32.361000 audit: BPF prog-id=223 op=UNLOAD Jan 14 01:24:32.361000 audit[4661]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4622 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:32.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835373463666432366232616136636634383061353638616233353135 Jan 14 01:24:32.361000 audit: BPF prog-id=222 op=UNLOAD Jan 14 01:24:32.361000 audit[4661]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4622 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:24:32.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835373463666432366232616136636634383061353638616233353135 Jan 14 01:24:32.362000 audit: BPF prog-id=224 op=LOAD Jan 14 01:24:32.362000 audit[4661]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001b06e8 a2=98 a3=0 items=0 ppid=4622 pid=4661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:32.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835373463666432366232616136636634383061353638616233353135 Jan 14 01:24:32.392629 containerd[1645]: time="2026-01-14T01:24:32.392568598Z" level=info msg="StartContainer for \"8574cfd26b2aa6cf480a568ab3515cf87cd36927559acd8fa1778e2963010739\" returns successfully" Jan 14 01:24:32.795212 containerd[1645]: time="2026-01-14T01:24:32.795152159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vqp7q,Uid:1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2,Namespace:calico-system,Attempt:0,}" Jan 14 01:24:32.978346 systemd-networkd[1555]: calid750b2b8a09: Link UP Jan 14 01:24:32.981366 systemd-networkd[1555]: calid750b2b8a09: Gained carrier Jan 14 01:24:33.018171 containerd[1645]: 2026-01-14 01:24:32.866 [INFO][4695] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--aufav.gb1.brightbox.com-k8s-csi--node--driver--vqp7q-eth0 csi-node-driver- calico-system 1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2 773 0 2026-01-14 01:23:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-aufav.gb1.brightbox.com csi-node-driver-vqp7q eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid750b2b8a09 [] [] }} ContainerID="e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" Namespace="calico-system" Pod="csi-node-driver-vqp7q" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-csi--node--driver--vqp7q-" Jan 14 01:24:33.018171 containerd[1645]: 2026-01-14 01:24:32.867 [INFO][4695] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" Namespace="calico-system" Pod="csi-node-driver-vqp7q" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-csi--node--driver--vqp7q-eth0" Jan 14 01:24:33.018171 containerd[1645]: 2026-01-14 01:24:32.918 [INFO][4707] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" HandleID="k8s-pod-network.e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" Workload="srv--aufav.gb1.brightbox.com-k8s-csi--node--driver--vqp7q-eth0" Jan 14 01:24:33.018171 containerd[1645]: 2026-01-14 01:24:32.918 [INFO][4707] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" HandleID="k8s-pod-network.e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" Workload="srv--aufav.gb1.brightbox.com-k8s-csi--node--driver--vqp7q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000100130), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-aufav.gb1.brightbox.com", "pod":"csi-node-driver-vqp7q", "timestamp":"2026-01-14 01:24:32.918115838 +0000 UTC"}, 
Hostname:"srv-aufav.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:24:33.018171 containerd[1645]: 2026-01-14 01:24:32.918 [INFO][4707] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:24:33.018171 containerd[1645]: 2026-01-14 01:24:32.918 [INFO][4707] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:24:33.018171 containerd[1645]: 2026-01-14 01:24:32.918 [INFO][4707] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-aufav.gb1.brightbox.com' Jan 14 01:24:33.018171 containerd[1645]: 2026-01-14 01:24:32.930 [INFO][4707] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:33.018171 containerd[1645]: 2026-01-14 01:24:32.937 [INFO][4707] ipam/ipam.go 394: Looking up existing affinities for host host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:33.018171 containerd[1645]: 2026-01-14 01:24:32.943 [INFO][4707] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:33.018171 containerd[1645]: 2026-01-14 01:24:32.946 [INFO][4707] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:33.018171 containerd[1645]: 2026-01-14 01:24:32.949 [INFO][4707] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:33.018171 containerd[1645]: 2026-01-14 01:24:32.949 [INFO][4707] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:33.018171 containerd[1645]: 
2026-01-14 01:24:32.951 [INFO][4707] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e Jan 14 01:24:33.018171 containerd[1645]: 2026-01-14 01:24:32.957 [INFO][4707] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:33.018171 containerd[1645]: 2026-01-14 01:24:32.965 [INFO][4707] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.67/26] block=192.168.126.64/26 handle="k8s-pod-network.e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:33.018171 containerd[1645]: 2026-01-14 01:24:32.965 [INFO][4707] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.67/26] handle="k8s-pod-network.e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:33.018171 containerd[1645]: 2026-01-14 01:24:32.965 [INFO][4707] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:24:33.018171 containerd[1645]: 2026-01-14 01:24:32.966 [INFO][4707] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.67/26] IPv6=[] ContainerID="e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" HandleID="k8s-pod-network.e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" Workload="srv--aufav.gb1.brightbox.com-k8s-csi--node--driver--vqp7q-eth0" Jan 14 01:24:33.021156 containerd[1645]: 2026-01-14 01:24:32.970 [INFO][4695] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" Namespace="calico-system" Pod="csi-node-driver-vqp7q" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-csi--node--driver--vqp7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aufav.gb1.brightbox.com-k8s-csi--node--driver--vqp7q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 23, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aufav.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-vqp7q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.67/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid750b2b8a09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:24:33.021156 containerd[1645]: 2026-01-14 01:24:32.971 [INFO][4695] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.67/32] ContainerID="e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" Namespace="calico-system" Pod="csi-node-driver-vqp7q" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-csi--node--driver--vqp7q-eth0" Jan 14 01:24:33.021156 containerd[1645]: 2026-01-14 01:24:32.971 [INFO][4695] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid750b2b8a09 ContainerID="e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" Namespace="calico-system" Pod="csi-node-driver-vqp7q" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-csi--node--driver--vqp7q-eth0" Jan 14 01:24:33.021156 containerd[1645]: 2026-01-14 01:24:32.983 [INFO][4695] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" Namespace="calico-system" Pod="csi-node-driver-vqp7q" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-csi--node--driver--vqp7q-eth0" Jan 14 01:24:33.021156 containerd[1645]: 2026-01-14 01:24:32.984 [INFO][4695] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" Namespace="calico-system" Pod="csi-node-driver-vqp7q" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-csi--node--driver--vqp7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aufav.gb1.brightbox.com-k8s-csi--node--driver--vqp7q-eth0", 
GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 23, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aufav.gb1.brightbox.com", ContainerID:"e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e", Pod:"csi-node-driver-vqp7q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.126.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid750b2b8a09", MAC:"d2:c6:71:2b:98:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:24:33.021156 containerd[1645]: 2026-01-14 01:24:33.014 [INFO][4695] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" Namespace="calico-system" Pod="csi-node-driver-vqp7q" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-csi--node--driver--vqp7q-eth0" Jan 14 01:24:33.065000 audit[4722]: NETFILTER_CFG table=filter:128 family=2 entries=40 op=nft_register_chain pid=4722 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:24:33.065000 audit[4722]: SYSCALL arch=c000003e syscall=46 
success=yes exit=20764 a0=3 a1=7ffc73f0e110 a2=0 a3=7ffc73f0e0fc items=0 ppid=4283 pid=4722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:33.065000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:24:33.069450 containerd[1645]: time="2026-01-14T01:24:33.069377914Z" level=info msg="connecting to shim e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e" address="unix:///run/containerd/s/9bcf75ef18d945b4c5e67ab7ce267bef8422764997bee6ec0bc76d52d8c8f0d1" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:24:33.119185 systemd[1]: Started cri-containerd-e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e.scope - libcontainer container e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e. 
Jan 14 01:24:33.140000 audit: BPF prog-id=225 op=LOAD Jan 14 01:24:33.141000 audit: BPF prog-id=226 op=LOAD Jan 14 01:24:33.141000 audit[4743]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4732 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:33.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532643436363931336330363061353762623338623733636131396664 Jan 14 01:24:33.141000 audit: BPF prog-id=226 op=UNLOAD Jan 14 01:24:33.141000 audit[4743]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4732 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:33.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532643436363931336330363061353762623338623733636131396664 Jan 14 01:24:33.142000 audit: BPF prog-id=227 op=LOAD Jan 14 01:24:33.142000 audit[4743]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4732 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:33.142000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532643436363931336330363061353762623338623733636131396664 Jan 14 01:24:33.142000 audit: BPF prog-id=228 op=LOAD Jan 14 01:24:33.142000 audit[4743]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4732 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:33.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532643436363931336330363061353762623338623733636131396664 Jan 14 01:24:33.143000 audit: BPF prog-id=228 op=UNLOAD Jan 14 01:24:33.143000 audit[4743]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4732 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:33.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532643436363931336330363061353762623338623733636131396664 Jan 14 01:24:33.143000 audit: BPF prog-id=227 op=UNLOAD Jan 14 01:24:33.143000 audit[4743]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4732 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:24:33.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532643436363931336330363061353762623338623733636131396664 Jan 14 01:24:33.143000 audit: BPF prog-id=229 op=LOAD Jan 14 01:24:33.143000 audit[4743]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4732 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:33.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6532643436363931336330363061353762623338623733636131396664 Jan 14 01:24:33.175013 containerd[1645]: time="2026-01-14T01:24:33.174934917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vqp7q,Uid:1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2,Namespace:calico-system,Attempt:0,} returns sandbox id \"e2d466913c060a57bb38b73ca19fd0179b44af336c0afe47a176707ad770c06e\"" Jan 14 01:24:33.177327 containerd[1645]: time="2026-01-14T01:24:33.177289365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:24:33.239663 kubelet[2962]: I0114 01:24:33.236769 2962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-55jfn" podStartSLOduration=63.236720084 podStartE2EDuration="1m3.236720084s" podCreationTimestamp="2026-01-14 01:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:24:33.234291248 +0000 UTC m=+67.774149254" watchObservedRunningTime="2026-01-14 01:24:33.236720084 +0000 
UTC m=+67.776578076" Jan 14 01:24:33.275000 audit[4770]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4770 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:24:33.275000 audit[4770]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffcc5a00d70 a2=0 a3=7ffcc5a00d5c items=0 ppid=3108 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:33.275000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:24:33.281000 audit[4770]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4770 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:24:33.281000 audit[4770]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcc5a00d70 a2=0 a3=0 items=0 ppid=3108 pid=4770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:33.281000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:24:33.310000 audit[4772]: NETFILTER_CFG table=filter:131 family=2 entries=17 op=nft_register_rule pid=4772 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:24:33.310000 audit[4772]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffeb1ebeb00 a2=0 a3=7ffeb1ebeaec items=0 ppid=3108 pid=4772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:33.310000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:24:33.314000 audit[4772]: NETFILTER_CFG table=nat:132 family=2 entries=35 op=nft_register_chain pid=4772 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:24:33.314000 audit[4772]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffeb1ebeb00 a2=0 a3=7ffeb1ebeaec items=0 ppid=3108 pid=4772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:33.314000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:24:33.475314 update_engine[1621]: I20260114 01:24:33.475178 1621 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 14 01:24:33.475314 update_engine[1621]: I20260114 01:24:33.475306 1621 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 14 01:24:33.477331 update_engine[1621]: I20260114 01:24:33.477257 1621 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 14 01:24:33.478289 update_engine[1621]: I20260114 01:24:33.478247 1621 omaha_request_params.cc:62] Current group set to alpha Jan 14 01:24:33.478559 update_engine[1621]: I20260114 01:24:33.478516 1621 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 14 01:24:33.478559 update_engine[1621]: I20260114 01:24:33.478546 1621 update_attempter.cc:643] Scheduling an action processor start. 
Jan 14 01:24:33.478654 update_engine[1621]: I20260114 01:24:33.478585 1621 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 01:24:33.478698 update_engine[1621]: I20260114 01:24:33.478677 1621 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 14 01:24:33.479066 update_engine[1621]: I20260114 01:24:33.478816 1621 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 01:24:33.479066 update_engine[1621]: I20260114 01:24:33.478843 1621 omaha_request_action.cc:272] Request: Jan 14 01:24:33.479066 update_engine[1621]: Jan 14 01:24:33.479066 update_engine[1621]: Jan 14 01:24:33.479066 update_engine[1621]: Jan 14 01:24:33.479066 update_engine[1621]: Jan 14 01:24:33.479066 update_engine[1621]: Jan 14 01:24:33.479066 update_engine[1621]: Jan 14 01:24:33.479066 update_engine[1621]: Jan 14 01:24:33.479066 update_engine[1621]: Jan 14 01:24:33.479066 update_engine[1621]: I20260114 01:24:33.478878 1621 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 01:24:33.486279 containerd[1645]: time="2026-01-14T01:24:33.486151474Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:24:33.488232 containerd[1645]: time="2026-01-14T01:24:33.488180459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:24:33.488449 containerd[1645]: time="2026-01-14T01:24:33.488353505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:24:33.489258 kubelet[2962]: E0114 01:24:33.488951 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:24:33.489258 kubelet[2962]: E0114 01:24:33.489034 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:24:33.496501 kubelet[2962]: E0114 01:24:33.496433 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8nhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,Proc
Mount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vqp7q_calico-system(1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:24:33.500433 containerd[1645]: time="2026-01-14T01:24:33.500364050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:24:33.514302 locksmithd[1666]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 14 01:24:33.519908 update_engine[1621]: I20260114 01:24:33.519356 1621 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 01:24:33.520930 update_engine[1621]: I20260114 01:24:33.520830 1621 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 01:24:33.550509 update_engine[1621]: E20260114 01:24:33.550376 1621 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (DNS server returned answer with no data) Jan 14 01:24:33.550865 update_engine[1621]: I20260114 01:24:33.550534 1621 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 14 01:24:33.780013 systemd-networkd[1555]: calib45600ad409: Gained IPv6LL Jan 14 01:24:33.795132 containerd[1645]: time="2026-01-14T01:24:33.795080026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7pn2c,Uid:56181817-3b69-45cb-ad6c-2ef729a912ab,Namespace:calico-system,Attempt:0,}" Jan 14 01:24:33.796037 containerd[1645]: time="2026-01-14T01:24:33.795473436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b8b5c58c-hcr5n,Uid:611c348f-b209-4156-bf37-8d53c837267b,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:24:33.796037 containerd[1645]: time="2026-01-14T01:24:33.795613888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p8d54,Uid:26746a71-3f83-4a0a-9dda-605fbba4fc61,Namespace:kube-system,Attempt:0,}" Jan 14 01:24:33.823068 containerd[1645]: time="2026-01-14T01:24:33.821763730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:24:33.828222 containerd[1645]: time="2026-01-14T01:24:33.828172526Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:24:33.828569 containerd[1645]: time="2026-01-14T01:24:33.828298474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:24:33.831691 kubelet[2962]: E0114 01:24:33.830000 2962 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:24:33.831691 kubelet[2962]: E0114 01:24:33.830071 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:24:33.831691 kubelet[2962]: E0114 01:24:33.830637 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8nhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},Liveness
Probe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vqp7q_calico-system(1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:24:33.833823 kubelet[2962]: E0114 01:24:33.833258 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" Jan 14 01:24:34.077059 systemd-networkd[1555]: cali39d959a5ebe: Link UP Jan 14 01:24:34.078326 systemd-networkd[1555]: cali39d959a5ebe: Gained carrier Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:33.942 
[INFO][4777] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--aufav.gb1.brightbox.com-k8s-goldmane--666569f655--7pn2c-eth0 goldmane-666569f655- calico-system 56181817-3b69-45cb-ad6c-2ef729a912ab 854 0 2026-01-14 01:23:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-aufav.gb1.brightbox.com goldmane-666569f655-7pn2c eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali39d959a5ebe [] [] }} ContainerID="dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" Namespace="calico-system" Pod="goldmane-666569f655-7pn2c" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-goldmane--666569f655--7pn2c-" Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:33.942 [INFO][4777] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" Namespace="calico-system" Pod="goldmane-666569f655-7pn2c" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-goldmane--666569f655--7pn2c-eth0" Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:34.012 [INFO][4825] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" HandleID="k8s-pod-network.dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" Workload="srv--aufav.gb1.brightbox.com-k8s-goldmane--666569f655--7pn2c-eth0" Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:34.013 [INFO][4825] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" HandleID="k8s-pod-network.dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" 
Workload="srv--aufav.gb1.brightbox.com-k8s-goldmane--666569f655--7pn2c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003324a0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-aufav.gb1.brightbox.com", "pod":"goldmane-666569f655-7pn2c", "timestamp":"2026-01-14 01:24:34.012863147 +0000 UTC"}, Hostname:"srv-aufav.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:34.014 [INFO][4825] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:34.014 [INFO][4825] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:34.014 [INFO][4825] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-aufav.gb1.brightbox.com' Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:34.029 [INFO][4825] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:34.036 [INFO][4825] ipam/ipam.go 394: Looking up existing affinities for host host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:34.045 [INFO][4825] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:34.048 [INFO][4825] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:34.051 [INFO][4825] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 
host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:34.051 [INFO][4825] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:34.053 [INFO][4825] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:34.058 [INFO][4825] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:34.068 [INFO][4825] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.68/26] block=192.168.126.64/26 handle="k8s-pod-network.dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:34.069 [INFO][4825] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.68/26] handle="k8s-pod-network.dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:34.069 [INFO][4825] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:24:34.111176 containerd[1645]: 2026-01-14 01:24:34.069 [INFO][4825] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.68/26] IPv6=[] ContainerID="dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" HandleID="k8s-pod-network.dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" Workload="srv--aufav.gb1.brightbox.com-k8s-goldmane--666569f655--7pn2c-eth0" Jan 14 01:24:34.112932 containerd[1645]: 2026-01-14 01:24:34.073 [INFO][4777] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" Namespace="calico-system" Pod="goldmane-666569f655-7pn2c" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-goldmane--666569f655--7pn2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aufav.gb1.brightbox.com-k8s-goldmane--666569f655--7pn2c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"56181817-3b69-45cb-ad6c-2ef729a912ab", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 23, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aufav.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-7pn2c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali39d959a5ebe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:24:34.112932 containerd[1645]: 2026-01-14 01:24:34.073 [INFO][4777] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.68/32] ContainerID="dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" Namespace="calico-system" Pod="goldmane-666569f655-7pn2c" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-goldmane--666569f655--7pn2c-eth0" Jan 14 01:24:34.112932 containerd[1645]: 2026-01-14 01:24:34.073 [INFO][4777] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39d959a5ebe ContainerID="dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" Namespace="calico-system" Pod="goldmane-666569f655-7pn2c" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-goldmane--666569f655--7pn2c-eth0" Jan 14 01:24:34.112932 containerd[1645]: 2026-01-14 01:24:34.079 [INFO][4777] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" Namespace="calico-system" Pod="goldmane-666569f655-7pn2c" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-goldmane--666569f655--7pn2c-eth0" Jan 14 01:24:34.112932 containerd[1645]: 2026-01-14 01:24:34.080 [INFO][4777] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" Namespace="calico-system" Pod="goldmane-666569f655-7pn2c" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-goldmane--666569f655--7pn2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aufav.gb1.brightbox.com-k8s-goldmane--666569f655--7pn2c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", 
SelfLink:"", UID:"56181817-3b69-45cb-ad6c-2ef729a912ab", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 23, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aufav.gb1.brightbox.com", ContainerID:"dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc", Pod:"goldmane-666569f655-7pn2c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.126.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali39d959a5ebe", MAC:"ea:61:34:37:1b:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:24:34.112932 containerd[1645]: 2026-01-14 01:24:34.098 [INFO][4777] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" Namespace="calico-system" Pod="goldmane-666569f655-7pn2c" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-goldmane--666569f655--7pn2c-eth0" Jan 14 01:24:34.188000 audit[4855]: NETFILTER_CFG table=filter:133 family=2 entries=52 op=nft_register_chain pid=4855 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:24:34.188000 audit[4855]: SYSCALL arch=c000003e syscall=46 success=yes exit=27556 a0=3 a1=7ffd72682f90 a2=0 a3=7ffd72682f7c items=0 ppid=4283 pid=4855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.188000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:24:34.193400 containerd[1645]: time="2026-01-14T01:24:34.193157007Z" level=info msg="connecting to shim dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc" address="unix:///run/containerd/s/c5e78d86e98be251a00de839c0cc572312859ee9b67a2b4d71ddd2ab187b3957" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:24:34.237952 kubelet[2962]: E0114 01:24:34.237555 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" Jan 14 01:24:34.253366 systemd-networkd[1555]: cali4d93f571afa: Link UP Jan 14 01:24:34.255545 systemd-networkd[1555]: cali4d93f571afa: Gained carrier Jan 14 01:24:34.279241 systemd[1]: Started cri-containerd-dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc.scope - libcontainer container dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc. 
Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:33.934 [INFO][4794] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--p8d54-eth0 coredns-668d6bf9bc- kube-system 26746a71-3f83-4a0a-9dda-605fbba4fc61 845 0 2026-01-14 01:23:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-aufav.gb1.brightbox.com coredns-668d6bf9bc-p8d54 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4d93f571afa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8d54" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--p8d54-" Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:33.934 [INFO][4794] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8d54" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--p8d54-eth0" Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:34.022 [INFO][4823] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" HandleID="k8s-pod-network.2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" Workload="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--p8d54-eth0" Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:34.023 [INFO][4823] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" HandleID="k8s-pod-network.2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" 
Workload="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--p8d54-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a0c30), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-aufav.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-p8d54", "timestamp":"2026-01-14 01:24:34.022802474 +0000 UTC"}, Hostname:"srv-aufav.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:34.024 [INFO][4823] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:34.069 [INFO][4823] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:34.069 [INFO][4823] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-aufav.gb1.brightbox.com' Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:34.131 [INFO][4823] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:34.151 [INFO][4823] ipam/ipam.go 394: Looking up existing affinities for host host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:34.165 [INFO][4823] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:34.170 [INFO][4823] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:34.179 [INFO][4823] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" 
Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:34.181 [INFO][4823] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:34.187 [INFO][4823] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2 Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:34.202 [INFO][4823] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:34.230 [INFO][4823] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.69/26] block=192.168.126.64/26 handle="k8s-pod-network.2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:34.231 [INFO][4823] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.69/26] handle="k8s-pod-network.2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:34.232 [INFO][4823] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:24:34.319995 containerd[1645]: 2026-01-14 01:24:34.233 [INFO][4823] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.69/26] IPv6=[] ContainerID="2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" HandleID="k8s-pod-network.2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" Workload="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--p8d54-eth0" Jan 14 01:24:34.322665 containerd[1645]: 2026-01-14 01:24:34.248 [INFO][4794] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8d54" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--p8d54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--p8d54-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"26746a71-3f83-4a0a-9dda-605fbba4fc61", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aufav.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-p8d54", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali4d93f571afa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:24:34.322665 containerd[1645]: 2026-01-14 01:24:34.248 [INFO][4794] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.69/32] ContainerID="2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8d54" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--p8d54-eth0" Jan 14 01:24:34.322665 containerd[1645]: 2026-01-14 01:24:34.248 [INFO][4794] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d93f571afa ContainerID="2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8d54" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--p8d54-eth0" Jan 14 01:24:34.322665 containerd[1645]: 2026-01-14 01:24:34.256 [INFO][4794] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8d54" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--p8d54-eth0" Jan 14 01:24:34.322665 containerd[1645]: 2026-01-14 01:24:34.257 [INFO][4794] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8d54" 
WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--p8d54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--p8d54-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"26746a71-3f83-4a0a-9dda-605fbba4fc61", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 23, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aufav.gb1.brightbox.com", ContainerID:"2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2", Pod:"coredns-668d6bf9bc-p8d54", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4d93f571afa", MAC:"3a:41:6a:b2:8b:67", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:24:34.322665 
containerd[1645]: 2026-01-14 01:24:34.308 [INFO][4794] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-p8d54" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-coredns--668d6bf9bc--p8d54-eth0" Jan 14 01:24:34.373000 audit: BPF prog-id=230 op=LOAD Jan 14 01:24:34.377000 audit: BPF prog-id=231 op=LOAD Jan 14 01:24:34.377000 audit[4873]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4861 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462663062356538643139636366643331356235663064663139383562 Jan 14 01:24:34.377000 audit: BPF prog-id=231 op=UNLOAD Jan 14 01:24:34.377000 audit[4873]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4861 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462663062356538643139636366643331356235663064663139383562 Jan 14 01:24:34.377000 audit: BPF prog-id=232 op=LOAD Jan 14 01:24:34.377000 audit[4873]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4861 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462663062356538643139636366643331356235663064663139383562 Jan 14 01:24:34.377000 audit: BPF prog-id=233 op=LOAD Jan 14 01:24:34.377000 audit[4873]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4861 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462663062356538643139636366643331356235663064663139383562 Jan 14 01:24:34.377000 audit: BPF prog-id=233 op=UNLOAD Jan 14 01:24:34.377000 audit[4873]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4861 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462663062356538643139636366643331356235663064663139383562 Jan 14 01:24:34.377000 audit: BPF prog-id=232 op=UNLOAD Jan 14 01:24:34.377000 audit[4873]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4861 pid=4873 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462663062356538643139636366643331356235663064663139383562 Jan 14 01:24:34.377000 audit: BPF prog-id=234 op=LOAD Jan 14 01:24:34.377000 audit[4873]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4861 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.377000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462663062356538643139636366643331356235663064663139383562 Jan 14 01:24:34.397912 containerd[1645]: time="2026-01-14T01:24:34.397723934Z" level=info msg="connecting to shim 2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2" address="unix:///run/containerd/s/8519090f6a0cb931300e37ce10f33646f52db2579ecddb1b9d434cf544c418a4" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:24:34.408220 systemd-networkd[1555]: cali5261baf7d24: Link UP Jan 14 01:24:34.409880 systemd-networkd[1555]: cali5261baf7d24: Gained carrier Jan 14 01:24:34.436000 audit[4917]: NETFILTER_CFG table=filter:134 family=2 entries=44 op=nft_register_chain pid=4917 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:24:34.436000 audit[4917]: SYSCALL arch=c000003e syscall=46 success=yes exit=21532 a0=3 a1=7ffc4347eb20 a2=0 a3=7ffc4347eb0c items=0 ppid=4283 pid=4917 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.436000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:33.916 [INFO][4787] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--hcr5n-eth0 calico-apiserver-84b8b5c58c- calico-apiserver 611c348f-b209-4156-bf37-8d53c837267b 853 0 2026-01-14 01:23:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84b8b5c58c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-aufav.gb1.brightbox.com calico-apiserver-84b8b5c58c-hcr5n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5261baf7d24 [] [] }} ContainerID="f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" Namespace="calico-apiserver" Pod="calico-apiserver-84b8b5c58c-hcr5n" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--hcr5n-" Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:33.917 [INFO][4787] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" Namespace="calico-apiserver" Pod="calico-apiserver-84b8b5c58c-hcr5n" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--hcr5n-eth0" Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:34.036 [INFO][4818] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" HandleID="k8s-pod-network.f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" Workload="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--hcr5n-eth0" Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:34.036 [INFO][4818] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" HandleID="k8s-pod-network.f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" Workload="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--hcr5n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000251660), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-aufav.gb1.brightbox.com", "pod":"calico-apiserver-84b8b5c58c-hcr5n", "timestamp":"2026-01-14 01:24:34.036459153 +0000 UTC"}, Hostname:"srv-aufav.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:34.037 [INFO][4818] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:34.233 [INFO][4818] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:34.237 [INFO][4818] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-aufav.gb1.brightbox.com' Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:34.294 [INFO][4818] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:34.318 [INFO][4818] ipam/ipam.go 394: Looking up existing affinities for host host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:34.333 [INFO][4818] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:34.339 [INFO][4818] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:34.345 [INFO][4818] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:34.345 [INFO][4818] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:34.352 [INFO][4818] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:34.370 [INFO][4818] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:34.388 [INFO][4818] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.70/26] block=192.168.126.64/26 handle="k8s-pod-network.f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:34.388 [INFO][4818] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.70/26] handle="k8s-pod-network.f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:34.388 [INFO][4818] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:24:34.459362 containerd[1645]: 2026-01-14 01:24:34.388 [INFO][4818] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.70/26] IPv6=[] ContainerID="f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" HandleID="k8s-pod-network.f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" Workload="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--hcr5n-eth0" Jan 14 01:24:34.461289 containerd[1645]: 2026-01-14 01:24:34.401 [INFO][4787] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" Namespace="calico-apiserver" Pod="calico-apiserver-84b8b5c58c-hcr5n" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--hcr5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--hcr5n-eth0", GenerateName:"calico-apiserver-84b8b5c58c-", Namespace:"calico-apiserver", SelfLink:"", UID:"611c348f-b209-4156-bf37-8d53c837267b", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b8b5c58c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aufav.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-84b8b5c58c-hcr5n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5261baf7d24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:24:34.461289 containerd[1645]: 2026-01-14 01:24:34.401 [INFO][4787] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.70/32] ContainerID="f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" Namespace="calico-apiserver" Pod="calico-apiserver-84b8b5c58c-hcr5n" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--hcr5n-eth0" Jan 14 01:24:34.461289 containerd[1645]: 2026-01-14 01:24:34.401 [INFO][4787] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5261baf7d24 ContainerID="f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" Namespace="calico-apiserver" Pod="calico-apiserver-84b8b5c58c-hcr5n" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--hcr5n-eth0" Jan 14 01:24:34.461289 containerd[1645]: 2026-01-14 01:24:34.411 [INFO][4787] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" Namespace="calico-apiserver" 
Pod="calico-apiserver-84b8b5c58c-hcr5n" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--hcr5n-eth0" Jan 14 01:24:34.461289 containerd[1645]: 2026-01-14 01:24:34.414 [INFO][4787] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" Namespace="calico-apiserver" Pod="calico-apiserver-84b8b5c58c-hcr5n" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--hcr5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--hcr5n-eth0", GenerateName:"calico-apiserver-84b8b5c58c-", Namespace:"calico-apiserver", SelfLink:"", UID:"611c348f-b209-4156-bf37-8d53c837267b", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b8b5c58c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aufav.gb1.brightbox.com", ContainerID:"f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b", Pod:"calico-apiserver-84b8b5c58c-hcr5n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali5261baf7d24", MAC:"3e:4c:21:5d:d7:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:24:34.461289 containerd[1645]: 2026-01-14 01:24:34.452 [INFO][4787] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" Namespace="calico-apiserver" Pod="calico-apiserver-84b8b5c58c-hcr5n" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--hcr5n-eth0" Jan 14 01:24:34.510968 systemd[1]: Started cri-containerd-2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2.scope - libcontainer container 2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2. Jan 14 01:24:34.540671 containerd[1645]: time="2026-01-14T01:24:34.540524911Z" level=info msg="connecting to shim f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b" address="unix:///run/containerd/s/101374e52f94521fe7ee3f774ef7372a00d0938e05e72a42b2f8d487e3667071" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:24:34.560000 audit: BPF prog-id=235 op=LOAD Jan 14 01:24:34.564000 audit: BPF prog-id=236 op=LOAD Jan 14 01:24:34.564000 audit[4922]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4910 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262373636373439353538653034316264623631326432616132336436 Jan 14 01:24:34.565000 audit: BPF prog-id=236 op=UNLOAD Jan 14 01:24:34.565000 audit[4922]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 
items=0 ppid=4910 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.565000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262373636373439353538653034316264623631326432616132336436 Jan 14 01:24:34.566000 audit: BPF prog-id=237 op=LOAD Jan 14 01:24:34.566000 audit[4922]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4910 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.566000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262373636373439353538653034316264623631326432616132336436 Jan 14 01:24:34.567000 audit: BPF prog-id=238 op=LOAD Jan 14 01:24:34.567000 audit[4922]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4910 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262373636373439353538653034316264623631326432616132336436 Jan 14 01:24:34.567000 audit: BPF prog-id=238 op=UNLOAD Jan 14 01:24:34.567000 audit[4922]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4910 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262373636373439353538653034316264623631326432616132336436 Jan 14 01:24:34.570000 audit: BPF prog-id=237 op=UNLOAD Jan 14 01:24:34.570000 audit[4922]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4910 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262373636373439353538653034316264623631326432616132336436 Jan 14 01:24:34.570000 audit: BPF prog-id=239 op=LOAD Jan 14 01:24:34.570000 audit[4922]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4910 pid=4922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262373636373439353538653034316264623631326432616132336436 Jan 14 01:24:34.574883 containerd[1645]: time="2026-01-14T01:24:34.574754737Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7pn2c,Uid:56181817-3b69-45cb-ad6c-2ef729a912ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"dbf0b5e8d19ccfd315b5f0df1985b0bd7391db936811dc484c7732d4257a63bc\"" Jan 14 01:24:34.580412 containerd[1645]: time="2026-01-14T01:24:34.580381075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:24:34.590000 audit[4972]: NETFILTER_CFG table=filter:135 family=2 entries=66 op=nft_register_chain pid=4972 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:24:34.590000 audit[4972]: SYSCALL arch=c000003e syscall=46 success=yes exit=32960 a0=3 a1=7ffcdeb4e530 a2=0 a3=7ffcdeb4e51c items=0 ppid=4283 pid=4972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.590000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:24:34.619089 systemd[1]: Started cri-containerd-f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b.scope - libcontainer container f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b. 
Jan 14 01:24:34.651259 containerd[1645]: time="2026-01-14T01:24:34.651203531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p8d54,Uid:26746a71-3f83-4a0a-9dda-605fbba4fc61,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2\"" Jan 14 01:24:34.654000 audit: BPF prog-id=240 op=LOAD Jan 14 01:24:34.656000 audit: BPF prog-id=241 op=LOAD Jan 14 01:24:34.656000 audit[4975]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4962 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636396435383335646366623437636536623331363732393162343831 Jan 14 01:24:34.656000 audit: BPF prog-id=241 op=UNLOAD Jan 14 01:24:34.656000 audit[4975]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4962 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636396435383335646366623437636536623331363732393162343831 Jan 14 01:24:34.656000 audit: BPF prog-id=242 op=LOAD Jan 14 01:24:34.656000 audit[4975]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4962 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.656000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636396435383335646366623437636536623331363732393162343831 Jan 14 01:24:34.657000 audit: BPF prog-id=243 op=LOAD Jan 14 01:24:34.657000 audit[4975]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4962 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636396435383335646366623437636536623331363732393162343831 Jan 14 01:24:34.657000 audit: BPF prog-id=243 op=UNLOAD Jan 14 01:24:34.657000 audit[4975]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4962 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636396435383335646366623437636536623331363732393162343831 Jan 14 01:24:34.657000 audit: BPF prog-id=242 op=UNLOAD Jan 14 01:24:34.657000 audit[4975]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4962 pid=4975 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636396435383335646366623437636536623331363732393162343831 Jan 14 01:24:34.657000 audit: BPF prog-id=244 op=LOAD Jan 14 01:24:34.657000 audit[4975]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4962 pid=4975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6636396435383335646366623437636536623331363732393162343831 Jan 14 01:24:34.663399 containerd[1645]: time="2026-01-14T01:24:34.663277067Z" level=info msg="CreateContainer within sandbox \"2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:24:34.677344 systemd-networkd[1555]: calid750b2b8a09: Gained IPv6LL Jan 14 01:24:34.682846 containerd[1645]: time="2026-01-14T01:24:34.682780157Z" level=info msg="Container 23dbc332a95820aa3df0f189c1d61156cd1fa7c766156396edd0cf3b0817565d: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:24:34.689485 containerd[1645]: time="2026-01-14T01:24:34.689447898Z" level=info msg="CreateContainer within sandbox \"2b766749558e041bdb612d2aa23d6f88e670e1d82f1dff4ee0ae40c1069e24c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"23dbc332a95820aa3df0f189c1d61156cd1fa7c766156396edd0cf3b0817565d\"" Jan 14 01:24:34.690896 containerd[1645]: time="2026-01-14T01:24:34.690862958Z" level=info msg="StartContainer for \"23dbc332a95820aa3df0f189c1d61156cd1fa7c766156396edd0cf3b0817565d\"" Jan 14 01:24:34.694189 containerd[1645]: time="2026-01-14T01:24:34.694020813Z" level=info msg="connecting to shim 23dbc332a95820aa3df0f189c1d61156cd1fa7c766156396edd0cf3b0817565d" address="unix:///run/containerd/s/8519090f6a0cb931300e37ce10f33646f52db2579ecddb1b9d434cf544c418a4" protocol=ttrpc version=3 Jan 14 01:24:34.730066 systemd[1]: Started cri-containerd-23dbc332a95820aa3df0f189c1d61156cd1fa7c766156396edd0cf3b0817565d.scope - libcontainer container 23dbc332a95820aa3df0f189c1d61156cd1fa7c766156396edd0cf3b0817565d. Jan 14 01:24:34.740725 containerd[1645]: time="2026-01-14T01:24:34.740637668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b8b5c58c-hcr5n,Uid:611c348f-b209-4156-bf37-8d53c837267b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f69d5835dcfb47ce6b3167291b481f1b6cdc67aad4773dd9e9d1f4dc7cc8658b\"" Jan 14 01:24:34.755000 audit: BPF prog-id=245 op=LOAD Jan 14 01:24:34.756000 audit: BPF prog-id=246 op=LOAD Jan 14 01:24:34.756000 audit[4999]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4910 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646263333332613935383230616133646630663138396331643631 Jan 14 01:24:34.756000 audit: BPF prog-id=246 op=UNLOAD Jan 14 01:24:34.756000 audit[4999]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4910 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646263333332613935383230616133646630663138396331643631 Jan 14 01:24:34.756000 audit: BPF prog-id=247 op=LOAD Jan 14 01:24:34.756000 audit[4999]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4910 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646263333332613935383230616133646630663138396331643631 Jan 14 01:24:34.756000 audit: BPF prog-id=248 op=LOAD Jan 14 01:24:34.756000 audit[4999]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4910 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646263333332613935383230616133646630663138396331643631 Jan 14 01:24:34.757000 audit: BPF prog-id=248 op=UNLOAD Jan 14 01:24:34.757000 audit[4999]: 
SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4910 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646263333332613935383230616133646630663138396331643631 Jan 14 01:24:34.757000 audit: BPF prog-id=247 op=UNLOAD Jan 14 01:24:34.757000 audit[4999]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4910 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646263333332613935383230616133646630663138396331643631 Jan 14 01:24:34.757000 audit: BPF prog-id=249 op=LOAD Jan 14 01:24:34.757000 audit[4999]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4910 pid=4999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:34.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233646263333332613935383230616133646630663138396331643631 Jan 14 01:24:34.786450 containerd[1645]: 
time="2026-01-14T01:24:34.786068965Z" level=info msg="StartContainer for \"23dbc332a95820aa3df0f189c1d61156cd1fa7c766156396edd0cf3b0817565d\" returns successfully" Jan 14 01:24:34.794452 containerd[1645]: time="2026-01-14T01:24:34.794388781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b8b5c58c-zdm6z,Uid:4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:24:34.918645 containerd[1645]: time="2026-01-14T01:24:34.918490513Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:24:34.920754 containerd[1645]: time="2026-01-14T01:24:34.920651851Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:24:34.921613 containerd[1645]: time="2026-01-14T01:24:34.920809234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:24:34.921765 kubelet[2962]: E0114 01:24:34.921232 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:24:34.921765 kubelet[2962]: E0114 01:24:34.921305 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:24:34.921765 kubelet[2962]: E0114 01:24:34.921671 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rk45d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7pn2c_calico-system(56181817-3b69-45cb-ad6c-2ef729a912ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:24:34.923256 kubelet[2962]: E0114 01:24:34.922942 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pn2c" podUID="56181817-3b69-45cb-ad6c-2ef729a912ab" Jan 14 01:24:34.923434 containerd[1645]: time="2026-01-14T01:24:34.922668694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:24:34.992412 systemd-networkd[1555]: cali679c5f5f305: Link UP Jan 14 01:24:34.994307 systemd-networkd[1555]: cali679c5f5f305: Gained carrier Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.863 
[INFO][5033] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--zdm6z-eth0 calico-apiserver-84b8b5c58c- calico-apiserver 4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1 855 0 2026-01-14 01:23:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84b8b5c58c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-aufav.gb1.brightbox.com calico-apiserver-84b8b5c58c-zdm6z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali679c5f5f305 [] [] }} ContainerID="4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" Namespace="calico-apiserver" Pod="calico-apiserver-84b8b5c58c-zdm6z" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--zdm6z-" Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.864 [INFO][5033] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" Namespace="calico-apiserver" Pod="calico-apiserver-84b8b5c58c-zdm6z" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--zdm6z-eth0" Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.920 [INFO][5049] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" HandleID="k8s-pod-network.4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" Workload="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--zdm6z-eth0" Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.920 [INFO][5049] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" 
HandleID="k8s-pod-network.4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" Workload="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--zdm6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf8b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-aufav.gb1.brightbox.com", "pod":"calico-apiserver-84b8b5c58c-zdm6z", "timestamp":"2026-01-14 01:24:34.920116819 +0000 UTC"}, Hostname:"srv-aufav.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.920 [INFO][5049] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.920 [INFO][5049] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.920 [INFO][5049] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-aufav.gb1.brightbox.com' Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.935 [INFO][5049] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.943 [INFO][5049] ipam/ipam.go 394: Looking up existing affinities for host host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.949 [INFO][5049] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.954 [INFO][5049] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.959 [INFO][5049] 
ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.959 [INFO][5049] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.962 [INFO][5049] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903 Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.968 [INFO][5049] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.979 [INFO][5049] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.71/26] block=192.168.126.64/26 handle="k8s-pod-network.4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.979 [INFO][5049] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.71/26] handle="k8s-pod-network.4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.979 [INFO][5049] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:24:35.024268 containerd[1645]: 2026-01-14 01:24:34.979 [INFO][5049] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.71/26] IPv6=[] ContainerID="4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" HandleID="k8s-pod-network.4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" Workload="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--zdm6z-eth0" Jan 14 01:24:35.026702 containerd[1645]: 2026-01-14 01:24:34.983 [INFO][5033] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" Namespace="calico-apiserver" Pod="calico-apiserver-84b8b5c58c-zdm6z" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--zdm6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--zdm6z-eth0", GenerateName:"calico-apiserver-84b8b5c58c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b8b5c58c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aufav.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-84b8b5c58c-zdm6z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.126.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali679c5f5f305", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:24:35.026702 containerd[1645]: 2026-01-14 01:24:34.983 [INFO][5033] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.71/32] ContainerID="4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" Namespace="calico-apiserver" Pod="calico-apiserver-84b8b5c58c-zdm6z" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--zdm6z-eth0" Jan 14 01:24:35.026702 containerd[1645]: 2026-01-14 01:24:34.984 [INFO][5033] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali679c5f5f305 ContainerID="4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" Namespace="calico-apiserver" Pod="calico-apiserver-84b8b5c58c-zdm6z" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--zdm6z-eth0" Jan 14 01:24:35.026702 containerd[1645]: 2026-01-14 01:24:34.994 [INFO][5033] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" Namespace="calico-apiserver" Pod="calico-apiserver-84b8b5c58c-zdm6z" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--zdm6z-eth0" Jan 14 01:24:35.026702 containerd[1645]: 2026-01-14 01:24:34.995 [INFO][5033] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" Namespace="calico-apiserver" Pod="calico-apiserver-84b8b5c58c-zdm6z" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--zdm6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--zdm6z-eth0", GenerateName:"calico-apiserver-84b8b5c58c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 23, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84b8b5c58c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aufav.gb1.brightbox.com", ContainerID:"4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903", Pod:"calico-apiserver-84b8b5c58c-zdm6z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali679c5f5f305", MAC:"a2:a7:a5:02:98:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:24:35.026702 containerd[1645]: 2026-01-14 01:24:35.013 [INFO][5033] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" Namespace="calico-apiserver" Pod="calico-apiserver-84b8b5c58c-zdm6z" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--apiserver--84b8b5c58c--zdm6z-eth0" Jan 14 01:24:35.074000 audit[5063]: NETFILTER_CFG table=filter:136 family=2 
entries=63 op=nft_register_chain pid=5063 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:24:35.074000 audit[5063]: SYSCALL arch=c000003e syscall=46 success=yes exit=30680 a0=3 a1=7ffd0a3ac030 a2=0 a3=7ffd0a3ac01c items=0 ppid=4283 pid=5063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:35.074000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:24:35.080535 containerd[1645]: time="2026-01-14T01:24:35.080473471Z" level=info msg="connecting to shim 4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903" address="unix:///run/containerd/s/e2d47546735edee30a122ce18d7e40c5e27f78a3e591730a3ff5c3245fb864b9" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:24:35.126212 systemd[1]: Started cri-containerd-4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903.scope - libcontainer container 4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903. 
Jan 14 01:24:35.143000 audit: BPF prog-id=250 op=LOAD Jan 14 01:24:35.143000 audit: BPF prog-id=251 op=LOAD Jan 14 01:24:35.143000 audit[5083]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=5072 pid=5083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:35.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462633332313439346264333439323936613264353938663432343363 Jan 14 01:24:35.144000 audit: BPF prog-id=251 op=UNLOAD Jan 14 01:24:35.144000 audit[5083]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5072 pid=5083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:35.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462633332313439346264333439323936613264353938663432343363 Jan 14 01:24:35.144000 audit: BPF prog-id=252 op=LOAD Jan 14 01:24:35.144000 audit[5083]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=5072 pid=5083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:35.144000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462633332313439346264333439323936613264353938663432343363 Jan 14 01:24:35.144000 audit: BPF prog-id=253 op=LOAD Jan 14 01:24:35.144000 audit[5083]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=5072 pid=5083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:35.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462633332313439346264333439323936613264353938663432343363 Jan 14 01:24:35.144000 audit: BPF prog-id=253 op=UNLOAD Jan 14 01:24:35.144000 audit[5083]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5072 pid=5083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:35.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462633332313439346264333439323936613264353938663432343363 Jan 14 01:24:35.144000 audit: BPF prog-id=252 op=UNLOAD Jan 14 01:24:35.144000 audit[5083]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5072 pid=5083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:24:35.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462633332313439346264333439323936613264353938663432343363 Jan 14 01:24:35.144000 audit: BPF prog-id=254 op=LOAD Jan 14 01:24:35.144000 audit[5083]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=5072 pid=5083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:35.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462633332313439346264333439323936613264353938663432343363 Jan 14 01:24:35.202187 containerd[1645]: time="2026-01-14T01:24:35.200583157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84b8b5c58c-zdm6z,Uid:4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"4bc321494bd349296a2d598f4243cf3837d2ed85d42a5728befd7c1ae09e0903\"" Jan 14 01:24:35.231203 containerd[1645]: time="2026-01-14T01:24:35.231046209Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:24:35.233027 containerd[1645]: time="2026-01-14T01:24:35.232856388Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:24:35.233027 containerd[1645]: time="2026-01-14T01:24:35.232941424Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active 
requests=0, bytes read=0" Jan 14 01:24:35.233498 kubelet[2962]: E0114 01:24:35.233079 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:24:35.233498 kubelet[2962]: E0114 01:24:35.233126 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:24:35.233498 kubelet[2962]: E0114 01:24:35.233344 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xh6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84b8b5c58c-hcr5n_calico-apiserver(611c348f-b209-4156-bf37-8d53c837267b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:24:35.233799 containerd[1645]: time="2026-01-14T01:24:35.233722539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:24:35.235057 kubelet[2962]: E0114 01:24:35.235010 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-hcr5n" podUID="611c348f-b209-4156-bf37-8d53c837267b" Jan 14 01:24:35.240318 kubelet[2962]: E0114 01:24:35.240266 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pn2c" podUID="56181817-3b69-45cb-ad6c-2ef729a912ab" Jan 14 01:24:35.245408 kubelet[2962]: E0114 01:24:35.245373 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-hcr5n" podUID="611c348f-b209-4156-bf37-8d53c837267b" Jan 14 01:24:35.253486 
kubelet[2962]: I0114 01:24:35.252254 2962 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-p8d54" podStartSLOduration=65.252236803 podStartE2EDuration="1m5.252236803s" podCreationTimestamp="2026-01-14 01:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:24:35.250471544 +0000 UTC m=+69.790329554" watchObservedRunningTime="2026-01-14 01:24:35.252236803 +0000 UTC m=+69.792094796" Jan 14 01:24:35.351000 audit[5113]: NETFILTER_CFG table=filter:137 family=2 entries=14 op=nft_register_rule pid=5113 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:24:35.351000 audit[5113]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd94679ad0 a2=0 a3=7ffd94679abc items=0 ppid=3108 pid=5113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:35.351000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:24:35.359000 audit[5113]: NETFILTER_CFG table=nat:138 family=2 entries=44 op=nft_register_rule pid=5113 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:24:35.359000 audit[5113]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd94679ad0 a2=0 a3=7ffd94679abc items=0 ppid=3108 pid=5113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:35.359000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:24:35.381000 audit[5116]: NETFILTER_CFG table=filter:139 family=2 
entries=14 op=nft_register_rule pid=5116 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:24:35.381000 audit[5116]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffdb7d87a80 a2=0 a3=7ffdb7d87a6c items=0 ppid=3108 pid=5116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:35.381000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:24:35.397000 audit[5116]: NETFILTER_CFG table=nat:140 family=2 entries=56 op=nft_register_chain pid=5116 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:24:35.397000 audit[5116]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffdb7d87a80 a2=0 a3=7ffdb7d87a6c items=0 ppid=3108 pid=5116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:35.397000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:24:35.561765 containerd[1645]: time="2026-01-14T01:24:35.561597935Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:24:35.563063 containerd[1645]: time="2026-01-14T01:24:35.563012345Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:24:35.563261 containerd[1645]: time="2026-01-14T01:24:35.563042362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" 
Jan 14 01:24:35.563315 kubelet[2962]: E0114 01:24:35.563242 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:24:35.563315 kubelet[2962]: E0114 01:24:35.563293 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:24:35.563669 kubelet[2962]: E0114 01:24:35.563484 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sthtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84b8b5c58c-zdm6z_calico-apiserver(4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:24:35.565276 kubelet[2962]: E0114 01:24:35.565191 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-zdm6z" podUID="4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1" Jan 14 01:24:35.572034 systemd-networkd[1555]: cali4d93f571afa: Gained IPv6LL Jan 14 01:24:35.572899 systemd-networkd[1555]: cali5261baf7d24: Gained IPv6LL Jan 14 01:24:35.699996 systemd-networkd[1555]: cali39d959a5ebe: Gained IPv6LL Jan 14 01:24:35.798016 containerd[1645]: time="2026-01-14T01:24:35.797943119Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cd4c88bf-rxm94,Uid:6af88ae8-33db-47f6-8963-68a47a1d9783,Namespace:calico-system,Attempt:0,}" Jan 14 01:24:35.967510 systemd-networkd[1555]: caliacbfc334431: Link UP Jan 14 01:24:35.969349 systemd-networkd[1555]: caliacbfc334431: Gained carrier Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.857 [INFO][5119] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--aufav.gb1.brightbox.com-k8s-calico--kube--controllers--85cd4c88bf--rxm94-eth0 calico-kube-controllers-85cd4c88bf- calico-system 6af88ae8-33db-47f6-8963-68a47a1d9783 852 0 2026-01-14 01:23:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85cd4c88bf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-aufav.gb1.brightbox.com calico-kube-controllers-85cd4c88bf-rxm94 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliacbfc334431 [] [] }} ContainerID="b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" Namespace="calico-system" Pod="calico-kube-controllers-85cd4c88bf-rxm94" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--kube--controllers--85cd4c88bf--rxm94-" Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.858 [INFO][5119] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" Namespace="calico-system" Pod="calico-kube-controllers-85cd4c88bf-rxm94" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--kube--controllers--85cd4c88bf--rxm94-eth0" Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.905 [INFO][5130] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" HandleID="k8s-pod-network.b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" Workload="srv--aufav.gb1.brightbox.com-k8s-calico--kube--controllers--85cd4c88bf--rxm94-eth0" Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.906 [INFO][5130] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" HandleID="k8s-pod-network.b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" Workload="srv--aufav.gb1.brightbox.com-k8s-calico--kube--controllers--85cd4c88bf--rxm94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-aufav.gb1.brightbox.com", "pod":"calico-kube-controllers-85cd4c88bf-rxm94", "timestamp":"2026-01-14 01:24:35.905765967 +0000 UTC"}, Hostname:"srv-aufav.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.906 [INFO][5130] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.906 [INFO][5130] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.906 [INFO][5130] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-aufav.gb1.brightbox.com' Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.916 [INFO][5130] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.923 [INFO][5130] ipam/ipam.go 394: Looking up existing affinities for host host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.932 [INFO][5130] ipam/ipam.go 511: Trying affinity for 192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.935 [INFO][5130] ipam/ipam.go 158: Attempting to load block cidr=192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.939 [INFO][5130] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.126.64/26 host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.939 [INFO][5130] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.126.64/26 handle="k8s-pod-network.b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.943 [INFO][5130] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.949 [INFO][5130] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.126.64/26 handle="k8s-pod-network.b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.958 [INFO][5130] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.126.72/26] block=192.168.126.64/26 handle="k8s-pod-network.b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.958 [INFO][5130] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.126.72/26] handle="k8s-pod-network.b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" host="srv-aufav.gb1.brightbox.com" Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.958 [INFO][5130] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:24:35.989922 containerd[1645]: 2026-01-14 01:24:35.959 [INFO][5130] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.126.72/26] IPv6=[] ContainerID="b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" HandleID="k8s-pod-network.b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" Workload="srv--aufav.gb1.brightbox.com-k8s-calico--kube--controllers--85cd4c88bf--rxm94-eth0" Jan 14 01:24:35.993160 containerd[1645]: 2026-01-14 01:24:35.961 [INFO][5119] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" Namespace="calico-system" Pod="calico-kube-controllers-85cd4c88bf-rxm94" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--kube--controllers--85cd4c88bf--rxm94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aufav.gb1.brightbox.com-k8s-calico--kube--controllers--85cd4c88bf--rxm94-eth0", GenerateName:"calico-kube-controllers-85cd4c88bf-", Namespace:"calico-system", SelfLink:"", UID:"6af88ae8-33db-47f6-8963-68a47a1d9783", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 23, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85cd4c88bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aufav.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-85cd4c88bf-rxm94", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliacbfc334431", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:24:35.993160 containerd[1645]: 2026-01-14 01:24:35.962 [INFO][5119] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.126.72/32] ContainerID="b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" Namespace="calico-system" Pod="calico-kube-controllers-85cd4c88bf-rxm94" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--kube--controllers--85cd4c88bf--rxm94-eth0" Jan 14 01:24:35.993160 containerd[1645]: 2026-01-14 01:24:35.962 [INFO][5119] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliacbfc334431 ContainerID="b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" Namespace="calico-system" Pod="calico-kube-controllers-85cd4c88bf-rxm94" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--kube--controllers--85cd4c88bf--rxm94-eth0" Jan 14 01:24:35.993160 containerd[1645]: 2026-01-14 01:24:35.965 [INFO][5119] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" Namespace="calico-system" Pod="calico-kube-controllers-85cd4c88bf-rxm94" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--kube--controllers--85cd4c88bf--rxm94-eth0" Jan 14 01:24:35.993160 containerd[1645]: 2026-01-14 01:24:35.966 [INFO][5119] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" Namespace="calico-system" Pod="calico-kube-controllers-85cd4c88bf-rxm94" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--kube--controllers--85cd4c88bf--rxm94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--aufav.gb1.brightbox.com-k8s-calico--kube--controllers--85cd4c88bf--rxm94-eth0", GenerateName:"calico-kube-controllers-85cd4c88bf-", Namespace:"calico-system", SelfLink:"", UID:"6af88ae8-33db-47f6-8963-68a47a1d9783", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 23, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85cd4c88bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-aufav.gb1.brightbox.com", ContainerID:"b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d", Pod:"calico-kube-controllers-85cd4c88bf-rxm94", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.72/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliacbfc334431", MAC:"92:bd:54:13:2f:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:24:35.993160 containerd[1645]: 2026-01-14 01:24:35.981 [INFO][5119] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" Namespace="calico-system" Pod="calico-kube-controllers-85cd4c88bf-rxm94" WorkloadEndpoint="srv--aufav.gb1.brightbox.com-k8s-calico--kube--controllers--85cd4c88bf--rxm94-eth0" Jan 14 01:24:36.026000 audit[5145]: NETFILTER_CFG table=filter:141 family=2 entries=56 op=nft_register_chain pid=5145 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:24:36.026000 audit[5145]: SYSCALL arch=c000003e syscall=46 success=yes exit=25500 a0=3 a1=7ffe98684fc0 a2=0 a3=7ffe98684fac items=0 ppid=4283 pid=5145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:36.026000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:24:36.038209 containerd[1645]: time="2026-01-14T01:24:36.038034567Z" level=info msg="connecting to shim b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d" address="unix:///run/containerd/s/a0d2913bbb24d39e7548000707a59b6a48f8c358cfda6ad96f4150cf7c93e759" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:24:36.079059 systemd[1]: Started cri-containerd-b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d.scope - libcontainer container b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d. 
Jan 14 01:24:36.083994 systemd-networkd[1555]: cali679c5f5f305: Gained IPv6LL Jan 14 01:24:36.100000 audit: BPF prog-id=255 op=LOAD Jan 14 01:24:36.101000 audit: BPF prog-id=256 op=LOAD Jan 14 01:24:36.101000 audit[5165]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5154 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:36.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646165386539623730636337636130373864643233656137346131 Jan 14 01:24:36.101000 audit: BPF prog-id=256 op=UNLOAD Jan 14 01:24:36.101000 audit[5165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5154 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:36.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646165386539623730636337636130373864643233656137346131 Jan 14 01:24:36.101000 audit: BPF prog-id=257 op=LOAD Jan 14 01:24:36.101000 audit[5165]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5154 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:36.101000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646165386539623730636337636130373864643233656137346131 Jan 14 01:24:36.102000 audit: BPF prog-id=258 op=LOAD Jan 14 01:24:36.102000 audit[5165]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5154 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:36.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646165386539623730636337636130373864643233656137346131 Jan 14 01:24:36.102000 audit: BPF prog-id=258 op=UNLOAD Jan 14 01:24:36.102000 audit[5165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5154 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:36.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646165386539623730636337636130373864643233656137346131 Jan 14 01:24:36.103000 audit: BPF prog-id=257 op=UNLOAD Jan 14 01:24:36.103000 audit[5165]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5154 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:24:36.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646165386539623730636337636130373864643233656137346131 Jan 14 01:24:36.103000 audit: BPF prog-id=259 op=LOAD Jan 14 01:24:36.103000 audit[5165]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5154 pid=5165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:36.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646165386539623730636337636130373864643233656137346131 Jan 14 01:24:36.158031 containerd[1645]: time="2026-01-14T01:24:36.157976713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cd4c88bf-rxm94,Uid:6af88ae8-33db-47f6-8963-68a47a1d9783,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7dae8e9b70cc7ca078dd23ea74a1dd70f74782877a08c2c1a96645166bccd7d\"" Jan 14 01:24:36.161052 containerd[1645]: time="2026-01-14T01:24:36.160864439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:24:36.254150 kubelet[2962]: E0114 01:24:36.253577 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-84b8b5c58c-hcr5n" podUID="611c348f-b209-4156-bf37-8d53c837267b" Jan 14 01:24:36.256520 kubelet[2962]: E0114 01:24:36.254072 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pn2c" podUID="56181817-3b69-45cb-ad6c-2ef729a912ab" Jan 14 01:24:36.256520 kubelet[2962]: E0114 01:24:36.254581 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-zdm6z" podUID="4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1" Jan 14 01:24:36.326000 audit[5192]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=5192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:24:36.326000 audit[5192]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe8fc6cde0 a2=0 a3=7ffe8fc6cdcc items=0 ppid=3108 pid=5192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:36.326000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:24:36.333000 audit[5192]: NETFILTER_CFG 
table=nat:143 family=2 entries=20 op=nft_register_rule pid=5192 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:24:36.333000 audit[5192]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe8fc6cde0 a2=0 a3=7ffe8fc6cdcc items=0 ppid=3108 pid=5192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:36.333000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:24:36.472199 containerd[1645]: time="2026-01-14T01:24:36.472073477Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:24:36.473630 containerd[1645]: time="2026-01-14T01:24:36.473559534Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:24:36.474576 containerd[1645]: time="2026-01-14T01:24:36.473699950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:24:36.474691 kubelet[2962]: E0114 01:24:36.473999 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:24:36.474691 kubelet[2962]: E0114 01:24:36.474062 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:24:36.474691 kubelet[2962]: E0114 01:24:36.474341 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96jrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85cd4c88bf-rxm94_calico-system(6af88ae8-33db-47f6-8963-68a47a1d9783): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:24:36.475954 kubelet[2962]: E0114 01:24:36.475665 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cd4c88bf-rxm94" podUID="6af88ae8-33db-47f6-8963-68a47a1d9783" Jan 14 01:24:37.255135 kubelet[2962]: E0114 01:24:37.254925 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cd4c88bf-rxm94" podUID="6af88ae8-33db-47f6-8963-68a47a1d9783" Jan 14 01:24:37.797621 containerd[1645]: time="2026-01-14T01:24:37.797548825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:24:38.004307 systemd-networkd[1555]: caliacbfc334431: Gained IPv6LL Jan 14 01:24:38.117480 containerd[1645]: time="2026-01-14T01:24:38.117170888Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:24:38.118685 containerd[1645]: time="2026-01-14T01:24:38.118546095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:24:38.118821 containerd[1645]: time="2026-01-14T01:24:38.118624812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:24:38.119008 kubelet[2962]: E0114 01:24:38.118959 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:24:38.119099 kubelet[2962]: E0114 01:24:38.119037 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:24:38.119718 kubelet[2962]: E0114 01:24:38.119220 2962 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:419b9865beaf4f708afcaeed39d95459,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gw54k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f9465cb95-lvhtr_calico-system(abdf21ba-b557-4e34-8da0-caa287f29fb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:24:38.122018 containerd[1645]: time="2026-01-14T01:24:38.121914726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 
14 01:24:38.423203 containerd[1645]: time="2026-01-14T01:24:38.422948372Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:24:38.424268 containerd[1645]: time="2026-01-14T01:24:38.424133014Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:24:38.424268 containerd[1645]: time="2026-01-14T01:24:38.424247341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:24:38.425005 kubelet[2962]: E0114 01:24:38.424612 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:24:38.425005 kubelet[2962]: E0114 01:24:38.424695 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:24:38.425005 kubelet[2962]: E0114 01:24:38.424914 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gw54k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f9465cb95-lvhtr_calico-system(abdf21ba-b557-4e34-8da0-caa287f29fb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:24:38.426815 kubelet[2962]: E0114 01:24:38.426705 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9465cb95-lvhtr" podUID="abdf21ba-b557-4e34-8da0-caa287f29fb9" Jan 14 01:24:43.406370 update_engine[1621]: I20260114 01:24:43.404855 1621 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 01:24:43.406370 update_engine[1621]: I20260114 01:24:43.404999 1621 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 01:24:43.406370 update_engine[1621]: I20260114 01:24:43.405589 1621 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 01:24:43.407973 update_engine[1621]: E20260114 01:24:43.407908 1621 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (DNS server returned answer with no data) Jan 14 01:24:43.408326 update_engine[1621]: I20260114 01:24:43.408221 1621 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 14 01:24:48.796994 containerd[1645]: time="2026-01-14T01:24:48.796919805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:24:49.111049 containerd[1645]: time="2026-01-14T01:24:49.110582898Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:24:49.112988 containerd[1645]: time="2026-01-14T01:24:49.112901611Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:24:49.113210 containerd[1645]: time="2026-01-14T01:24:49.112953414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:24:49.116033 kubelet[2962]: E0114 01:24:49.115960 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:24:49.116585 kubelet[2962]: E0114 01:24:49.116054 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:24:49.118293 kubelet[2962]: E0114 01:24:49.117149 2962 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xh6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84b8b5c58c-hcr5n_calico-apiserver(611c348f-b209-4156-bf37-8d53c837267b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:24:49.118760 kubelet[2962]: E0114 01:24:49.118723 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-hcr5n" podUID="611c348f-b209-4156-bf37-8d53c837267b" Jan 14 01:24:49.119584 containerd[1645]: time="2026-01-14T01:24:49.119099430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:24:49.437340 containerd[1645]: time="2026-01-14T01:24:49.437198377Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:24:49.440002 containerd[1645]: time="2026-01-14T01:24:49.439937658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:24:49.440294 containerd[1645]: time="2026-01-14T01:24:49.440109236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:24:49.440577 kubelet[2962]: E0114 01:24:49.440501 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:24:49.440809 kubelet[2962]: E0114 01:24:49.440760 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:24:49.441235 kubelet[2962]: E0114 01:24:49.441124 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8nhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vqp7q_calico-system(1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:24:49.445745 containerd[1645]: time="2026-01-14T01:24:49.445637225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:24:49.755408 containerd[1645]: time="2026-01-14T01:24:49.755164201Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:24:49.757490 containerd[1645]: time="2026-01-14T01:24:49.757269498Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:24:49.757490 containerd[1645]: time="2026-01-14T01:24:49.757419094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:24:49.757759 kubelet[2962]: E0114 01:24:49.757660 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:24:49.757872 kubelet[2962]: E0114 01:24:49.757771 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:24:49.760036 kubelet[2962]: E0114 01:24:49.757989 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8nhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vqp7q_calico-system(1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:24:49.760360 kubelet[2962]: E0114 01:24:49.760316 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" Jan 14 01:24:49.799919 containerd[1645]: time="2026-01-14T01:24:49.797968772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:24:50.114961 containerd[1645]: time="2026-01-14T01:24:50.113644672Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:24:50.117031 containerd[1645]: time="2026-01-14T01:24:50.116842508Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:24:50.117031 containerd[1645]: time="2026-01-14T01:24:50.116980927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:24:50.118837 kubelet[2962]: E0114 01:24:50.117354 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:24:50.118837 kubelet[2962]: E0114 01:24:50.117438 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:24:50.118837 kubelet[2962]: E0114 01:24:50.117675 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rk45d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{Probe
Handler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7pn2c_calico-system(56181817-3b69-45cb-ad6c-2ef729a912ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:24:50.120911 kubelet[2962]: E0114 01:24:50.120867 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pn2c" podUID="56181817-3b69-45cb-ad6c-2ef729a912ab" Jan 14 01:24:51.802862 containerd[1645]: 
time="2026-01-14T01:24:51.802158409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:24:52.119595 containerd[1645]: time="2026-01-14T01:24:52.119282582Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:24:52.121824 containerd[1645]: time="2026-01-14T01:24:52.121595522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:24:52.121824 containerd[1645]: time="2026-01-14T01:24:52.121671578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:24:52.122121 kubelet[2962]: E0114 01:24:52.122030 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:24:52.124160 kubelet[2962]: E0114 01:24:52.122125 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:24:52.124160 kubelet[2962]: E0114 01:24:52.122510 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96jrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85cd4c88bf-rxm94_calico-system(6af88ae8-33db-47f6-8963-68a47a1d9783): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:24:52.124160 kubelet[2962]: E0114 01:24:52.123948 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cd4c88bf-rxm94" podUID="6af88ae8-33db-47f6-8963-68a47a1d9783" Jan 14 01:24:52.125175 containerd[1645]: time="2026-01-14T01:24:52.123168721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:24:52.437549 containerd[1645]: time="2026-01-14T01:24:52.437475670Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:24:52.438763 containerd[1645]: time="2026-01-14T01:24:52.438666145Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:24:52.439069 containerd[1645]: time="2026-01-14T01:24:52.438730452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:24:52.439279 kubelet[2962]: E0114 01:24:52.439171 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:24:52.439369 kubelet[2962]: E0114 01:24:52.439316 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:24:52.440332 kubelet[2962]: E0114 01:24:52.440216 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sthtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84b8b5c58c-zdm6z_calico-apiserver(4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:24:52.441555 kubelet[2962]: E0114 01:24:52.441518 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-zdm6z" podUID="4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1" Jan 14 01:24:52.798540 kubelet[2962]: E0114 01:24:52.797953 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9465cb95-lvhtr" podUID="abdf21ba-b557-4e34-8da0-caa287f29fb9" Jan 14 01:24:53.401555 update_engine[1621]: I20260114 01:24:53.401401 1621 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 01:24:53.403239 update_engine[1621]: I20260114 01:24:53.402428 1621 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 01:24:53.403239 update_engine[1621]: I20260114 01:24:53.403177 1621 libcurl_http_fetcher.cc:449] Setting up 
timeout source: 1 seconds. Jan 14 01:24:53.403762 update_engine[1621]: E20260114 01:24:53.403728 1621 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (DNS server returned answer with no data) Jan 14 01:24:53.403986 update_engine[1621]: I20260114 01:24:53.403954 1621 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 14 01:24:59.008163 systemd[1]: Started sshd@9-10.230.32.214:22-68.220.241.50:38786.service - OpenSSH per-connection server daemon (68.220.241.50:38786). Jan 14 01:24:59.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.32.214:22-68.220.241.50:38786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:24:59.015075 kernel: kauditd_printk_skb: 239 callbacks suppressed Jan 14 01:24:59.015177 kernel: audit: type=1130 audit(1768353899.008:746): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.32.214:22-68.220.241.50:38786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:24:59.652578 sshd[5237]: Accepted publickey for core from 68.220.241.50 port 38786 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y Jan 14 01:24:59.651000 audit[5237]: USER_ACCT pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:24:59.665339 kernel: audit: type=1101 audit(1768353899.651:747): pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:24:59.667878 sshd-session[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:24:59.663000 audit[5237]: CRED_ACQ pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:24:59.680520 kernel: audit: type=1103 audit(1768353899.663:748): pid=5237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:24:59.689592 kernel: audit: type=1006 audit(1768353899.663:749): pid=5237 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 01:24:59.663000 audit[5237]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff65194aa0 a2=3 a3=0 items=0 ppid=1 pid=5237 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:59.693580 systemd-logind[1619]: New session 13 of user core. Jan 14 01:24:59.699172 kernel: audit: type=1300 audit(1768353899.663:749): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff65194aa0 a2=3 a3=0 items=0 ppid=1 pid=5237 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:24:59.663000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:24:59.703102 kernel: audit: type=1327 audit(1768353899.663:749): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:24:59.704246 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 01:24:59.714000 audit[5237]: USER_START pid=5237 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:24:59.722813 kernel: audit: type=1105 audit(1768353899.714:750): pid=5237 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:24:59.722000 audit[5242]: CRED_ACQ pid=5242 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:24:59.728812 kernel: audit: type=1103 audit(1768353899.722:751): 
pid=5242 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:00.694023 sshd[5242]: Connection closed by 68.220.241.50 port 38786 Jan 14 01:25:00.696147 sshd-session[5237]: pam_unix(sshd:session): session closed for user core Jan 14 01:25:00.702000 audit[5237]: USER_END pid=5237 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:00.715850 kernel: audit: type=1106 audit(1768353900.702:752): pid=5237 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:00.732411 systemd[1]: sshd@9-10.230.32.214:22-68.220.241.50:38786.service: Deactivated successfully. Jan 14 01:25:00.703000 audit[5237]: CRED_DISP pid=5237 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:00.741763 systemd[1]: session-13.scope: Deactivated successfully. 
Jan 14 01:25:00.744915 kernel: audit: type=1104 audit(1768353900.703:753): pid=5237 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:00.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.32.214:22-68.220.241.50:38786 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:00.751862 systemd-logind[1619]: Session 13 logged out. Waiting for processes to exit. Jan 14 01:25:00.754816 systemd-logind[1619]: Removed session 13. Jan 14 01:25:01.799689 kubelet[2962]: E0114 01:25:01.799558 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" Jan 14 01:25:01.800971 kubelet[2962]: E0114 01:25:01.800119 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pn2c" podUID="56181817-3b69-45cb-ad6c-2ef729a912ab" Jan 14 01:25:02.796013 kubelet[2962]: E0114 01:25:02.795931 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-hcr5n" podUID="611c348f-b209-4156-bf37-8d53c837267b" Jan 14 01:25:03.400722 update_engine[1621]: I20260114 01:25:03.400563 1621 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 01:25:03.401461 update_engine[1621]: I20260114 01:25:03.400828 1621 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 01:25:03.403082 update_engine[1621]: I20260114 01:25:03.401512 1621 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 01:25:03.403322 update_engine[1621]: E20260114 01:25:03.403208 1621 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (DNS server returned answer with no data) Jan 14 01:25:03.403322 update_engine[1621]: I20260114 01:25:03.403304 1621 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 01:25:03.403457 update_engine[1621]: I20260114 01:25:03.403334 1621 omaha_request_action.cc:617] Omaha request response: Jan 14 01:25:03.403511 update_engine[1621]: E20260114 01:25:03.403476 1621 omaha_request_action.cc:636] Omaha request network transfer failed. 
Jan 14 01:25:03.418996 update_engine[1621]: I20260114 01:25:03.418905 1621 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 14 01:25:03.418996 update_engine[1621]: I20260114 01:25:03.418973 1621 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 01:25:03.418996 update_engine[1621]: I20260114 01:25:03.418991 1621 update_attempter.cc:306] Processing Done. Jan 14 01:25:03.419302 update_engine[1621]: E20260114 01:25:03.419036 1621 update_attempter.cc:619] Update failed. Jan 14 01:25:03.419302 update_engine[1621]: I20260114 01:25:03.419061 1621 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 14 01:25:03.419302 update_engine[1621]: I20260114 01:25:03.419073 1621 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 14 01:25:03.419302 update_engine[1621]: I20260114 01:25:03.419085 1621 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 14 01:25:03.420865 update_engine[1621]: I20260114 01:25:03.419517 1621 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 01:25:03.420865 update_engine[1621]: I20260114 01:25:03.419600 1621 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 01:25:03.420865 update_engine[1621]: I20260114 01:25:03.419618 1621 omaha_request_action.cc:272] Request: Jan 14 01:25:03.420865 update_engine[1621]: Jan 14 01:25:03.420865 update_engine[1621]: Jan 14 01:25:03.420865 update_engine[1621]: Jan 14 01:25:03.420865 update_engine[1621]: Jan 14 01:25:03.420865 update_engine[1621]: Jan 14 01:25:03.420865 update_engine[1621]: Jan 14 01:25:03.420865 update_engine[1621]: I20260114 01:25:03.419631 1621 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 01:25:03.420865 update_engine[1621]: I20260114 01:25:03.419700 1621 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 01:25:03.420865 update_engine[1621]: I20260114 01:25:03.420279 1621 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 01:25:03.422174 update_engine[1621]: E20260114 01:25:03.422130 1621 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (DNS server returned answer with no data) Jan 14 01:25:03.422298 update_engine[1621]: I20260114 01:25:03.422230 1621 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 01:25:03.422298 update_engine[1621]: I20260114 01:25:03.422256 1621 omaha_request_action.cc:617] Omaha request response: Jan 14 01:25:03.422298 update_engine[1621]: I20260114 01:25:03.422271 1621 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 01:25:03.422298 update_engine[1621]: I20260114 01:25:03.422282 1621 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 01:25:03.422298 update_engine[1621]: I20260114 01:25:03.422292 1621 update_attempter.cc:306] Processing Done. Jan 14 01:25:03.422549 update_engine[1621]: I20260114 01:25:03.422303 1621 update_attempter.cc:310] Error event sent. 
Jan 14 01:25:03.422549 update_engine[1621]: I20260114 01:25:03.422319 1621 update_check_scheduler.cc:74] Next update check in 40m58s Jan 14 01:25:03.423076 locksmithd[1666]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 14 01:25:03.425212 locksmithd[1666]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 14 01:25:04.796477 containerd[1645]: time="2026-01-14T01:25:04.796369995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:25:05.107643 containerd[1645]: time="2026-01-14T01:25:05.106849278Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:25:05.110411 containerd[1645]: time="2026-01-14T01:25:05.110363999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:25:05.110875 containerd[1645]: time="2026-01-14T01:25:05.110825906Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:25:05.111774 kubelet[2962]: E0114 01:25:05.111330 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:25:05.112492 kubelet[2962]: E0114 01:25:05.112417 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:25:05.113867 kubelet[2962]: 
E0114 01:25:05.112826 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:419b9865beaf4f708afcaeed39d95459,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gw54k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f9465cb95-lvhtr_calico-system(abdf21ba-b557-4e34-8da0-caa287f29fb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:25:05.118402 containerd[1645]: time="2026-01-14T01:25:05.118264179Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:25:05.425820 containerd[1645]: time="2026-01-14T01:25:05.425726474Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:25:05.427288 containerd[1645]: time="2026-01-14T01:25:05.427193778Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:25:05.427288 containerd[1645]: time="2026-01-14T01:25:05.427234622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:25:05.428728 kubelet[2962]: E0114 01:25:05.427808 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:25:05.428728 kubelet[2962]: E0114 01:25:05.427894 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:25:05.428728 kubelet[2962]: E0114 01:25:05.428128 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gw54k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f9465cb95-lvhtr_calico-system(abdf21ba-b557-4e34-8da0-caa287f29fb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:25:05.429919 kubelet[2962]: E0114 01:25:05.429840 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9465cb95-lvhtr" podUID="abdf21ba-b557-4e34-8da0-caa287f29fb9" Jan 14 01:25:05.821998 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:25:05.822200 kernel: audit: type=1130 audit(1768353905.804:755): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.32.214:22-68.220.241.50:56586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:05.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.32.214:22-68.220.241.50:56586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:05.805472 systemd[1]: Started sshd@10-10.230.32.214:22-68.220.241.50:56586.service - OpenSSH per-connection server daemon (68.220.241.50:56586). 
Jan 14 01:25:06.396000 audit[5265]: USER_ACCT pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:06.405595 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:25:06.407613 sshd[5265]: Accepted publickey for core from 68.220.241.50 port 56586 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y Jan 14 01:25:06.410586 kernel: audit: type=1101 audit(1768353906.396:756): pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:06.410663 kernel: audit: type=1103 audit(1768353906.403:757): pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:06.403000 audit[5265]: CRED_ACQ pid=5265 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:06.418823 kernel: audit: type=1006 audit(1768353906.403:758): pid=5265 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 14 01:25:06.419945 kernel: audit: type=1300 audit(1768353906.403:758): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecfd54dc0 a2=3 a3=0 items=0 ppid=1 pid=5265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:06.403000 audit[5265]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffecfd54dc0 a2=3 a3=0 items=0 ppid=1 pid=5265 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:06.428189 systemd-logind[1619]: New session 14 of user core. Jan 14 01:25:06.403000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:06.432811 kernel: audit: type=1327 audit(1768353906.403:758): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:06.435614 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 14 01:25:06.445000 audit[5265]: USER_START pid=5265 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:06.452823 kernel: audit: type=1105 audit(1768353906.445:759): pid=5265 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:06.452000 audit[5269]: CRED_ACQ pid=5269 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:06.458828 kernel: audit: type=1103 audit(1768353906.452:760): pid=5269 uid=0 auid=500 ses=14 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:06.845388 kubelet[2962]: E0114 01:25:06.844253 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cd4c88bf-rxm94" podUID="6af88ae8-33db-47f6-8963-68a47a1d9783" Jan 14 01:25:07.008812 sshd[5269]: Connection closed by 68.220.241.50 port 56586 Jan 14 01:25:07.009719 sshd-session[5265]: pam_unix(sshd:session): session closed for user core Jan 14 01:25:07.015000 audit[5265]: USER_END pid=5265 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:07.025578 systemd[1]: sshd@10-10.230.32.214:22-68.220.241.50:56586.service: Deactivated successfully. Jan 14 01:25:07.028829 kernel: audit: type=1106 audit(1768353907.015:761): pid=5265 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:07.031242 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 14 01:25:07.016000 audit[5265]: CRED_DISP pid=5265 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:07.039820 kernel: audit: type=1104 audit(1768353907.016:762): pid=5265 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:07.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.32.214:22-68.220.241.50:56586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:07.040510 systemd-logind[1619]: Session 14 logged out. Waiting for processes to exit. Jan 14 01:25:07.043513 systemd-logind[1619]: Removed session 14. Jan 14 01:25:07.798711 kubelet[2962]: E0114 01:25:07.798510 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-zdm6z" podUID="4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1" Jan 14 01:25:12.130443 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:25:12.130716 kernel: audit: type=1130 audit(1768353912.118:764): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.32.214:22-68.220.241.50:56596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:25:12.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.32.214:22-68.220.241.50:56596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:12.120328 systemd[1]: Started sshd@11-10.230.32.214:22-68.220.241.50:56596.service - OpenSSH per-connection server daemon (68.220.241.50:56596). Jan 14 01:25:12.702000 audit[5281]: USER_ACCT pid=5281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:12.711706 sshd[5281]: Accepted publickey for core from 68.220.241.50 port 56596 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y Jan 14 01:25:12.712587 kernel: audit: type=1101 audit(1768353912.702:765): pid=5281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:12.714000 audit[5281]: CRED_ACQ pid=5281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:12.723806 kernel: audit: type=1103 audit(1768353912.714:766): pid=5281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:12.719819 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 
01:25:12.727815 kernel: audit: type=1006 audit(1768353912.714:767): pid=5281 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 14 01:25:12.714000 audit[5281]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa0fba730 a2=3 a3=0 items=0 ppid=1 pid=5281 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:12.735861 kernel: audit: type=1300 audit(1768353912.714:767): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffa0fba730 a2=3 a3=0 items=0 ppid=1 pid=5281 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:12.714000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:12.739242 kernel: audit: type=1327 audit(1768353912.714:767): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:12.747331 systemd-logind[1619]: New session 15 of user core. Jan 14 01:25:12.756091 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 14 01:25:12.763000 audit[5281]: USER_START pid=5281 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:12.772813 kernel: audit: type=1105 audit(1768353912.763:768): pid=5281 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:12.771000 audit[5285]: CRED_ACQ pid=5285 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:12.779899 kernel: audit: type=1103 audit(1768353912.771:769): pid=5285 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:12.796863 containerd[1645]: time="2026-01-14T01:25:12.796711895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:25:13.111558 containerd[1645]: time="2026-01-14T01:25:13.111353728Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:25:13.114982 containerd[1645]: time="2026-01-14T01:25:13.114870232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:25:13.116184 containerd[1645]: time="2026-01-14T01:25:13.114940978Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:25:13.116253 kubelet[2962]: E0114 01:25:13.115318 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:25:13.116253 kubelet[2962]: E0114 01:25:13.115403 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:25:13.116253 kubelet[2962]: E0114 01:25:13.115655 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rk45d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7pn2c_calico-system(56181817-3b69-45cb-ad6c-2ef729a912ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:25:13.117210 kubelet[2962]: E0114 01:25:13.117165 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pn2c" podUID="56181817-3b69-45cb-ad6c-2ef729a912ab" Jan 14 01:25:13.132998 sshd[5285]: Connection closed by 68.220.241.50 port 56596 Jan 14 01:25:13.136059 sshd-session[5281]: pam_unix(sshd:session): session closed for user core Jan 14 01:25:13.147808 kernel: audit: type=1106 audit(1768353913.137:770): pid=5281 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:13.137000 audit[5281]: USER_END pid=5281 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:13.152167 systemd[1]: sshd@11-10.230.32.214:22-68.220.241.50:56596.service: Deactivated successfully. Jan 14 01:25:13.153369 systemd-logind[1619]: Session 15 logged out. Waiting for processes to exit. Jan 14 01:25:13.138000 audit[5281]: CRED_DISP pid=5281 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:13.162888 kernel: audit: type=1104 audit(1768353913.138:771): pid=5281 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:13.163902 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 01:25:13.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.32.214:22-68.220.241.50:56596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:13.168323 systemd-logind[1619]: Removed session 15. 
Jan 14 01:25:13.797754 containerd[1645]: time="2026-01-14T01:25:13.797673736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:25:14.104827 containerd[1645]: time="2026-01-14T01:25:14.104384995Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:25:14.105640 containerd[1645]: time="2026-01-14T01:25:14.105502023Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:25:14.105640 containerd[1645]: time="2026-01-14T01:25:14.105565893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:25:14.105977 kubelet[2962]: E0114 01:25:14.105870 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:25:14.105977 kubelet[2962]: E0114 01:25:14.105956 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:25:14.106750 kubelet[2962]: E0114 01:25:14.106199 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8nhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vqp7q_calico-system(1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:25:14.115143 containerd[1645]: time="2026-01-14T01:25:14.115097650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:25:14.457458 containerd[1645]: time="2026-01-14T01:25:14.457381907Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:25:14.459431 containerd[1645]: time="2026-01-14T01:25:14.459082072Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:25:14.459578 containerd[1645]: time="2026-01-14T01:25:14.459548196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:25:14.460414 kubelet[2962]: E0114 01:25:14.459795 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:25:14.460414 kubelet[2962]: E0114 01:25:14.459860 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:25:14.460414 kubelet[2962]: E0114 01:25:14.460042 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8nhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vqp7q_calico-system(1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:25:14.461902 kubelet[2962]: E0114 01:25:14.461831 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" Jan 14 01:25:16.797113 kubelet[2962]: E0114 01:25:16.797036 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9465cb95-lvhtr" podUID="abdf21ba-b557-4e34-8da0-caa287f29fb9" Jan 14 01:25:17.798840 containerd[1645]: time="2026-01-14T01:25:17.797828088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:25:18.104847 containerd[1645]: 
time="2026-01-14T01:25:18.104573371Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:25:18.105884 containerd[1645]: time="2026-01-14T01:25:18.105823942Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:25:18.105990 containerd[1645]: time="2026-01-14T01:25:18.105866269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:25:18.106247 kubelet[2962]: E0114 01:25:18.106173 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:25:18.106728 kubelet[2962]: E0114 01:25:18.106253 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:25:18.106728 kubelet[2962]: E0114 01:25:18.106430 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xh6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84b8b5c58c-hcr5n_calico-apiserver(611c348f-b209-4156-bf37-8d53c837267b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:25:18.108355 kubelet[2962]: E0114 01:25:18.108299 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-hcr5n" podUID="611c348f-b209-4156-bf37-8d53c837267b" Jan 14 01:25:18.242641 systemd[1]: Started sshd@12-10.230.32.214:22-68.220.241.50:52352.service - OpenSSH per-connection server daemon (68.220.241.50:52352). Jan 14 01:25:18.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.32.214:22-68.220.241.50:52352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:18.245810 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:25:18.245890 kernel: audit: type=1130 audit(1768353918.242:773): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.32.214:22-68.220.241.50:52352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:25:18.796000 audit[5300]: USER_ACCT pid=5300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:18.803422 sshd[5300]: Accepted publickey for core from 68.220.241.50 port 52352 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y Jan 14 01:25:18.809483 kernel: audit: type=1101 audit(1768353918.796:774): pid=5300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:18.813325 containerd[1645]: time="2026-01-14T01:25:18.812617050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:25:18.809000 audit[5300]: CRED_ACQ pid=5300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:18.814594 sshd-session[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:25:18.818961 kernel: audit: type=1103 audit(1768353918.809:775): pid=5300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:18.828277 kernel: audit: type=1006 audit(1768353918.809:776): pid=5300 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 14 01:25:18.809000 audit[5300]: SYSCALL arch=c000003e syscall=1 success=yes 
exit=3 a0=8 a1=7ffc5da62d60 a2=3 a3=0 items=0 ppid=1 pid=5300 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:18.834620 systemd-logind[1619]: New session 16 of user core. Jan 14 01:25:18.836687 kernel: audit: type=1300 audit(1768353918.809:776): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc5da62d60 a2=3 a3=0 items=0 ppid=1 pid=5300 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:18.809000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:18.839814 kernel: audit: type=1327 audit(1768353918.809:776): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:18.841093 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 14 01:25:18.854000 audit[5300]: USER_START pid=5300 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:18.861870 kernel: audit: type=1105 audit(1768353918.854:777): pid=5300 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:18.864000 audit[5306]: CRED_ACQ pid=5306 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:18.870814 kernel: audit: type=1103 audit(1768353918.864:778): pid=5306 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:19.134419 containerd[1645]: time="2026-01-14T01:25:19.134357362Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:25:19.136921 containerd[1645]: time="2026-01-14T01:25:19.136870005Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:25:19.137092 containerd[1645]: time="2026-01-14T01:25:19.136989979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:25:19.137748 kubelet[2962]: E0114 01:25:19.137319 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:25:19.137748 kubelet[2962]: E0114 01:25:19.137401 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:25:19.139291 kubelet[2962]: E0114 01:25:19.137617 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96jrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85cd4c88bf-rxm94_calico-system(6af88ae8-33db-47f6-8963-68a47a1d9783): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:25:19.140780 kubelet[2962]: E0114 01:25:19.140737 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cd4c88bf-rxm94" podUID="6af88ae8-33db-47f6-8963-68a47a1d9783" Jan 14 01:25:19.209833 sshd[5306]: Connection closed by 68.220.241.50 port 52352 Jan 14 01:25:19.208897 sshd-session[5300]: pam_unix(sshd:session): session closed for user core Jan 14 01:25:19.210000 audit[5300]: USER_END pid=5300 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:19.222821 kernel: audit: type=1106 audit(1768353919.210:779): pid=5300 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:19.229083 systemd[1]: sshd@12-10.230.32.214:22-68.220.241.50:52352.service: Deactivated successfully. Jan 14 01:25:19.236194 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 01:25:19.210000 audit[5300]: CRED_DISP pid=5300 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:19.247810 kernel: audit: type=1104 audit(1768353919.210:780): pid=5300 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:19.251014 systemd-logind[1619]: Session 16 logged out. Waiting for processes to exit. Jan 14 01:25:19.256508 systemd-logind[1619]: Removed session 16. Jan 14 01:25:19.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.32.214:22-68.220.241.50:52352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:25:19.311128 systemd[1]: Started sshd@13-10.230.32.214:22-68.220.241.50:52362.service - OpenSSH per-connection server daemon (68.220.241.50:52362). Jan 14 01:25:19.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.32.214:22-68.220.241.50:52362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:19.841000 audit[5319]: USER_ACCT pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:19.842588 sshd[5319]: Accepted publickey for core from 68.220.241.50 port 52362 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y Jan 14 01:25:19.843000 audit[5319]: CRED_ACQ pid=5319 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:19.843000 audit[5319]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe271d0e90 a2=3 a3=0 items=0 ppid=1 pid=5319 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:19.843000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:19.845615 sshd-session[5319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:25:19.860150 systemd-logind[1619]: New session 17 of user core. Jan 14 01:25:19.871124 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 14 01:25:19.879000 audit[5319]: USER_START pid=5319 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:19.883000 audit[5323]: CRED_ACQ pid=5323 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:20.318951 sshd[5323]: Connection closed by 68.220.241.50 port 52362 Jan 14 01:25:20.321115 sshd-session[5319]: pam_unix(sshd:session): session closed for user core Jan 14 01:25:20.324000 audit[5319]: USER_END pid=5319 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:20.324000 audit[5319]: CRED_DISP pid=5319 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:20.329273 systemd[1]: sshd@13-10.230.32.214:22-68.220.241.50:52362.service: Deactivated successfully. Jan 14 01:25:20.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.32.214:22-68.220.241.50:52362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:20.333988 systemd[1]: session-17.scope: Deactivated successfully. 
Jan 14 01:25:20.336581 systemd-logind[1619]: Session 17 logged out. Waiting for processes to exit. Jan 14 01:25:20.338292 systemd-logind[1619]: Removed session 17. Jan 14 01:25:20.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.32.214:22-68.220.241.50:52364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:20.426867 systemd[1]: Started sshd@14-10.230.32.214:22-68.220.241.50:52364.service - OpenSSH per-connection server daemon (68.220.241.50:52364). Jan 14 01:25:20.798239 containerd[1645]: time="2026-01-14T01:25:20.797301271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:25:20.961000 audit[5333]: USER_ACCT pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:20.963816 sshd[5333]: Accepted publickey for core from 68.220.241.50 port 52364 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y Jan 14 01:25:20.963000 audit[5333]: CRED_ACQ pid=5333 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:20.963000 audit[5333]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe10b0b290 a2=3 a3=0 items=0 ppid=1 pid=5333 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:20.963000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:20.968021 sshd-session[5333]: pam_unix(sshd:session): 
session opened for user core(uid=500) by core(uid=0) Jan 14 01:25:20.978219 systemd-logind[1619]: New session 18 of user core. Jan 14 01:25:20.985065 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 14 01:25:20.991000 audit[5333]: USER_START pid=5333 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:20.996000 audit[5337]: CRED_ACQ pid=5337 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:21.112048 containerd[1645]: time="2026-01-14T01:25:21.111856499Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:25:21.113743 containerd[1645]: time="2026-01-14T01:25:21.113245481Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:25:21.113743 containerd[1645]: time="2026-01-14T01:25:21.113366390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:25:21.114095 kubelet[2962]: E0114 01:25:21.114039 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:25:21.114591 kubelet[2962]: E0114 01:25:21.114111 2962 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:25:21.114591 kubelet[2962]: E0114 01:25:21.114302 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sthtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84b8b5c58c-zdm6z_calico-apiserver(4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:25:21.115483 kubelet[2962]: E0114 01:25:21.115444 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-zdm6z" podUID="4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1" Jan 14 01:25:21.366690 sshd[5337]: Connection closed by 68.220.241.50 port 52364 Jan 14 01:25:21.369033 sshd-session[5333]: pam_unix(sshd:session): session closed for user core Jan 14 01:25:21.370000 audit[5333]: USER_END pid=5333 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:21.370000 audit[5333]: CRED_DISP pid=5333 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:21.375483 systemd[1]: sshd@14-10.230.32.214:22-68.220.241.50:52364.service: Deactivated successfully. Jan 14 01:25:21.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.32.214:22-68.220.241.50:52364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:21.380934 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 01:25:21.383465 systemd-logind[1619]: Session 18 logged out. Waiting for processes to exit. Jan 14 01:25:21.387661 systemd-logind[1619]: Removed session 18. 
Jan 14 01:25:24.795299 kubelet[2962]: E0114 01:25:24.794970 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pn2c" podUID="56181817-3b69-45cb-ad6c-2ef729a912ab" Jan 14 01:25:26.486769 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 01:25:26.487064 kernel: audit: type=1130 audit(1768353926.480:800): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.32.214:22-68.220.241.50:36848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:26.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.32.214:22-68.220.241.50:36848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:26.481130 systemd[1]: Started sshd@15-10.230.32.214:22-68.220.241.50:36848.service - OpenSSH per-connection server daemon (68.220.241.50:36848). 
Jan 14 01:25:26.799000 kubelet[2962]: E0114 01:25:26.798499 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" Jan 14 01:25:27.108000 audit[5374]: USER_ACCT pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:27.118721 sshd[5374]: Accepted publickey for core from 68.220.241.50 port 36848 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y Jan 14 01:25:27.119406 kernel: audit: type=1101 audit(1768353927.108:801): pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:27.122000 audit[5374]: CRED_ACQ pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:27.126298 sshd-session[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:25:27.128922 kernel: audit: type=1103 audit(1768353927.122:802): pid=5374 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:27.134842 kernel: audit: type=1006 audit(1768353927.122:803): pid=5374 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 14 01:25:27.122000 audit[5374]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc447beed0 a2=3 a3=0 items=0 ppid=1 pid=5374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:27.147963 kernel: audit: type=1300 audit(1768353927.122:803): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc447beed0 a2=3 a3=0 items=0 ppid=1 pid=5374 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:27.146403 systemd-logind[1619]: New session 19 of user core. Jan 14 01:25:27.122000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:27.151855 kernel: audit: type=1327 audit(1768353927.122:803): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:27.153067 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 14 01:25:27.160000 audit[5374]: USER_START pid=5374 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:27.167828 kernel: audit: type=1105 audit(1768353927.160:804): pid=5374 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:27.167000 audit[5378]: CRED_ACQ pid=5378 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:27.174832 kernel: audit: type=1103 audit(1768353927.167:805): pid=5378 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:27.563688 sshd[5378]: Connection closed by 68.220.241.50 port 36848 Jan 14 01:25:27.564610 sshd-session[5374]: pam_unix(sshd:session): session closed for user core Jan 14 01:25:27.570000 audit[5374]: USER_END pid=5374 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:27.582871 kernel: audit: type=1106 
audit(1768353927.570:806): pid=5374 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:27.585937 systemd[1]: sshd@15-10.230.32.214:22-68.220.241.50:36848.service: Deactivated successfully. Jan 14 01:25:27.570000 audit[5374]: CRED_DISP pid=5374 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:27.595610 kernel: audit: type=1104 audit(1768353927.570:807): pid=5374 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:27.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.32.214:22-68.220.241.50:36848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:27.592521 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 01:25:27.595089 systemd-logind[1619]: Session 19 logged out. Waiting for processes to exit. Jan 14 01:25:27.601477 systemd-logind[1619]: Removed session 19. 
Jan 14 01:25:28.799811 kubelet[2962]: E0114 01:25:28.799590 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9465cb95-lvhtr" podUID="abdf21ba-b557-4e34-8da0-caa287f29fb9" Jan 14 01:25:28.802897 kubelet[2962]: E0114 01:25:28.799590 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-hcr5n" podUID="611c348f-b209-4156-bf37-8d53c837267b" Jan 14 01:25:30.796613 kubelet[2962]: E0114 01:25:30.796478 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cd4c88bf-rxm94" podUID="6af88ae8-33db-47f6-8963-68a47a1d9783" Jan 14 01:25:32.667198 systemd[1]: Started sshd@16-10.230.32.214:22-68.220.241.50:43242.service - OpenSSH per-connection server daemon (68.220.241.50:43242). Jan 14 01:25:32.676105 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:25:32.676207 kernel: audit: type=1130 audit(1768353932.666:809): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.32.214:22-68.220.241.50:43242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:32.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.32.214:22-68.220.241.50:43242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:33.194000 audit[5397]: USER_ACCT pid=5397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:33.196605 sshd[5397]: Accepted publickey for core from 68.220.241.50 port 43242 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y Jan 14 01:25:33.200829 kernel: audit: type=1101 audit(1768353933.194:810): pid=5397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:33.200661 sshd-session[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:25:33.196000 audit[5397]: CRED_ACQ pid=5397 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:33.213604 kernel: audit: type=1103 audit(1768353933.196:811): pid=5397 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:33.213724 kernel: audit: type=1006 audit(1768353933.196:812): pid=5397 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 14 01:25:33.221081 kernel: audit: type=1300 audit(1768353933.196:812): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe39b34a60 a2=3 a3=0 items=0 ppid=1 pid=5397 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:33.196000 audit[5397]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe39b34a60 a2=3 a3=0 items=0 ppid=1 pid=5397 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:33.226359 systemd-logind[1619]: New session 20 of user core. Jan 14 01:25:33.196000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:33.231876 kernel: audit: type=1327 audit(1768353933.196:812): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:33.234148 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 14 01:25:33.241000 audit[5397]: USER_START pid=5397 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:33.251024 kernel: audit: type=1105 audit(1768353933.241:813): pid=5397 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:33.251000 audit[5401]: CRED_ACQ pid=5401 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:33.257886 kernel: audit: type=1103 audit(1768353933.251:814): pid=5401 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:33.630636 sshd[5401]: Connection closed by 68.220.241.50 port 43242 Jan 14 01:25:33.632058 sshd-session[5397]: pam_unix(sshd:session): session closed for user core Jan 14 01:25:33.645748 kernel: audit: type=1106 audit(1768353933.635:815): pid=5397 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:33.635000 audit[5397]: 
USER_END pid=5397 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:33.652702 systemd[1]: sshd@16-10.230.32.214:22-68.220.241.50:43242.service: Deactivated successfully. Jan 14 01:25:33.643000 audit[5397]: CRED_DISP pid=5397 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:33.663821 kernel: audit: type=1104 audit(1768353933.643:816): pid=5397 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:33.667339 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 01:25:33.669853 systemd-logind[1619]: Session 20 logged out. Waiting for processes to exit. Jan 14 01:25:33.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.32.214:22-68.220.241.50:43242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:33.674181 systemd-logind[1619]: Removed session 20. 
Jan 14 01:25:34.795746 kubelet[2962]: E0114 01:25:34.795232 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-zdm6z" podUID="4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1" Jan 14 01:25:37.798125 kubelet[2962]: E0114 01:25:37.797468 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pn2c" podUID="56181817-3b69-45cb-ad6c-2ef729a912ab" Jan 14 01:25:38.735020 systemd[1]: Started sshd@17-10.230.32.214:22-68.220.241.50:43246.service - OpenSSH per-connection server daemon (68.220.241.50:43246). Jan 14 01:25:38.751160 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:25:38.751300 kernel: audit: type=1130 audit(1768353938.734:818): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.32.214:22-68.220.241.50:43246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:38.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.32.214:22-68.220.241.50:43246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:25:39.297000 audit[5416]: USER_ACCT pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:39.308754 kernel: audit: type=1101 audit(1768353939.297:819): pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:39.308955 sshd[5416]: Accepted publickey for core from 68.220.241.50 port 43246 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y Jan 14 01:25:39.309000 audit[5416]: CRED_ACQ pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:39.312942 sshd-session[5416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:25:39.316126 kernel: audit: type=1103 audit(1768353939.309:820): pid=5416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:39.325312 kernel: audit: type=1006 audit(1768353939.309:821): pid=5416 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 14 01:25:39.325430 kernel: audit: type=1300 audit(1768353939.309:821): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0669c100 a2=3 a3=0 items=0 ppid=1 pid=5416 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:39.309000 audit[5416]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff0669c100 a2=3 a3=0 items=0 ppid=1 pid=5416 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:39.332817 kernel: audit: type=1327 audit(1768353939.309:821): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:39.309000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:39.335056 systemd-logind[1619]: New session 21 of user core. Jan 14 01:25:39.341084 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 14 01:25:39.350000 audit[5416]: USER_START pid=5416 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:39.357953 kernel: audit: type=1105 audit(1768353939.350:822): pid=5416 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:39.363838 kernel: audit: type=1103 audit(1768353939.358:823): pid=5420 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:39.358000 audit[5420]: CRED_ACQ pid=5420 uid=0 
auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:39.774356 sshd[5420]: Connection closed by 68.220.241.50 port 43246 Jan 14 01:25:39.777112 sshd-session[5416]: pam_unix(sshd:session): session closed for user core Jan 14 01:25:39.781000 audit[5416]: USER_END pid=5416 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:39.793555 kernel: audit: type=1106 audit(1768353939.781:824): pid=5416 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:39.798416 systemd[1]: sshd@17-10.230.32.214:22-68.220.241.50:43246.service: Deactivated successfully. Jan 14 01:25:39.807275 systemd[1]: session-21.scope: Deactivated successfully. 
Jan 14 01:25:39.810571 kubelet[2962]: E0114 01:25:39.810402 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9465cb95-lvhtr" podUID="abdf21ba-b557-4e34-8da0-caa287f29fb9" Jan 14 01:25:39.781000 audit[5416]: CRED_DISP pid=5416 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:39.820472 kernel: audit: type=1104 audit(1768353939.781:825): pid=5416 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:39.825119 systemd-logind[1619]: Session 21 logged out. Waiting for processes to exit. Jan 14 01:25:39.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.32.214:22-68.220.241.50:43246 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:39.830386 systemd-logind[1619]: Removed session 21. 
Jan 14 01:25:41.796916 kubelet[2962]: E0114 01:25:41.796520 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-hcr5n" podUID="611c348f-b209-4156-bf37-8d53c837267b" Jan 14 01:25:41.801244 kubelet[2962]: E0114 01:25:41.799403 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" Jan 14 01:25:44.895134 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:25:44.895940 kernel: audit: type=1130 audit(1768353944.882:827): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.32.214:22-68.220.241.50:48184 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:25:44.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.32.214:22-68.220.241.50:48184 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:44.884209 systemd[1]: Started sshd@18-10.230.32.214:22-68.220.241.50:48184.service - OpenSSH per-connection server daemon (68.220.241.50:48184). Jan 14 01:25:45.445000 audit[5433]: USER_ACCT pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:45.456690 sshd[5433]: Accepted publickey for core from 68.220.241.50 port 48184 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y Jan 14 01:25:45.457464 kernel: audit: type=1101 audit(1768353945.445:828): pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:45.456637 sshd-session[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:25:45.451000 audit[5433]: CRED_ACQ pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:45.470461 kernel: audit: type=1103 audit(1768353945.451:829): pid=5433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 
01:25:45.476807 kernel: audit: type=1006 audit(1768353945.451:830): pid=5433 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 14 01:25:45.476963 kernel: audit: type=1300 audit(1768353945.451:830): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed9f55c40 a2=3 a3=0 items=0 ppid=1 pid=5433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:45.451000 audit[5433]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffed9f55c40 a2=3 a3=0 items=0 ppid=1 pid=5433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:45.475116 systemd-logind[1619]: New session 22 of user core. Jan 14 01:25:45.451000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:45.484817 kernel: audit: type=1327 audit(1768353945.451:830): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:45.486053 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 14 01:25:45.491000 audit[5433]: USER_START pid=5433 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:45.499832 kernel: audit: type=1105 audit(1768353945.491:831): pid=5433 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:45.501000 audit[5443]: CRED_ACQ pid=5443 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:45.509845 kernel: audit: type=1103 audit(1768353945.501:832): pid=5443 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:45.806077 kubelet[2962]: E0114 01:25:45.803606 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cd4c88bf-rxm94" podUID="6af88ae8-33db-47f6-8963-68a47a1d9783" Jan 14 
01:25:45.979015 sshd[5443]: Connection closed by 68.220.241.50 port 48184 Jan 14 01:25:45.979558 sshd-session[5433]: pam_unix(sshd:session): session closed for user core Jan 14 01:25:45.982000 audit[5433]: USER_END pid=5433 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:45.996185 kernel: audit: type=1106 audit(1768353945.982:833): pid=5433 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:45.995259 systemd[1]: sshd@18-10.230.32.214:22-68.220.241.50:48184.service: Deactivated successfully. Jan 14 01:25:45.982000 audit[5433]: CRED_DISP pid=5433 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:46.001658 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 01:25:46.005844 kernel: audit: type=1104 audit(1768353945.982:834): pid=5433 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:46.006914 systemd-logind[1619]: Session 22 logged out. Waiting for processes to exit. 
Jan 14 01:25:45.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.32.214:22-68.220.241.50:48184 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:46.010166 systemd-logind[1619]: Removed session 22. Jan 14 01:25:46.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.32.214:22-68.220.241.50:48188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:46.089211 systemd[1]: Started sshd@19-10.230.32.214:22-68.220.241.50:48188.service - OpenSSH per-connection server daemon (68.220.241.50:48188). Jan 14 01:25:46.606000 audit[5455]: USER_ACCT pid=5455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:46.609051 sshd[5455]: Accepted publickey for core from 68.220.241.50 port 48188 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y Jan 14 01:25:46.609000 audit[5455]: CRED_ACQ pid=5455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:46.610000 audit[5455]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd70e968f0 a2=3 a3=0 items=0 ppid=1 pid=5455 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:46.610000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:46.612207 sshd-session[5455]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:25:46.624657 systemd-logind[1619]: New session 23 of user core. Jan 14 01:25:46.634039 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 14 01:25:46.642000 audit[5455]: USER_START pid=5455 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:46.645000 audit[5459]: CRED_ACQ pid=5459 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:47.641430 sshd[5459]: Connection closed by 68.220.241.50 port 48188 Jan 14 01:25:47.642251 sshd-session[5455]: pam_unix(sshd:session): session closed for user core Jan 14 01:25:47.650000 audit[5455]: USER_END pid=5455 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:47.650000 audit[5455]: CRED_DISP pid=5455 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:47.654358 systemd[1]: sshd@19-10.230.32.214:22-68.220.241.50:48188.service: Deactivated successfully. 
Jan 14 01:25:47.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.32.214:22-68.220.241.50:48188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:47.660265 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 01:25:47.665665 systemd-logind[1619]: Session 23 logged out. Waiting for processes to exit. Jan 14 01:25:47.668350 systemd-logind[1619]: Removed session 23. Jan 14 01:25:47.747338 systemd[1]: Started sshd@20-10.230.32.214:22-68.220.241.50:48192.service - OpenSSH per-connection server daemon (68.220.241.50:48192). Jan 14 01:25:47.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.32.214:22-68.220.241.50:48192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:48.303000 audit[5469]: USER_ACCT pid=5469 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:48.304650 sshd[5469]: Accepted publickey for core from 68.220.241.50 port 48192 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y Jan 14 01:25:48.305000 audit[5469]: CRED_ACQ pid=5469 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:48.306000 audit[5469]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffeb1176990 a2=3 a3=0 items=0 ppid=1 pid=5469 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:48.306000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:48.309112 sshd-session[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:25:48.320229 systemd-logind[1619]: New session 24 of user core. Jan 14 01:25:48.329027 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 14 01:25:48.336000 audit[5469]: USER_START pid=5469 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:48.339000 audit[5473]: CRED_ACQ pid=5473 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:48.796394 kubelet[2962]: E0114 01:25:48.796328 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-zdm6z" podUID="4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1" Jan 14 01:25:49.625260 sshd[5473]: Connection closed by 68.220.241.50 port 48192 Jan 14 01:25:49.625751 sshd-session[5469]: pam_unix(sshd:session): session closed for user core Jan 14 01:25:49.629000 audit[5469]: USER_END pid=5469 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:49.629000 audit[5469]: CRED_DISP pid=5469 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:49.635947 systemd[1]: sshd@20-10.230.32.214:22-68.220.241.50:48192.service: Deactivated successfully. Jan 14 01:25:49.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.32.214:22-68.220.241.50:48192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:49.637353 systemd-logind[1619]: Session 24 logged out. Waiting for processes to exit. Jan 14 01:25:49.642251 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 01:25:49.654971 systemd-logind[1619]: Removed session 24. 
Jan 14 01:25:49.688000 audit[5485]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5485 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:25:49.688000 audit[5485]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe2133c050 a2=0 a3=7ffe2133c03c items=0 ppid=3108 pid=5485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:49.688000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:25:49.694000 audit[5485]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5485 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:25:49.694000 audit[5485]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe2133c050 a2=0 a3=0 items=0 ppid=3108 pid=5485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:49.694000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:25:49.717000 audit[5487]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:25:49.717000 audit[5487]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffd29288bf0 a2=0 a3=7ffd29288bdc items=0 ppid=3108 pid=5487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:49.717000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:25:49.721000 audit[5487]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5487 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:25:49.721000 audit[5487]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd29288bf0 a2=0 a3=0 items=0 ppid=3108 pid=5487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:49.721000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:25:49.732694 systemd[1]: Started sshd@21-10.230.32.214:22-68.220.241.50:48196.service - OpenSSH per-connection server daemon (68.220.241.50:48196). Jan 14 01:25:49.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.32.214:22-68.220.241.50:48196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:25:50.291000 audit[5489]: USER_ACCT pid=5489 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:50.301739 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 14 01:25:50.302505 kernel: audit: type=1101 audit(1768353950.291:859): pid=5489 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:50.302586 sshd[5489]: Accepted publickey for core from 68.220.241.50 port 48196 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y Jan 14 01:25:50.302141 sshd-session[5489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:25:50.298000 audit[5489]: CRED_ACQ pid=5489 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:50.312805 kernel: audit: type=1103 audit(1768353950.298:860): pid=5489 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:50.324822 kernel: audit: type=1006 audit(1768353950.299:861): pid=5489 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 14 01:25:50.299000 audit[5489]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd175b2c0 a2=3 a3=0 items=0 ppid=1 pid=5489 auid=500 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:50.332555 systemd-logind[1619]: New session 25 of user core. Jan 14 01:25:50.339853 kernel: audit: type=1300 audit(1768353950.299:861): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffd175b2c0 a2=3 a3=0 items=0 ppid=1 pid=5489 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:50.342132 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 14 01:25:50.299000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:50.346306 kernel: audit: type=1327 audit(1768353950.299:861): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:50.349000 audit[5489]: USER_START pid=5489 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:50.356821 kernel: audit: type=1105 audit(1768353950.349:862): pid=5489 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:50.358000 audit[5494]: CRED_ACQ pid=5494 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 
01:25:50.364816 kernel: audit: type=1103 audit(1768353950.358:863): pid=5494 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:51.030519 sshd[5494]: Connection closed by 68.220.241.50 port 48196 Jan 14 01:25:51.034073 sshd-session[5489]: pam_unix(sshd:session): session closed for user core Jan 14 01:25:51.037000 audit[5489]: USER_END pid=5489 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:51.046913 kernel: audit: type=1106 audit(1768353951.037:864): pid=5489 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:51.048673 systemd[1]: sshd@21-10.230.32.214:22-68.220.241.50:48196.service: Deactivated successfully. Jan 14 01:25:51.053302 systemd[1]: session-25.scope: Deactivated successfully. 
Jan 14 01:25:51.037000 audit[5489]: CRED_DISP pid=5489 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:51.058913 kernel: audit: type=1104 audit(1768353951.037:865): pid=5489 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:51.061928 systemd-logind[1619]: Session 25 logged out. Waiting for processes to exit. Jan 14 01:25:51.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.32.214:22-68.220.241.50:48196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:51.069838 kernel: audit: type=1131 audit(1768353951.048:866): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.32.214:22-68.220.241.50:48196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:51.069692 systemd-logind[1619]: Removed session 25. Jan 14 01:25:51.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.32.214:22-68.220.241.50:48198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:51.133881 systemd[1]: Started sshd@22-10.230.32.214:22-68.220.241.50:48198.service - OpenSSH per-connection server daemon (68.220.241.50:48198). 
Jan 14 01:25:51.655000 audit[5504]: USER_ACCT pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:51.656875 sshd[5504]: Accepted publickey for core from 68.220.241.50 port 48198 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y Jan 14 01:25:51.657000 audit[5504]: CRED_ACQ pid=5504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:51.657000 audit[5504]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff47f08fa0 a2=3 a3=0 items=0 ppid=1 pid=5504 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:51.657000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:51.660370 sshd-session[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:25:51.670008 systemd-logind[1619]: New session 26 of user core. Jan 14 01:25:51.675086 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 14 01:25:51.680000 audit[5504]: USER_START pid=5504 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:51.685000 audit[5508]: CRED_ACQ pid=5508 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:52.116172 sshd[5508]: Connection closed by 68.220.241.50 port 48198 Jan 14 01:25:52.120134 sshd-session[5504]: pam_unix(sshd:session): session closed for user core Jan 14 01:25:52.122000 audit[5504]: USER_END pid=5504 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:52.123000 audit[5504]: CRED_DISP pid=5504 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:52.127432 systemd-logind[1619]: Session 26 logged out. Waiting for processes to exit. Jan 14 01:25:52.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.32.214:22-68.220.241.50:48198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:52.127659 systemd[1]: sshd@22-10.230.32.214:22-68.220.241.50:48198.service: Deactivated successfully. 
Jan 14 01:25:52.131089 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 01:25:52.137155 systemd-logind[1619]: Removed session 26. Jan 14 01:25:52.796534 kubelet[2962]: E0114 01:25:52.796393 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pn2c" podUID="56181817-3b69-45cb-ad6c-2ef729a912ab" Jan 14 01:25:53.797981 containerd[1645]: time="2026-01-14T01:25:53.797826058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:25:54.121143 containerd[1645]: time="2026-01-14T01:25:54.120942061Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:25:54.122387 containerd[1645]: time="2026-01-14T01:25:54.122332852Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:25:54.122491 containerd[1645]: time="2026-01-14T01:25:54.122456590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:25:54.129057 kubelet[2962]: E0114 01:25:54.128950 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:25:54.132297 kubelet[2962]: E0114 01:25:54.132249 2962 kuberuntime_image.go:55] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:25:54.135647 kubelet[2962]: E0114 01:25:54.135566 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:419b9865beaf4f708afcaeed39d95459,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gw54k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f9465cb95-lvhtr_calico-system(abdf21ba-b557-4e34-8da0-caa287f29fb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:25:54.139729 containerd[1645]: time="2026-01-14T01:25:54.139681456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:25:54.488592 containerd[1645]: time="2026-01-14T01:25:54.488506797Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:25:54.489981 containerd[1645]: time="2026-01-14T01:25:54.489917565Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:25:54.490090 containerd[1645]: time="2026-01-14T01:25:54.490061042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:25:54.490454 kubelet[2962]: E0114 01:25:54.490385 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:25:54.490558 kubelet[2962]: E0114 01:25:54.490473 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:25:54.491502 kubelet[2962]: E0114 01:25:54.491403 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gw54k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6f9465cb95-lvhtr_calico-system(abdf21ba-b557-4e34-8da0-caa287f29fb9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:25:54.492666 kubelet[2962]: E0114 01:25:54.492614 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9465cb95-lvhtr" podUID="abdf21ba-b557-4e34-8da0-caa287f29fb9" Jan 14 01:25:55.798853 containerd[1645]: time="2026-01-14T01:25:55.798562408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:25:56.120633 containerd[1645]: time="2026-01-14T01:25:56.120460453Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:25:56.122210 containerd[1645]: time="2026-01-14T01:25:56.122159527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:25:56.122310 containerd[1645]: time="2026-01-14T01:25:56.122283627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:25:56.122688 kubelet[2962]: E0114 01:25:56.122638 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:25:56.123621 kubelet[2962]: E0114 01:25:56.123313 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:25:56.123621 kubelet[2962]: E0114 01:25:56.123509 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8nhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vqp7q_calico-system(1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 01:25:56.128308 containerd[1645]: time="2026-01-14T01:25:56.128021584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:25:56.476355 containerd[1645]: time="2026-01-14T01:25:56.476074754Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:25:56.478442 containerd[1645]: time="2026-01-14T01:25:56.477462671Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:25:56.478442 containerd[1645]: time="2026-01-14T01:25:56.477569508Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:25:56.479255 kubelet[2962]: E0114 01:25:56.478961 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:25:56.479255 
kubelet[2962]: E0114 01:25:56.479049 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:25:56.481375 kubelet[2962]: E0114 01:25:56.481314 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b8nhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*t
rue,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-vqp7q_calico-system(1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:25:56.483045 kubelet[2962]: E0114 01:25:56.482955 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2" Jan 14 01:25:56.796657 kubelet[2962]: E0114 01:25:56.796133 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-84b8b5c58c-hcr5n" podUID="611c348f-b209-4156-bf37-8d53c837267b" Jan 14 01:25:57.228551 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 14 01:25:57.228814 kernel: audit: type=1130 audit(1768353957.219:876): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.32.214:22-68.220.241.50:52264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:57.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.32.214:22-68.220.241.50:52264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:57.220168 systemd[1]: Started sshd@23-10.230.32.214:22-68.220.241.50:52264.service - OpenSSH per-connection server daemon (68.220.241.50:52264). Jan 14 01:25:57.819000 audit[5561]: USER_ACCT pid=5561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:57.821082 sshd[5561]: Accepted publickey for core from 68.220.241.50 port 52264 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y Jan 14 01:25:57.825934 kernel: audit: type=1101 audit(1768353957.819:877): pid=5561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:57.825744 sshd-session[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:25:57.821000 audit[5561]: CRED_ACQ pid=5561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:57.833846 kernel: audit: type=1103 audit(1768353957.821:878): pid=5561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:57.848632 kernel: audit: type=1006 audit(1768353957.821:879): pid=5561 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 14 01:25:57.848861 kernel: audit: type=1300 audit(1768353957.821:879): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1ffb9c10 a2=3 a3=0 items=0 ppid=1 pid=5561 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:57.821000 audit[5561]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe1ffb9c10 a2=3 a3=0 items=0 ppid=1 pid=5561 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:25:57.821000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:57.853842 kernel: audit: type=1327 audit(1768353957.821:879): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:25:57.859373 systemd-logind[1619]: New session 27 of user core. Jan 14 01:25:57.866087 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 14 01:25:57.881871 kernel: audit: type=1105 audit(1768353957.873:880): pid=5561 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:57.873000 audit[5561]: USER_START pid=5561 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:57.882000 audit[5565]: CRED_ACQ pid=5565 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:57.887878 kernel: audit: type=1103 audit(1768353957.882:881): pid=5565 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:58.363173 sshd[5565]: Connection closed by 68.220.241.50 port 52264 Jan 14 01:25:58.364769 sshd-session[5561]: pam_unix(sshd:session): session closed for user core Jan 14 01:25:58.369000 audit[5561]: USER_END pid=5561 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:58.377067 kernel: audit: type=1106 
audit(1768353958.369:882): pid=5561 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:58.380550 systemd-logind[1619]: Session 27 logged out. Waiting for processes to exit. Jan 14 01:25:58.380929 systemd[1]: sshd@23-10.230.32.214:22-68.220.241.50:52264.service: Deactivated successfully. Jan 14 01:25:58.376000 audit[5561]: CRED_DISP pid=5561 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:58.387214 kernel: audit: type=1104 audit(1768353958.376:883): pid=5561 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 01:25:58.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.32.214:22-68.220.241.50:52264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:25:58.387472 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 01:25:58.392982 systemd-logind[1619]: Removed session 27. 
Jan 14 01:26:00.798376 containerd[1645]: time="2026-01-14T01:26:00.798289387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:26:01.111038 containerd[1645]: time="2026-01-14T01:26:01.110849117Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:26:01.113402 containerd[1645]: time="2026-01-14T01:26:01.112543290Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:26:01.113506 containerd[1645]: time="2026-01-14T01:26:01.113362495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:26:01.113860 kubelet[2962]: E0114 01:26:01.113799 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:26:01.114492 kubelet[2962]: E0114 01:26:01.113877 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:26:01.114492 kubelet[2962]: E0114 01:26:01.114113 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96jrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85cd4c88bf-rxm94_calico-system(6af88ae8-33db-47f6-8963-68a47a1d9783): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:26:01.116949 kubelet[2962]: E0114 01:26:01.115743 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cd4c88bf-rxm94" podUID="6af88ae8-33db-47f6-8963-68a47a1d9783" Jan 14 01:26:01.798550 containerd[1645]: time="2026-01-14T01:26:01.798491238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:26:02.110912 containerd[1645]: time="2026-01-14T01:26:02.110568800Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:26:02.112155 containerd[1645]: time="2026-01-14T01:26:02.112011104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:26:02.112155 containerd[1645]: time="2026-01-14T01:26:02.112073866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:26:02.112540 kubelet[2962]: E0114 01:26:02.112470 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:26:02.112634 kubelet[2962]: E0114 01:26:02.112540 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:26:02.113145 kubelet[2962]: E0114 01:26:02.112745 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sthtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84b8b5c58c-zdm6z_calico-apiserver(4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:26:02.114585 kubelet[2962]: E0114 01:26:02.114550 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-zdm6z" podUID="4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1" Jan 14 01:26:02.984000 audit[5586]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5586 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:02.993142 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:26:02.993221 kernel: audit: type=1325 audit(1768353962.984:885): table=filter:148 family=2 entries=26 op=nft_register_rule pid=5586 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:02.984000 audit[5586]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd2d2d3170 a2=0 a3=7ffd2d2d315c items=0 ppid=3108 pid=5586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:03.002828 kernel: audit: type=1300 audit(1768353962.984:885): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd2d2d3170 a2=0 a3=7ffd2d2d315c items=0 ppid=3108 pid=5586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:02.984000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:03.006825 kernel: audit: type=1327 audit(1768353962.984:885): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:03.006000 audit[5586]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5586 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:03.006000 audit[5586]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd2d2d3170 a2=0 a3=7ffd2d2d315c items=0 ppid=3108 pid=5586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:03.013198 kernel: audit: type=1325 audit(1768353963.006:886): table=nat:149 family=2 entries=104 op=nft_register_chain pid=5586 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:26:03.013286 kernel: audit: type=1300 audit(1768353963.006:886): arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffd2d2d3170 a2=0 a3=7ffd2d2d315c items=0 ppid=3108 pid=5586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:26:03.006000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:03.018531 kernel: audit: type=1327 audit(1768353963.006:886): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:26:03.466214 systemd[1]: Started sshd@24-10.230.32.214:22-68.220.241.50:55852.service - OpenSSH per-connection server daemon (68.220.241.50:55852). 
Jan 14 01:26:03.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.32.214:22-68.220.241.50:55852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:26:03.475961 kernel: audit: type=1130 audit(1768353963.466:887): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.32.214:22-68.220.241.50:55852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:26:04.004000 audit[5589]: USER_ACCT pid=5589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:04.014462 sshd[5589]: Accepted publickey for core from 68.220.241.50 port 55852 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y
Jan 14 01:26:04.015397 kernel: audit: type=1101 audit(1768353964.004:888): pid=5589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:04.018004 sshd-session[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:26:04.015000 audit[5589]: CRED_ACQ pid=5589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:04.024831 kernel: audit: type=1103 audit(1768353964.015:889): pid=5589 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:04.025388 kernel: audit: type=1006 audit(1768353964.015:890): pid=5589 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1
Jan 14 01:26:04.015000 audit[5589]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcf9887b10 a2=3 a3=0 items=0 ppid=1 pid=5589 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:26:04.015000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:26:04.035531 systemd-logind[1619]: New session 28 of user core.
Jan 14 01:26:04.043280 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 14 01:26:04.050000 audit[5589]: USER_START pid=5589 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:04.053000 audit[5593]: CRED_ACQ pid=5593 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:04.465808 sshd[5593]: Connection closed by 68.220.241.50 port 55852
Jan 14 01:26:04.467226 sshd-session[5589]: pam_unix(sshd:session): session closed for user core
Jan 14 01:26:04.470000 audit[5589]: USER_END pid=5589 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:04.471000 audit[5589]: CRED_DISP pid=5589 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:04.482640 systemd[1]: sshd@24-10.230.32.214:22-68.220.241.50:55852.service: Deactivated successfully.
Jan 14 01:26:04.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.32.214:22-68.220.241.50:55852 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:26:04.490190 systemd[1]: session-28.scope: Deactivated successfully.
Jan 14 01:26:04.492995 systemd-logind[1619]: Session 28 logged out. Waiting for processes to exit.
Jan 14 01:26:04.497942 systemd-logind[1619]: Removed session 28.
Jan 14 01:26:06.799572 containerd[1645]: time="2026-01-14T01:26:06.798646778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 14 01:26:07.115385 containerd[1645]: time="2026-01-14T01:26:07.115109898Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:26:07.117387 containerd[1645]: time="2026-01-14T01:26:07.117351184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:26:07.117633 containerd[1645]: time="2026-01-14T01:26:07.117368209Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 14 01:26:07.119736 kubelet[2962]: E0114 01:26:07.117916 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 14 01:26:07.119736 kubelet[2962]: E0114 01:26:07.118022 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 14 01:26:07.119736 kubelet[2962]: E0114 01:26:07.118273 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rk45d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7pn2c_calico-system(56181817-3b69-45cb-ad6c-2ef729a912ab): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:26:07.120867 kubelet[2962]: E0114 01:26:07.120805 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pn2c" podUID="56181817-3b69-45cb-ad6c-2ef729a912ab"
Jan 14 01:26:08.798121 containerd[1645]: time="2026-01-14T01:26:08.797767724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 14 01:26:08.800973 kubelet[2962]: E0114 01:26:08.800568 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6f9465cb95-lvhtr" podUID="abdf21ba-b557-4e34-8da0-caa287f29fb9"
Jan 14 01:26:09.115525 containerd[1645]: time="2026-01-14T01:26:09.114726435Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:26:09.116605 containerd[1645]: time="2026-01-14T01:26:09.116413070Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 14 01:26:09.117955 containerd[1645]: time="2026-01-14T01:26:09.116590875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:26:09.118395 kubelet[2962]: E0114 01:26:09.118268 2962 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 14 01:26:09.118395 kubelet[2962]: E0114 01:26:09.118353 2962 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 14 01:26:09.119407 kubelet[2962]: E0114 01:26:09.119042 2962 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xh6s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84b8b5c58c-hcr5n_calico-apiserver(611c348f-b209-4156-bf37-8d53c837267b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:26:09.120664 kubelet[2962]: E0114 01:26:09.120594 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-hcr5n" podUID="611c348f-b209-4156-bf37-8d53c837267b"
Jan 14 01:26:09.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.32.214:22-68.220.241.50:55860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:26:09.587905 kernel: kauditd_printk_skb: 7 callbacks suppressed
Jan 14 01:26:09.588086 kernel: audit: type=1130 audit(1768353969.568:896): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.32.214:22-68.220.241.50:55860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:26:09.569599 systemd[1]: Started sshd@25-10.230.32.214:22-68.220.241.50:55860.service - OpenSSH per-connection server daemon (68.220.241.50:55860).
Jan 14 01:26:09.805263 kubelet[2962]: E0114 01:26:09.804949 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-vqp7q" podUID="1ef4f5be-e30b-4c3d-8994-2cd80d70e4b2"
Jan 14 01:26:10.101000 audit[5604]: USER_ACCT pid=5604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:10.113210 kernel: audit: type=1101 audit(1768353970.101:897): pid=5604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:10.113328 sshd[5604]: Accepted publickey for core from 68.220.241.50 port 55860 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y
Jan 14 01:26:10.108000 audit[5604]: CRED_ACQ pid=5604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:10.112435 sshd-session[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:26:10.119938 kernel: audit: type=1103 audit(1768353970.108:898): pid=5604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:10.132826 kernel: audit: type=1006 audit(1768353970.108:899): pid=5604 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1
Jan 14 01:26:10.133817 systemd-logind[1619]: New session 29 of user core.
Jan 14 01:26:10.108000 audit[5604]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb7036840 a2=3 a3=0 items=0 ppid=1 pid=5604 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:26:10.143902 kernel: audit: type=1300 audit(1768353970.108:899): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb7036840 a2=3 a3=0 items=0 ppid=1 pid=5604 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:26:10.108000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:26:10.146939 kernel: audit: type=1327 audit(1768353970.108:899): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:26:10.146471 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 14 01:26:10.155000 audit[5604]: USER_START pid=5604 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:10.163820 kernel: audit: type=1105 audit(1768353970.155:900): pid=5604 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:10.164626 kernel: audit: type=1103 audit(1768353970.162:901): pid=5608 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:10.162000 audit[5608]: CRED_ACQ pid=5608 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:10.526320 sshd[5608]: Connection closed by 68.220.241.50 port 55860
Jan 14 01:26:10.530167 sshd-session[5604]: pam_unix(sshd:session): session closed for user core
Jan 14 01:26:10.531000 audit[5604]: USER_END pid=5604 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:10.542968 kernel: audit: type=1106 audit(1768353970.531:902): pid=5604 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:10.531000 audit[5604]: CRED_DISP pid=5604 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:10.544343 systemd[1]: sshd@25-10.230.32.214:22-68.220.241.50:55860.service: Deactivated successfully.
Jan 14 01:26:10.548925 kernel: audit: type=1104 audit(1768353970.531:903): pid=5604 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:10.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.32.214:22-68.220.241.50:55860 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:26:10.551746 systemd[1]: session-29.scope: Deactivated successfully.
Jan 14 01:26:10.559595 systemd-logind[1619]: Session 29 logged out. Waiting for processes to exit.
Jan 14 01:26:10.562179 systemd-logind[1619]: Removed session 29.
Jan 14 01:26:12.795422 kubelet[2962]: E0114 01:26:12.795341 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-85cd4c88bf-rxm94" podUID="6af88ae8-33db-47f6-8963-68a47a1d9783"
Jan 14 01:26:15.637183 systemd[1]: Started sshd@26-10.230.32.214:22-68.220.241.50:47060.service - OpenSSH per-connection server daemon (68.220.241.50:47060).
Jan 14 01:26:15.639826 kernel: kauditd_printk_skb: 1 callbacks suppressed
Jan 14 01:26:15.639990 kernel: audit: type=1130 audit(1768353975.636:905): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.230.32.214:22-68.220.241.50:47060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:26:15.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.230.32.214:22-68.220.241.50:47060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:26:16.181000 audit[5619]: USER_ACCT pid=5619 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:16.188406 sshd[5619]: Accepted publickey for core from 68.220.241.50 port 47060 ssh2: RSA SHA256:e91t6O3GkYl7ypLqzDPBiIgUHUBCDY7PrmhrSf9cZ2Y
Jan 14 01:26:16.189905 kernel: audit: type=1101 audit(1768353976.181:906): pid=5619 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:16.192340 sshd-session[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 14 01:26:16.189000 audit[5619]: CRED_ACQ pid=5619 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:16.198942 kernel: audit: type=1103 audit(1768353976.189:907): pid=5619 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:16.208696 kernel: audit: type=1006 audit(1768353976.189:908): pid=5619 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1
Jan 14 01:26:16.208777 kernel: audit: type=1300 audit(1768353976.189:908): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4c5369c0 a2=3 a3=0 items=0 ppid=1 pid=5619 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:26:16.189000 audit[5619]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4c5369c0 a2=3 a3=0 items=0 ppid=1 pid=5619 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:26:16.211960 kernel: audit: type=1327 audit(1768353976.189:908): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:26:16.189000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D
Jan 14 01:26:16.220867 systemd-logind[1619]: New session 30 of user core.
Jan 14 01:26:16.229127 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 14 01:26:16.234000 audit[5619]: USER_START pid=5619 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:16.248878 kernel: audit: type=1105 audit(1768353976.234:909): pid=5619 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:16.251000 audit[5623]: CRED_ACQ pid=5623 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:16.257913 kernel: audit: type=1103 audit(1768353976.251:910): pid=5623 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:16.582511 sshd[5623]: Connection closed by 68.220.241.50 port 47060
Jan 14 01:26:16.583744 sshd-session[5619]: pam_unix(sshd:session): session closed for user core
Jan 14 01:26:16.588000 audit[5619]: USER_END pid=5619 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:16.598537 kernel: audit: type=1106 audit(1768353976.588:911): pid=5619 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:16.599536 systemd[1]: sshd@26-10.230.32.214:22-68.220.241.50:47060.service: Deactivated successfully.
Jan 14 01:26:16.588000 audit[5619]: CRED_DISP pid=5619 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:16.607902 kernel: audit: type=1104 audit(1768353976.588:912): pid=5619 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success'
Jan 14 01:26:16.608479 systemd[1]: session-30.scope: Deactivated successfully.
Jan 14 01:26:16.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.230.32.214:22-68.220.241.50:47060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 01:26:16.611884 systemd-logind[1619]: Session 30 logged out. Waiting for processes to exit.
Jan 14 01:26:16.615853 systemd-logind[1619]: Removed session 30.
Jan 14 01:26:16.796147 kubelet[2962]: E0114 01:26:16.796086 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84b8b5c58c-zdm6z" podUID="4b48dc0c-ba9e-44a4-9e2b-58fe2a3b2fa1"
Jan 14 01:26:17.799424 kubelet[2962]: E0114 01:26:17.799163 2962 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pn2c" podUID="56181817-3b69-45cb-ad6c-2ef729a912ab"