Jan 14 06:35:48.306458 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 14 03:30:44 -00 2026 Jan 14 06:35:48.306523 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=87e02bed36f442f7915376555bbec9abc9601b29a9acaf045382608b676e1943 Jan 14 06:35:48.306539 kernel: BIOS-provided physical RAM map: Jan 14 06:35:48.306550 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 14 06:35:48.306571 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 14 06:35:48.306583 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 14 06:35:48.306595 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jan 14 06:35:48.306611 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jan 14 06:35:48.306623 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 14 06:35:48.306634 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 14 06:35:48.306646 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 14 06:35:48.306657 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 14 06:35:48.306668 kernel: NX (Execute Disable) protection: active Jan 14 06:35:48.306689 kernel: APIC: Static calls initialized Jan 14 06:35:48.306703 kernel: SMBIOS 2.8 present. Jan 14 06:35:48.306716 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jan 14 06:35:48.306728 kernel: DMI: Memory slots populated: 1/1 Jan 14 06:35:48.306766 kernel: Hypervisor detected: KVM Jan 14 06:35:48.306779 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 14 06:35:48.306791 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 14 06:35:48.306804 kernel: kvm-clock: using sched offset of 5010570560 cycles Jan 14 06:35:48.306828 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 14 06:35:48.306841 kernel: tsc: Detected 2799.998 MHz processor Jan 14 06:35:48.306854 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 14 06:35:48.306867 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 14 06:35:48.306892 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 14 06:35:48.306905 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 14 06:35:48.306918 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 14 06:35:48.306930 kernel: Using GB pages for direct mapping Jan 14 06:35:48.306943 kernel: ACPI: Early table checksum verification disabled Jan 14 06:35:48.306955 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jan 14 06:35:48.306968 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 06:35:48.306980 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 06:35:48.307003 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 06:35:48.307016 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jan 14 06:35:48.307029 kernel: ACPI: APIC 
0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 06:35:48.307041 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 06:35:48.307053 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 06:35:48.307066 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 06:35:48.307078 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jan 14 06:35:48.307111 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jan 14 06:35:48.307124 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jan 14 06:35:48.307137 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jan 14 06:35:48.307150 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jan 14 06:35:48.307173 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jan 14 06:35:48.307186 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jan 14 06:35:48.307199 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 14 06:35:48.307212 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 14 06:35:48.307224 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jan 14 06:35:48.307237 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Jan 14 06:35:48.307251 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Jan 14 06:35:48.307273 kernel: Zone ranges: Jan 14 06:35:48.307286 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 14 06:35:48.307299 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jan 14 06:35:48.307312 kernel: Normal empty Jan 14 06:35:48.307325 kernel: Device empty Jan 14 06:35:48.307337 kernel: Movable zone start for each node Jan 14 06:35:48.307350 kernel: Early memory node ranges Jan 14 06:35:48.307363 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 14 06:35:48.307385 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jan 14 06:35:48.307399 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jan 14 06:35:48.307412 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 14 06:35:48.307426 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 14 06:35:48.307439 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jan 14 06:35:48.307452 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 14 06:35:48.307469 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 14 06:35:48.307491 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 14 06:35:48.307505 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 14 06:35:48.307519 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 14 06:35:48.307532 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 14 06:35:48.307545 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 14 06:35:48.307558 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 14 06:35:48.307570 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 14 06:35:48.307583 kernel: TSC deadline timer available Jan 14 06:35:48.307606 kernel: CPU topo: Max. logical packages: 16 Jan 14 06:35:48.307619 kernel: CPU topo: Max. logical dies: 16 Jan 14 06:35:48.307632 kernel: CPU topo: Max. dies per package: 1 Jan 14 06:35:48.307645 kernel: CPU topo: Max. 
threads per core: 1 Jan 14 06:35:48.307657 kernel: CPU topo: Num. cores per package: 1 Jan 14 06:35:48.307670 kernel: CPU topo: Num. threads per package: 1 Jan 14 06:35:48.307683 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Jan 14 06:35:48.307705 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 14 06:35:48.307719 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 14 06:35:48.307732 kernel: Booting paravirtualized kernel on KVM Jan 14 06:35:48.307761 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 14 06:35:48.307777 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 14 06:35:48.307790 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Jan 14 06:35:48.307803 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Jan 14 06:35:48.307837 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 14 06:35:48.307851 kernel: kvm-guest: PV spinlocks enabled Jan 14 06:35:48.307864 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 14 06:35:48.307878 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=87e02bed36f442f7915376555bbec9abc9601b29a9acaf045382608b676e1943 Jan 14 06:35:48.307892 kernel: random: crng init done Jan 14 06:35:48.307904 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 14 06:35:48.307918 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 14 06:35:48.307941 kernel: Fallback order for Node 0: 0 Jan 14 06:35:48.307955 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Jan 14 06:35:48.307968 kernel: Policy zone: DMA32 Jan 14 06:35:48.307981 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 06:35:48.307993 kernel: software IO TLB: area num 16. Jan 14 06:35:48.308006 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 14 06:35:48.308019 kernel: Kernel/User page tables isolation: enabled Jan 14 06:35:48.308042 kernel: ftrace: allocating 40128 entries in 157 pages Jan 14 06:35:48.308056 kernel: ftrace: allocated 157 pages with 5 groups Jan 14 06:35:48.308068 kernel: Dynamic Preempt: voluntary Jan 14 06:35:48.308081 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 06:35:48.308104 kernel: rcu: RCU event tracing is enabled. Jan 14 06:35:48.308119 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 14 06:35:48.308132 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 06:35:48.308145 kernel: Rude variant of Tasks RCU enabled. Jan 14 06:35:48.308168 kernel: Tracing variant of Tasks RCU enabled. Jan 14 06:35:48.308181 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 06:35:48.308194 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 14 06:35:48.308207 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 14 06:35:48.308220 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Jan 14 06:35:48.308233 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 14 06:35:48.308246 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jan 14 06:35:48.308269 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 14 06:35:48.308304 kernel: Console: colour VGA+ 80x25 Jan 14 06:35:48.308327 kernel: printk: legacy console [tty0] enabled Jan 14 06:35:48.308341 kernel: printk: legacy console [ttyS0] enabled Jan 14 06:35:48.308358 kernel: ACPI: Core revision 20240827 Jan 14 06:35:48.308373 kernel: APIC: Switch to symmetric I/O mode setup Jan 14 06:35:48.308386 kernel: x2apic enabled Jan 14 06:35:48.308400 kernel: APIC: Switched APIC routing to: physical x2apic Jan 14 06:35:48.308426 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 14 06:35:48.308449 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Jan 14 06:35:48.308462 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 14 06:35:48.308488 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 14 06:35:48.308501 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 14 06:35:48.308515 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 14 06:35:48.308550 kernel: Spectre V2 : Mitigation: Retpolines Jan 14 06:35:48.308563 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 14 06:35:48.308576 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 14 06:35:48.308589 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 14 06:35:48.308614 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 14 06:35:48.308627 kernel: MDS: Mitigation: Clear CPU buffers Jan 14 06:35:48.308640 kernel: MMIO Stale Data: Unknown: No mitigations Jan 14 06:35:48.308653 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 14 06:35:48.308666 kernel: active return thunk: its_return_thunk Jan 14 06:35:48.308679 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 14 06:35:48.308703 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 14 06:35:48.308716 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 14 06:35:48.308730 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 14 06:35:48.309138 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 14 06:35:48.309173 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 14 06:35:48.309187 kernel: Freeing SMP alternatives memory: 32K Jan 14 06:35:48.309200 kernel: pid_max: default: 32768 minimum: 301 Jan 14 06:35:48.309213 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 14 06:35:48.309226 kernel: landlock: Up and running. Jan 14 06:35:48.309240 kernel: SELinux: Initializing. Jan 14 06:35:48.309268 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 14 06:35:48.309282 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 14 06:35:48.309296 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jan 14 06:35:48.309309 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. 
Jan 14 06:35:48.309323 kernel: signal: max sigframe size: 1776 Jan 14 06:35:48.309337 kernel: rcu: Hierarchical SRCU implementation. Jan 14 06:35:48.309350 kernel: rcu: Max phase no-delay instances is 400. Jan 14 06:35:48.309364 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Jan 14 06:35:48.309388 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 14 06:35:48.309402 kernel: smp: Bringing up secondary CPUs ... Jan 14 06:35:48.309416 kernel: smpboot: x86: Booting SMP configuration: Jan 14 06:35:48.309430 kernel: .... node #0, CPUs: #1 Jan 14 06:35:48.309443 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 06:35:48.309457 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Jan 14 06:35:48.309471 kernel: Memory: 1912060K/2096616K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 178540K reserved, 0K cma-reserved) Jan 14 06:35:48.309495 kernel: devtmpfs: initialized Jan 14 06:35:48.309509 kernel: x86/mm: Memory block size: 128MB Jan 14 06:35:48.309523 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 06:35:48.309536 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 14 06:35:48.309550 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 06:35:48.309563 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 06:35:48.309591 kernel: audit: initializing netlink subsys (disabled) Jan 14 06:35:48.309624 kernel: audit: type=2000 audit(1768372544.910:1): state=initialized audit_enabled=0 res=1 Jan 14 06:35:48.309637 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 06:35:48.309650 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 14 06:35:48.309663 kernel: cpuidle: using governor menu Jan 14 06:35:48.309677 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 06:35:48.309690 kernel: dca service started, version 1.12.1 Jan 14 06:35:48.309707 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 14 06:35:48.309721 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 14 06:35:48.309766 kernel: PCI: Using configuration type 1 for base access Jan 14 06:35:48.309793 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 14 06:35:48.309815 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 06:35:48.309830 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 06:35:48.309844 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 06:35:48.309857 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 06:35:48.309871 kernel: ACPI: Added _OSI(Module Device) Jan 14 06:35:48.309897 kernel: ACPI: Added _OSI(Processor Device) Jan 14 06:35:48.309911 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 06:35:48.309925 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 06:35:48.309938 kernel: ACPI: Interpreter enabled Jan 14 06:35:48.309952 kernel: ACPI: PM: (supports S0 S5) Jan 14 06:35:48.309965 kernel: ACPI: Using IOAPIC for interrupt routing Jan 14 06:35:48.309979 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 14 06:35:48.310003 kernel: PCI: Using E820 reservations for host bridge windows Jan 14 06:35:48.310016 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 14 06:35:48.310030 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 14 06:35:48.310462 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 14 06:35:48.310716 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 14 06:35:48.311477 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 14 06:35:48.311517 kernel: PCI host bridge to bus 0000:00 Jan 14 06:35:48.311777 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 14 06:35:48.312003 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 14 06:35:48.312210 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 14 06:35:48.312415 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 14 06:35:48.312639 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 14 06:35:48.312877 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jan 14 06:35:48.313091 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 14 06:35:48.313377 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 14 06:35:48.313637 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Jan 14 06:35:48.313908 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Jan 14 06:35:48.314163 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Jan 14 06:35:48.314389 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Jan 14 06:35:48.314621 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 14 06:35:48.315047 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 06:35:48.315291 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Jan 14 06:35:48.318018 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 14 06:35:48.318254 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 14 06:35:48.318481 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 14 06:35:48.318824 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 06:35:48.319065 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Jan 14 06:35:48.319291 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 14 
06:35:48.319544 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 14 06:35:48.321863 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 14 06:35:48.322133 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 06:35:48.322373 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Jan 14 06:35:48.322606 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 14 06:35:48.322899 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 14 06:35:48.323147 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 14 06:35:48.323407 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 06:35:48.323647 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Jan 14 06:35:48.325863 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 14 06:35:48.326106 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 14 06:35:48.326336 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 14 06:35:48.326599 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 06:35:48.326899 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Jan 14 06:35:48.327153 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 14 06:35:48.327416 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 14 06:35:48.327639 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 14 06:35:48.328620 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 06:35:48.328912 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Jan 14 06:35:48.329139 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 14 06:35:48.329362 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 14 06:35:48.329586 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 14 06:35:48.331279 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 06:35:48.331555 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Jan 14 06:35:48.331870 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 14 06:35:48.332101 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 14 06:35:48.332327 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 14 06:35:48.332566 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 06:35:48.332824 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Jan 14 06:35:48.333070 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 14 06:35:48.333293 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 14 06:35:48.333516 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 14 06:35:48.333796 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 14 06:35:48.334040 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Jan 14 06:35:48.334283 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Jan 14 06:35:48.334521 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Jan 14 06:35:48.334738 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Jan 14 06:35:48.335015 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 14 06:35:48.335238 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Jan 14 06:35:48.335478 kernel: pci 0000:00:04.0: BAR 1 
[mem 0xfea5a000-0xfea5afff] Jan 14 06:35:48.335706 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Jan 14 06:35:48.336007 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 14 06:35:48.336232 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 14 06:35:48.336480 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 14 06:35:48.336727 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Jan 14 06:35:48.336983 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Jan 14 06:35:48.337238 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 14 06:35:48.337460 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 14 06:35:48.337700 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jan 14 06:35:48.337960 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Jan 14 06:35:48.338187 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 14 06:35:48.338477 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 14 06:35:48.338788 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 14 06:35:48.339220 kernel: pci_bus 0000:02: extended config space not accessible Jan 14 06:35:48.339563 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Jan 14 06:35:48.339895 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Jan 14 06:35:48.340132 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 14 06:35:48.340412 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Jan 14 06:35:48.340682 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Jan 14 06:35:48.340996 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 14 06:35:48.341286 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 14 06:35:48.341539 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Jan 14 06:35:48.341787 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 14 06:35:48.342100 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 14 06:35:48.342335 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 14 06:35:48.342563 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 14 06:35:48.342843 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 14 06:35:48.343069 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 14 06:35:48.343106 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 14 06:35:48.343122 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 14 06:35:48.343136 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 14 06:35:48.343150 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 14 06:35:48.343164 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 14 06:35:48.343195 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 14 06:35:48.343210 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 14 06:35:48.343233 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 14 06:35:48.343260 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 14 06:35:48.343274 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 14 06:35:48.343288 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 14 06:35:48.343302 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 14 06:35:48.343316 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 
14 06:35:48.343329 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 14 06:35:48.343353 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 14 06:35:48.343368 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 14 06:35:48.343382 kernel: iommu: Default domain type: Translated Jan 14 06:35:48.343396 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 14 06:35:48.343410 kernel: PCI: Using ACPI for IRQ routing Jan 14 06:35:48.343424 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 14 06:35:48.343438 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 14 06:35:48.343452 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jan 14 06:35:48.343696 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 14 06:35:48.343959 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 14 06:35:48.344180 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 14 06:35:48.344201 kernel: vgaarb: loaded Jan 14 06:35:48.344215 kernel: clocksource: Switched to clocksource kvm-clock Jan 14 06:35:48.344229 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 06:35:48.344259 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 06:35:48.344274 kernel: pnp: PnP ACPI init Jan 14 06:35:48.344536 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 14 06:35:48.344558 kernel: pnp: PnP ACPI: found 5 devices Jan 14 06:35:48.344572 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 14 06:35:48.344586 kernel: NET: Registered PF_INET protocol family Jan 14 06:35:48.344612 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 14 06:35:48.344641 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 14 06:35:48.344656 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 06:35:48.344683 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 06:35:48.344696 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 14 06:35:48.344710 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 14 06:35:48.344723 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 14 06:35:48.344749 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 14 06:35:48.344792 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 06:35:48.344816 kernel: NET: Registered PF_XDP protocol family Jan 14 06:35:48.345040 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jan 14 06:35:48.345368 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 14 06:35:48.345619 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 14 06:35:48.345929 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 14 06:35:48.346171 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 14 06:35:48.346391 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 14 06:35:48.346612 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 14 06:35:48.346903 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 14 06:35:48.347126 kernel: pci 0000:00:02.0: bridge window [io 
0x1000-0x1fff]: assigned Jan 14 06:35:48.347346 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Jan 14 06:35:48.347566 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Jan 14 06:35:48.347911 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Jan 14 06:35:48.348157 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Jan 14 06:35:48.348371 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Jan 14 06:35:48.348612 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Jan 14 06:35:48.348865 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Jan 14 06:35:48.349097 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 14 06:35:48.349388 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 14 06:35:48.349634 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 14 06:35:48.349935 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 14 06:35:48.350181 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 14 06:35:48.350407 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 14 06:35:48.350642 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 14 06:35:48.350902 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 14 06:35:48.351144 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 14 06:35:48.351364 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 14 06:35:48.351615 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 14 06:35:48.351871 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 14 06:35:48.352094 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 14 06:35:48.352334 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 14 06:35:48.352556 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 14 06:35:48.352802 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 14 06:35:48.353039 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 14 06:35:48.353292 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 14 06:35:48.353532 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 14 06:35:48.353773 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 14 06:35:48.354028 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 14 06:35:48.354272 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 14 06:35:48.354480 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 14 06:35:48.354728 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 14 06:35:48.354995 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 14 06:35:48.355231 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 14 06:35:48.355468 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 14 06:35:48.355702 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 14 06:35:48.355970 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 14 06:35:48.356212 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 14 06:35:48.356461 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 14 06:35:48.356680 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 14 06:35:48.356951 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 14 06:35:48.357173 kernel: pci 
0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 14 06:35:48.357434 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 14 06:35:48.357642 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 14 06:35:48.357878 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 14 06:35:48.358086 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 14 06:35:48.358290 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 14 06:35:48.358514 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jan 14 06:35:48.358788 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 14 06:35:48.359018 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jan 14 06:35:48.359239 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 14 06:35:48.359482 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 14 06:35:48.359736 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jan 14 06:35:48.359983 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 14 06:35:48.360205 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 14 06:35:48.360441 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jan 14 06:35:48.360652 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 14 06:35:48.360895 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 14 06:35:48.361138 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 14 06:35:48.361349 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 14 06:35:48.361559 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 14 06:35:48.361798 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jan 14 06:35:48.362027 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 14 06:35:48.362256 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 14 06:35:48.362486 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jan 14 06:35:48.362697 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 14 06:35:48.362945 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 14 06:35:48.363200 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 14 06:35:48.363412 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 14 06:35:48.363655 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 14 06:35:48.363940 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 14 06:35:48.364153 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 14 06:35:48.364385 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 14 06:35:48.364419 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 14 06:35:48.364434 kernel: PCI: CLS 0 bytes, default 64 Jan 14 06:35:48.364463 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 14 06:35:48.364491 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 14 06:35:48.364507 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 14 06:35:48.364522 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 14 06:35:48.364536 kernel: Initialise system trusted keyrings Jan 14 06:35:48.364552 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 14 06:35:48.364566 
kernel: Key type asymmetric registered Jan 14 06:35:48.364592 kernel: Asymmetric key parser 'x509' registered Jan 14 06:35:48.364606 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 14 06:35:48.364621 kernel: io scheduler mq-deadline registered Jan 14 06:35:48.364636 kernel: io scheduler kyber registered Jan 14 06:35:48.364650 kernel: io scheduler bfq registered Jan 14 06:35:48.364911 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 14 06:35:48.365136 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 14 06:35:48.365379 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 06:35:48.365609 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 14 06:35:48.365879 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 14 06:35:48.366103 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 06:35:48.366350 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 14 06:35:48.366591 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 14 06:35:48.366849 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 06:35:48.367075 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 14 06:35:48.367297 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 14 06:35:48.367530 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 06:35:48.367797 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 14 06:35:48.368038 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 14 06:35:48.368262 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 06:35:48.368486 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 14 06:35:48.368708 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 14 06:35:48.368978 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 06:35:48.369203 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 14 06:35:48.369447 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 14 06:35:48.369722 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 06:35:48.369983 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 14 06:35:48.370224 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 14 06:35:48.370465 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 06:35:48.370486 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 14 06:35:48.370501 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 14 06:35:48.370515 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 14 06:35:48.370541 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 06:35:48.370570 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 06:35:48.370584 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 14 
06:35:48.370599 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 14 06:35:48.370613 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 14 06:35:48.370626 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 14 06:35:48.370925 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 14 06:35:48.371156 kernel: rtc_cmos 00:03: registered as rtc0 Jan 14 06:35:48.371401 kernel: rtc_cmos 00:03: setting system clock to 2026-01-14T06:35:46 UTC (1768372546) Jan 14 06:35:48.371645 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 14 06:35:48.371673 kernel: intel_pstate: CPU model not supported Jan 14 06:35:48.371688 kernel: NET: Registered PF_INET6 protocol family Jan 14 06:35:48.371702 kernel: Segment Routing with IPv6 Jan 14 06:35:48.371723 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 06:35:48.371736 kernel: NET: Registered PF_PACKET protocol family Jan 14 06:35:48.371791 kernel: Key type dns_resolver registered Jan 14 06:35:48.371836 kernel: IPI shorthand broadcast: enabled Jan 14 06:35:48.371851 kernel: sched_clock: Marking stable (2120003595, 221352966)->(2467019706, -125663145) Jan 14 06:35:48.371865 kernel: registered taskstats version 1 Jan 14 06:35:48.371880 kernel: Loading compiled-in X.509 certificates Jan 14 06:35:48.371907 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 447f89388dd1db788444733bd6b00fe574646ee9' Jan 14 06:35:48.371922 kernel: Demotion targets for Node 0: null Jan 14 06:35:48.371947 kernel: Key type .fscrypt registered Jan 14 06:35:48.371961 kernel: Key type fscrypt-provisioning registered Jan 14 06:35:48.371976 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 06:35:48.371990 kernel: ima: Allocated hash algorithm: sha1 Jan 14 06:35:48.372004 kernel: ima: No architecture policies found Jan 14 06:35:48.372020 kernel: clk: Disabling unused clocks Jan 14 06:35:48.372034 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 14 06:35:48.372058 kernel: Write protecting the kernel read-only data: 47104k Jan 14 06:35:48.372073 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 14 06:35:48.372088 kernel: Run /init as init process Jan 14 06:35:48.372102 kernel: with arguments: Jan 14 06:35:48.372117 kernel: /init Jan 14 06:35:48.372131 kernel: with environment: Jan 14 06:35:48.372145 kernel: HOME=/ Jan 14 06:35:48.372169 kernel: TERM=linux Jan 14 06:35:48.372184 kernel: ACPI: bus type USB registered Jan 14 06:35:48.372198 kernel: usbcore: registered new interface driver usbfs Jan 14 06:35:48.372213 kernel: usbcore: registered new interface driver hub Jan 14 06:35:48.372227 kernel: usbcore: registered new device driver usb Jan 14 06:35:48.372469 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 14 06:35:48.372719 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 14 06:35:48.373022 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 14 06:35:48.373296 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 14 06:35:48.373538 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 14 06:35:48.373766 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 14 06:35:48.374130 kernel: hub 1-0:1.0: USB hub found Jan 14 06:35:48.374401 kernel: hub 1-0:1.0: 4 ports detected Jan 14 06:35:48.374702 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 14 06:35:48.375018 kernel: hub 2-0:1.0: USB hub found Jan 14 06:35:48.375305 kernel: hub 2-0:1.0: 4 ports detected Jan 14 06:35:48.375327 kernel: SCSI subsystem initialized Jan 14 06:35:48.375342 kernel: libata version 3.00 loaded. Jan 14 06:35:48.375597 kernel: ahci 0000:00:1f.2: version 3.0 Jan 14 06:35:48.375644 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 14 06:35:48.375898 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 14 06:35:48.376121 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 14 06:35:48.376342 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 14 06:35:48.376633 kernel: scsi host0: ahci Jan 14 06:35:48.376933 kernel: scsi host1: ahci Jan 14 06:35:48.377179 kernel: scsi host2: ahci Jan 14 06:35:48.377446 kernel: scsi host3: ahci Jan 14 06:35:48.377678 kernel: scsi host4: ahci Jan 14 06:35:48.377965 kernel: scsi host5: ahci Jan 14 06:35:48.377989 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 lpm-pol 1 Jan 14 06:35:48.378020 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 lpm-pol 1 Jan 14 06:35:48.378035 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 lpm-pol 1 Jan 14 06:35:48.378050 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 lpm-pol 1 Jan 14 06:35:48.378064 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 lpm-pol 1 Jan 14 06:35:48.378079 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 lpm-pol 1 Jan 14 06:35:48.378340 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 14 06:35:48.378379 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 14 06:35:48.378393 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 14 06:35:48.378408 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 06:35:48.378422 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 14 06:35:48.378436 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 14 06:35:48.378451 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 14 06:35:48.378466 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 14 06:35:48.378760 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 14 06:35:48.378800 kernel: usbcore: registered new interface driver usbhid Jan 14 06:35:48.378825 kernel: usbhid: USB HID core driver Jan 14 06:35:48.379047 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 14 06:35:48.379069 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 14 06:35:48.379083 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 14 06:35:48.379115 kernel: GPT:25804799 != 125829119 Jan 14 06:35:48.379391 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 14 06:35:48.379414 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 14 06:35:48.379429 kernel: GPT:25804799 != 125829119 Jan 14 06:35:48.379443 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 14 06:35:48.379457 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 14 06:35:48.379486 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 14 06:35:48.379502 kernel: device-mapper: uevent: version 1.0.3 Jan 14 06:35:48.379517 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 06:35:48.379532 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 14 06:35:48.379547 kernel: raid6: sse2x4 gen() 8113 MB/s Jan 14 06:35:48.379571 kernel: raid6: sse2x2 gen() 5796 MB/s Jan 14 06:35:48.379587 kernel: raid6: sse2x1 gen() 5603 MB/s Jan 14 06:35:48.379611 kernel: raid6: using algorithm sse2x4 gen() 8113 MB/s Jan 14 06:35:48.379626 kernel: raid6: .... xor() 5133 MB/s, rmw enabled Jan 14 06:35:48.379641 kernel: raid6: using ssse3x2 recovery algorithm Jan 14 06:35:48.379655 kernel: xor: automatically using best checksumming function avx Jan 14 06:35:48.379670 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 06:35:48.379693 kernel: BTRFS: device fsid 2c8f2baf-3f08-4641-b860-b6dd41142f72 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (193) Jan 14 06:35:48.379709 kernel: BTRFS info (device dm-0): first mount of filesystem 2c8f2baf-3f08-4641-b860-b6dd41142f72 Jan 14 06:35:48.379734 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 06:35:48.379768 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 06:35:48.379784 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 06:35:48.379798 kernel: loop: module loaded Jan 14 06:35:48.379824 kernel: loop0: detected capacity change from 0 to 100536 Jan 14 06:35:48.379839 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 06:35:48.379856 systemd[1]: Successfully made /usr/ read-only. Jan 14 06:35:48.379889 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 06:35:48.379906 systemd[1]: Detected virtualization kvm. Jan 14 06:35:48.379921 systemd[1]: Detected architecture x86-64. Jan 14 06:35:48.379936 systemd[1]: Running in initrd. Jan 14 06:35:48.379951 systemd[1]: No hostname configured, using default hostname. Jan 14 06:35:48.379967 systemd[1]: Hostname set to . Jan 14 06:35:48.379993 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 06:35:48.380008 systemd[1]: Queued start job for default target initrd.target. Jan 14 06:35:48.380024 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 06:35:48.380039 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 06:35:48.380055 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 06:35:48.380071 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 06:35:48.380096 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 06:35:48.380113 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 06:35:48.380129 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 06:35:48.380144 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 14 06:35:48.380160 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 06:35:48.380175 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 06:35:48.380201 systemd[1]: Reached target paths.target - Path Units. Jan 14 06:35:48.380216 systemd[1]: Reached target slices.target - Slice Units. Jan 14 06:35:48.380231 systemd[1]: Reached target swap.target - Swaps. Jan 14 06:35:48.380247 systemd[1]: Reached target timers.target - Timer Units. Jan 14 06:35:48.380263 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 06:35:48.380278 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 06:35:48.380293 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 06:35:48.380319 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 06:35:48.380334 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 14 06:35:48.380350 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 06:35:48.380365 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 06:35:48.380381 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 06:35:48.380396 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 06:35:48.380412 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 06:35:48.380438 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 06:35:48.380454 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 06:35:48.380469 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 06:35:48.380486 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 06:35:48.380501 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 06:35:48.380516 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 06:35:48.380542 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 06:35:48.380558 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 06:35:48.380574 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 06:35:48.380589 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 06:35:48.380615 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 06:35:48.380631 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 06:35:48.380715 systemd-journald[332]: Collecting audit messages is enabled. Jan 14 06:35:48.380774 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 06:35:48.380792 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 06:35:48.380817 kernel: Bridge firewalling registered Jan 14 06:35:48.380835 systemd-journald[332]: Journal started Jan 14 06:35:48.380875 systemd-journald[332]: Runtime Journal (/run/log/journal/dd7b15db1d1b4488bdb1868a8e1b97ca) is 4.7M, max 37.7M, 33M free. 
Jan 14 06:35:48.345877 systemd-modules-load[333]: Inserted module 'br_netfilter' Jan 14 06:35:48.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.411480 kernel: audit: type=1130 audit(1768372548.403:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.411551 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 06:35:48.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.414414 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 06:35:48.419901 kernel: audit: type=1130 audit(1768372548.412:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.424950 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 06:35:48.431531 kernel: audit: type=1130 audit(1768372548.419:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.431568 kernel: audit: type=1130 audit(1768372548.425:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.432757 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 06:35:48.436944 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 06:35:48.439991 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 06:35:48.443215 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 06:35:48.468583 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 06:35:48.487148 kernel: audit: type=1130 audit(1768372548.469:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.487201 kernel: audit: type=1130 audit(1768372548.481:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:35:48.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.470063 systemd-tmpfiles[350]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 06:35:48.494173 kernel: audit: type=1130 audit(1768372548.487:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.470626 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 06:35:48.500961 kernel: audit: type=1130 audit(1768372548.494:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.487554 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 06:35:48.488678 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 06:35:48.498953 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 06:35:48.507534 kernel: audit: type=1334 audit(1768372548.502:10): prog-id=6 op=LOAD Jan 14 06:35:48.502000 audit: BPF prog-id=6 op=LOAD Jan 14 06:35:48.504914 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 06:35:48.529370 dracut-cmdline[367]: dracut-109 Jan 14 06:35:48.533592 dracut-cmdline[367]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=87e02bed36f442f7915376555bbec9abc9601b29a9acaf045382608b676e1943 Jan 14 06:35:48.580014 systemd-resolved[368]: Positive Trust Anchors: Jan 14 06:35:48.581048 systemd-resolved[368]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 06:35:48.581056 systemd-resolved[368]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 06:35:48.581099 systemd-resolved[368]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 06:35:48.621553 systemd-resolved[368]: Defaulting to hostname 'linux'. 
Jan 14 06:35:48.624032 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 06:35:48.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.625662 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 06:35:48.652797 kernel: Loading iSCSI transport class v2.0-870. Jan 14 06:35:48.671789 kernel: iscsi: registered transport (tcp) Jan 14 06:35:48.700329 kernel: iscsi: registered transport (qla4xxx) Jan 14 06:35:48.700445 kernel: QLogic iSCSI HBA Driver Jan 14 06:35:48.735293 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 06:35:48.776126 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 06:35:48.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.779776 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 06:35:48.846134 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 06:35:48.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.849502 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 06:35:48.851973 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 06:35:48.893593 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 06:35:48.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.894000 audit: BPF prog-id=7 op=LOAD Jan 14 06:35:48.895000 audit: BPF prog-id=8 op=LOAD Jan 14 06:35:48.896975 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 06:35:48.931454 systemd-udevd[596]: Using default interface naming scheme 'v257'. Jan 14 06:35:48.947200 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 06:35:48.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:48.952375 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 06:35:48.994222 dracut-pre-trigger[664]: rd.md=0: removing MD RAID activation Jan 14 06:35:49.003616 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 06:35:49.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:49.005000 audit: BPF prog-id=9 op=LOAD Jan 14 06:35:49.007720 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 06:35:49.038505 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 14 06:35:49.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:49.041733 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 06:35:49.067397 systemd-networkd[714]: lo: Link UP Jan 14 06:35:49.067412 systemd-networkd[714]: lo: Gained carrier Jan 14 06:35:49.069162 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 06:35:49.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:49.070558 systemd[1]: Reached target network.target - Network. Jan 14 06:35:49.192095 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 06:35:49.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:49.197526 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 06:35:49.312207 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 14 06:35:49.366314 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 14 06:35:49.370414 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 06:35:49.395464 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 14 06:35:49.411187 disk-uuid[767]: Primary Header is updated. Jan 14 06:35:49.411187 disk-uuid[767]: Secondary Entries is updated. Jan 14 06:35:49.411187 disk-uuid[767]: Secondary Header is updated. Jan 14 06:35:49.417444 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 06:35:49.478798 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 14 06:35:49.480816 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 06:35:49.524506 kernel: AES CTR mode by8 optimization enabled Jan 14 06:35:49.553054 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 06:35:49.553261 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 06:35:49.565117 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 14 06:35:49.565155 kernel: audit: type=1131 audit(1768372549.554:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:49.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:49.554905 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 06:35:49.568532 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 06:35:49.570332 systemd-networkd[714]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 06:35:49.570340 systemd-networkd[714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 14 06:35:49.576334 systemd-networkd[714]: eth0: Link UP Jan 14 06:35:49.576649 systemd-networkd[714]: eth0: Gained carrier Jan 14 06:35:49.576666 systemd-networkd[714]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 06:35:49.672972 systemd-networkd[714]: eth0: DHCPv4 address 10.230.41.14/30, gateway 10.230.41.13 acquired from 10.230.41.13 Jan 14 06:35:49.779885 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 06:35:49.790831 kernel: audit: type=1130 audit(1768372549.784:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:49.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:49.785831 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 06:35:49.797150 kernel: audit: type=1130 audit(1768372549.790:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:49.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:49.793515 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 06:35:49.797829 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 06:35:49.799565 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 06:35:49.802941 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 06:35:49.832143 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 06:35:49.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:49.839871 kernel: audit: type=1130 audit(1768372549.832:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:50.460631 disk-uuid[768]: Warning: The kernel is still using the old partition table. Jan 14 06:35:50.460631 disk-uuid[768]: The new table will be used at the next reboot or after you Jan 14 06:35:50.460631 disk-uuid[768]: run partprobe(8) or kpartx(8) Jan 14 06:35:50.460631 disk-uuid[768]: The operation has completed successfully. Jan 14 06:35:50.474648 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 06:35:50.475865 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 06:35:50.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:50.480944 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
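[Editor's note] The disk-uuid warning above says the kernel keeps using the old partition table until the next reboot or until partprobe(8)/kpartx(8) is run. As a minimal sketch of that follow-up step, the snippet below simply invokes partprobe, one of the two tools the warning names. The target disk /dev/vda is an assumption inferred from the vda6/vda9 partitions seen later in this log, and the code is illustrative rather than anything the boot chain actually runs.

```python
import subprocess

# Ask the kernel to re-read the freshly rewritten partition table using
# partprobe(8), as suggested by the disk-uuid warning in the log above.
# Assumes partprobe is on PATH and requires root privileges; /dev/vda is
# an assumption based on the vda6/vda9 devices elsewhere in this log.
def reread_partition_table(disk: str = "/dev/vda") -> None:
    subprocess.run(["partprobe", disk], check=True)

if __name__ == "__main__":
    reread_partition_table()
```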
Jan 14 06:35:50.487592 kernel: audit: type=1130 audit(1768372550.476:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:50.487634 kernel: audit: type=1131 audit(1768372550.476:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:50.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:50.525799 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (856) Jan 14 06:35:50.529274 kernel: BTRFS info (device vda6): first mount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01 Jan 14 06:35:50.529328 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 06:35:50.536549 kernel: BTRFS info (device vda6): turning on async discard Jan 14 06:35:50.536608 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 06:35:50.545808 kernel: BTRFS info (device vda6): last unmount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01 Jan 14 06:35:50.547099 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 06:35:50.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:50.550934 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 06:35:50.555030 kernel: audit: type=1130 audit(1768372550.547:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:50.884650 ignition[875]: Ignition 2.24.0 Jan 14 06:35:50.884713 ignition[875]: Stage: fetch-offline Jan 14 06:35:50.884929 ignition[875]: no configs at "/usr/lib/ignition/base.d" Jan 14 06:35:50.884951 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 06:35:50.885324 ignition[875]: parsed url from cmdline: "" Jan 14 06:35:50.885333 ignition[875]: no config URL provided Jan 14 06:35:50.885342 ignition[875]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 06:35:50.898051 kernel: audit: type=1130 audit(1768372550.891:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:50.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:50.890856 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 06:35:50.885371 ignition[875]: no config at "/usr/lib/ignition/user.ign" Jan 14 06:35:50.895048 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 14 06:35:50.885384 ignition[875]: failed to fetch config: resource requires networking Jan 14 06:35:50.886247 ignition[875]: Ignition finished successfully Jan 14 06:35:50.964995 ignition[881]: Ignition 2.24.0 Jan 14 06:35:50.965020 ignition[881]: Stage: fetch Jan 14 06:35:50.965242 ignition[881]: no configs at "/usr/lib/ignition/base.d" Jan 14 06:35:50.965259 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 06:35:50.965384 ignition[881]: parsed url from cmdline: "" Jan 14 06:35:50.965391 ignition[881]: no config URL provided Jan 14 06:35:50.965402 ignition[881]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 06:35:50.965415 ignition[881]: no config at "/usr/lib/ignition/user.ign" Jan 14 06:35:50.965609 ignition[881]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 14 06:35:50.965853 ignition[881]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 14 06:35:50.965884 ignition[881]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 14 06:35:50.985539 ignition[881]: GET result: OK Jan 14 06:35:50.986442 ignition[881]: parsing config with SHA512: 96a156bce42190ba60efe389aef9203b8d74efcb55a883cc5e110a1759b143b6480a48c8862d39ed85319b9d17f98d7845af9cbe7a103ce8f947abfb563da040 Jan 14 06:35:50.996293 unknown[881]: fetched base config from "system" Jan 14 06:35:50.996312 unknown[881]: fetched base config from "system" Jan 14 06:35:50.996923 ignition[881]: fetch: fetch complete Jan 14 06:35:50.996322 unknown[881]: fetched user config from "openstack" Jan 14 06:35:50.996932 ignition[881]: fetch: fetch passed Jan 14 06:35:51.005102 kernel: audit: type=1130 audit(1768372550.999:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:50.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:50.999275 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 06:35:50.997003 ignition[881]: Ignition finished successfully Jan 14 06:35:51.002951 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 06:35:51.038015 ignition[887]: Ignition 2.24.0 Jan 14 06:35:51.038039 ignition[887]: Stage: kargs Jan 14 06:35:51.038277 ignition[887]: no configs at "/usr/lib/ignition/base.d" Jan 14 06:35:51.038295 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 06:35:51.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:51.041855 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 06:35:51.049950 kernel: audit: type=1130 audit(1768372551.041:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:51.039554 ignition[887]: kargs: kargs passed Jan 14 06:35:51.045084 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
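[Editor's note] In the fetch stage above, Ignition retrieves user data from the OpenStack metadata endpoint (http://169.254.169.254/openstack/latest/user_data, as logged) and prints the SHA512 of the config before parsing it. The sketch below reproduces just that fetch-and-hash step with the Python standard library; it is an illustration, not Ignition's actual implementation, and it assumes the metadata service is reachable from where it runs.

```python
import hashlib
import urllib.request

# Minimal sketch of the fetch step logged above: query the OpenStack
# metadata service for user_data and compute the SHA512 digest that
# Ignition prints before parsing the config. Illustrative only; this is
# not how Ignition itself is implemented.
USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"

def fetch_user_data(url: str = USER_DATA_URL, timeout: float = 5.0) -> bytes:
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return resp.read()

if __name__ == "__main__":
    data = fetch_user_data()
    print("config SHA512:", hashlib.sha512(data).hexdigest())
```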
Jan 14 06:35:51.039628 ignition[887]: Ignition finished successfully Jan 14 06:35:51.071602 ignition[895]: Ignition 2.24.0 Jan 14 06:35:51.071625 ignition[895]: Stage: disks Jan 14 06:35:51.071913 ignition[895]: no configs at "/usr/lib/ignition/base.d" Jan 14 06:35:51.071930 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 06:35:51.073260 ignition[895]: disks: disks passed Jan 14 06:35:51.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:51.076032 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 06:35:51.073329 ignition[895]: Ignition finished successfully Jan 14 06:35:51.077929 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 06:35:51.078968 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 06:35:51.080356 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 06:35:51.081647 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 06:35:51.084151 systemd[1]: Reached target basic.target - Basic System. Jan 14 06:35:51.086928 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 06:35:51.128795 systemd-fsck[903]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 14 06:35:51.133367 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 06:35:51.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:51.137027 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 06:35:51.281891 kernel: EXT4-fs (vda9): mounted filesystem 06cc0495-6f26-4e6e-84ba-33c1e3a1737c r/w with ordered data mode. Quota mode: none. Jan 14 06:35:51.281335 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 06:35:51.283697 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 06:35:51.287323 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 06:35:51.290869 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 06:35:51.293884 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 14 06:35:51.300929 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 14 06:35:51.301698 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 06:35:51.301771 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 06:35:51.305765 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 06:35:51.312960 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 14 06:35:51.324853 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (911) Jan 14 06:35:51.324900 kernel: BTRFS info (device vda6): first mount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01 Jan 14 06:35:51.326936 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 06:35:51.334046 systemd-networkd[714]: eth0: Gained IPv6LL Jan 14 06:35:51.345426 kernel: BTRFS info (device vda6): turning on async discard Jan 14 06:35:51.345473 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 06:35:51.351057 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 06:35:51.420780 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:35:51.570837 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 06:35:51.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:51.574672 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 06:35:51.576919 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 06:35:51.604408 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 06:35:51.606888 kernel: BTRFS info (device vda6): last unmount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01 Jan 14 06:35:51.633097 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 06:35:51.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:51.646905 ignition[1014]: INFO : Ignition 2.24.0 Jan 14 06:35:51.646905 ignition[1014]: INFO : Stage: mount Jan 14 06:35:51.648558 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 06:35:51.648558 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 06:35:51.651202 ignition[1014]: INFO : mount: mount passed Jan 14 06:35:51.651202 ignition[1014]: INFO : Ignition finished successfully Jan 14 06:35:51.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:51.651275 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 06:35:52.426385 systemd-networkd[714]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8a43:24:19ff:fee6:290e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8a43:24:19ff:fee6:290e/64 assigned by NDisc. Jan 14 06:35:52.426405 systemd-networkd[714]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Jan 14 06:35:52.452807 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:35:54.462789 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:35:58.477788 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:35:58.486143 coreos-metadata[913]: Jan 14 06:35:58.486 WARN failed to locate config-drive, using the metadata service API instead Jan 14 06:35:58.511959 coreos-metadata[913]: Jan 14 06:35:58.511 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 14 06:35:58.526169 coreos-metadata[913]: Jan 14 06:35:58.526 INFO Fetch successful Jan 14 06:35:58.527848 coreos-metadata[913]: Jan 14 06:35:58.527 INFO wrote hostname srv-2u6n8.gb1.brightbox.com to /sysroot/etc/hostname Jan 14 06:35:58.530004 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 14 06:35:58.530267 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 14 06:35:58.545721 kernel: kauditd_printk_skb: 5 callbacks suppressed Jan 14 06:35:58.545778 kernel: audit: type=1130 audit(1768372558.532:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:58.545803 kernel: audit: type=1131 audit(1768372558.532:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:58.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:58.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:35:58.536876 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 06:35:58.566838 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 06:35:58.605816 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1029) Jan 14 06:35:58.610784 kernel: BTRFS info (device vda6): first mount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01 Jan 14 06:35:58.613763 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 06:35:58.619312 kernel: BTRFS info (device vda6): turning on async discard Jan 14 06:35:58.619369 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 06:35:58.622427 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 14 06:35:58.660131 ignition[1046]: INFO : Ignition 2.24.0 Jan 14 06:35:58.660131 ignition[1046]: INFO : Stage: files Jan 14 06:35:58.662216 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 06:35:58.662216 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 06:35:58.662216 ignition[1046]: DEBUG : files: compiled without relabeling support, skipping Jan 14 06:35:58.665460 ignition[1046]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 06:35:58.665460 ignition[1046]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 06:35:58.669499 ignition[1046]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 06:35:58.670583 ignition[1046]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 06:35:58.673391 unknown[1046]: wrote ssh authorized keys file for user: core Jan 14 06:35:58.674607 ignition[1046]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 06:35:58.675818 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 14 06:35:58.677047 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 14 06:35:58.870060 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 06:35:59.187546 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 14 06:35:59.188968 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 06:35:59.188968 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 06:35:59.188968 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 06:35:59.188968 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 06:35:59.188968 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 06:35:59.188968 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 06:35:59.188968 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 06:35:59.188968 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 06:35:59.197587 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 06:35:59.197587 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 06:35:59.197587 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 14 06:35:59.197587 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 14 06:35:59.197587 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 14 06:35:59.197587 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 14 06:35:59.520900 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 06:36:01.231975 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 14 06:36:01.231975 ignition[1046]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 06:36:01.235231 ignition[1046]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 06:36:01.237984 ignition[1046]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 06:36:01.237984 ignition[1046]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 06:36:01.237984 ignition[1046]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 14 06:36:01.249392 kernel: audit: type=1130 audit(1768372561.243:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.249533 ignition[1046]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 06:36:01.249533 ignition[1046]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 06:36:01.249533 ignition[1046]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 06:36:01.249533 ignition[1046]: INFO : files: files passed Jan 14 06:36:01.249533 ignition[1046]: INFO : Ignition finished successfully Jan 14 06:36:01.240498 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 06:36:01.246951 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 06:36:01.253137 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 06:36:01.266071 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 06:36:01.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.267077 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 06:36:01.273627 kernel: audit: type=1130 audit(1768372561.267:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:36:01.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.278843 kernel: audit: type=1131 audit(1768372561.267:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.285366 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 06:36:01.285366 initrd-setup-root-after-ignition[1078]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 06:36:01.287797 initrd-setup-root-after-ignition[1082]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 06:36:01.288963 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 06:36:01.296236 kernel: audit: type=1130 audit(1768372561.289:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.290641 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 06:36:01.298112 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 06:36:01.366882 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 06:36:01.367052 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 06:36:01.381820 kernel: audit: type=1130 audit(1768372561.367:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.381873 kernel: audit: type=1131 audit(1768372561.367:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.369764 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 06:36:01.382479 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 06:36:01.384310 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 06:36:01.385961 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 06:36:01.424625 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 06:36:01.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 06:36:01.431805 kernel: audit: type=1130 audit(1768372561.425:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.433782 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 06:36:01.462689 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 06:36:01.465348 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 06:36:01.466229 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 06:36:01.467926 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 06:36:01.469281 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 06:36:01.476084 kernel: audit: type=1131 audit(1768372561.470:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.469656 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 06:36:01.476017 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 06:36:01.476976 systemd[1]: Stopped target basic.target - Basic System. Jan 14 06:36:01.479186 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 06:36:01.480419 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 06:36:01.481918 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 06:36:01.484259 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 06:36:01.485225 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 06:36:01.486675 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 06:36:01.488371 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 06:36:01.489765 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 06:36:01.491404 systemd[1]: Stopped target swap.target - Swaps. Jan 14 06:36:01.492453 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 06:36:01.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.492677 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 06:36:01.494327 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 06:36:01.495325 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 06:36:01.496721 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 06:36:01.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.496931 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 06:36:01.498258 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jan 14 06:36:01.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.498518 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 06:36:01.500190 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 06:36:01.500375 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 06:36:01.502263 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 06:36:01.502524 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 06:36:01.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.510028 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 06:36:01.510706 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 06:36:01.510892 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 06:36:01.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.516967 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 06:36:01.517630 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 06:36:01.517907 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 06:36:01.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.521016 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 06:36:01.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.521261 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 06:36:01.524985 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 06:36:01.525351 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 06:36:01.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.535190 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 06:36:01.536103 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 06:36:01.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:36:01.545540 ignition[1102]: INFO : Ignition 2.24.0 Jan 14 06:36:01.545540 ignition[1102]: INFO : Stage: umount Jan 14 06:36:01.547150 ignition[1102]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 06:36:01.547150 ignition[1102]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 06:36:01.550831 ignition[1102]: INFO : umount: umount passed Jan 14 06:36:01.550831 ignition[1102]: INFO : Ignition finished successfully Jan 14 06:36:01.551933 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 06:36:01.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.552107 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 06:36:01.555817 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 06:36:01.555965 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 06:36:01.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.558316 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 06:36:01.559146 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 06:36:01.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.560647 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 06:36:01.561473 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 06:36:01.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.563066 systemd[1]: Stopped target network.target - Network. Jan 14 06:36:01.564400 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 06:36:01.565338 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 06:36:01.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.567112 systemd[1]: Stopped target paths.target - Path Units. Jan 14 06:36:01.568441 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 06:36:01.571829 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 06:36:01.572622 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 06:36:01.574216 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 06:36:01.575553 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 06:36:01.575635 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 06:36:01.576780 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 06:36:01.576839 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 06:36:01.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:36:01.578050 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 06:36:01.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.578146 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 14 06:36:01.579336 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 06:36:01.579447 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 06:36:01.580630 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 06:36:01.580705 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 06:36:01.582135 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 06:36:01.584647 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 06:36:01.593586 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 06:36:01.595443 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 06:36:01.595655 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 06:36:01.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.600291 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 06:36:01.600456 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 06:36:01.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.603704 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 06:36:01.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.603863 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 06:36:01.609000 audit: BPF prog-id=9 op=UNLOAD Jan 14 06:36:01.609000 audit: BPF prog-id=6 op=UNLOAD Jan 14 06:36:01.610156 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 06:36:01.611592 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 06:36:01.611662 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 06:36:01.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.613067 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 06:36:01.613161 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 06:36:01.615654 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 06:36:01.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:36:01.617064 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 06:36:01.617159 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 06:36:01.619869 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 06:36:01.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.619940 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 06:36:01.622321 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 06:36:01.622399 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 06:36:01.626066 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 06:36:01.637201 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 06:36:01.638302 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 06:36:01.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.642026 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 06:36:01.642119 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 06:36:01.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.644181 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 06:36:01.644240 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 06:36:01.644919 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 06:36:01.644991 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 06:36:01.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.647849 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 06:36:01.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.647933 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 06:36:01.650303 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 06:36:01.650380 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 06:36:01.653725 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 06:36:01.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.656639 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 06:36:01.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:36:01.656732 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 06:36:01.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.657664 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 06:36:01.657736 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 06:36:01.659375 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 06:36:01.659463 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 06:36:01.661805 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 06:36:01.661963 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 06:36:01.683568 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 06:36:01.683804 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 06:36:01.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:01.685645 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 06:36:01.687905 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 06:36:01.709249 systemd[1]: Switching root. Jan 14 06:36:01.748418 systemd-journald[332]: Journal stopped Jan 14 06:36:03.288104 systemd-journald[332]: Received SIGTERM from PID 1 (systemd). Jan 14 06:36:03.288201 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 06:36:03.288228 kernel: SELinux: policy capability open_perms=1 Jan 14 06:36:03.288255 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 06:36:03.288285 kernel: SELinux: policy capability always_check_network=0 Jan 14 06:36:03.288305 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 06:36:03.288332 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 06:36:03.288352 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 06:36:03.288377 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 06:36:03.288398 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 06:36:03.288431 systemd[1]: Successfully loaded SELinux policy in 74.620ms. Jan 14 06:36:03.288469 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.152ms. Jan 14 06:36:03.288495 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 06:36:03.288517 systemd[1]: Detected virtualization kvm. 
Jan 14 06:36:03.288539 systemd[1]: Detected architecture x86-64. Jan 14 06:36:03.288561 systemd[1]: Detected first boot. Jan 14 06:36:03.288583 systemd[1]: Hostname set to . Jan 14 06:36:03.288609 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 06:36:03.288631 zram_generator::config[1145]: No configuration found. Jan 14 06:36:03.288654 kernel: Guest personality initialized and is inactive Jan 14 06:36:03.288674 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 14 06:36:03.288694 kernel: Initialized host personality Jan 14 06:36:03.288714 kernel: NET: Registered PF_VSOCK protocol family Jan 14 06:36:03.288774 systemd[1]: Populated /etc with preset unit settings. Jan 14 06:36:03.288799 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 06:36:03.288820 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 06:36:03.288842 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 06:36:03.288869 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 06:36:03.288892 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 06:36:03.288914 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 06:36:03.288939 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 06:36:03.288962 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 06:36:03.288984 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 06:36:03.289006 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 06:36:03.289028 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 06:36:03.289049 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 06:36:03.289070 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 06:36:03.289097 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 06:36:03.289124 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 06:36:03.289146 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 06:36:03.289169 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 06:36:03.289191 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 14 06:36:03.289217 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 06:36:03.289240 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 06:36:03.289261 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 06:36:03.289282 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 06:36:03.289308 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 06:36:03.289330 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 06:36:03.289351 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 06:36:03.289379 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 06:36:03.289401 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. 
Jan 14 06:36:03.289422 systemd[1]: Reached target slices.target - Slice Units. Jan 14 06:36:03.289444 systemd[1]: Reached target swap.target - Swaps. Jan 14 06:36:03.289481 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 06:36:03.289505 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 06:36:03.289526 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 06:36:03.289554 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 06:36:03.289576 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 06:36:03.289597 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 06:36:03.289618 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 06:36:03.289640 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 06:36:03.289662 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 06:36:03.289684 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 06:36:03.289705 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 06:36:03.289732 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 06:36:03.289794 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 06:36:03.289831 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 06:36:03.289853 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 06:36:03.289882 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 06:36:03.289904 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 06:36:03.289931 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 06:36:03.289953 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 06:36:03.289975 systemd[1]: Reached target machines.target - Containers. Jan 14 06:36:03.289996 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 06:36:03.290017 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 06:36:03.290039 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 06:36:03.290060 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 06:36:03.290086 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 06:36:03.290124 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 06:36:03.290147 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 06:36:03.290169 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 06:36:03.290190 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 06:36:03.290211 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 06:36:03.290234 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jan 14 06:36:03.290261 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 06:36:03.290283 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 06:36:03.290305 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 06:36:03.290327 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 06:36:03.290354 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 06:36:03.290376 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 06:36:03.290398 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 06:36:03.290420 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 06:36:03.290442 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 06:36:03.290484 kernel: fuse: init (API version 7.41) Jan 14 06:36:03.290509 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 06:36:03.290537 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 06:36:03.290560 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 06:36:03.290581 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 06:36:03.290604 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 06:36:03.290625 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 06:36:03.290646 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 06:36:03.290672 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 06:36:03.290694 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 06:36:03.290715 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 06:36:03.290737 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 06:36:03.290779 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 06:36:03.290809 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 06:36:03.290835 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 06:36:03.290869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 06:36:03.290891 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 06:36:03.290914 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 06:36:03.290936 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 06:36:03.290962 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 06:36:03.290984 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 06:36:03.291005 kernel: ACPI: bus type drm_connector registered Jan 14 06:36:03.291026 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 06:36:03.291048 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 06:36:03.291071 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 14 06:36:03.291097 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 06:36:03.291120 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 06:36:03.291175 systemd-journald[1233]: Collecting audit messages is enabled. Jan 14 06:36:03.291213 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 06:36:03.291236 systemd-journald[1233]: Journal started Jan 14 06:36:03.291267 systemd-journald[1233]: Runtime Journal (/run/log/journal/dd7b15db1d1b4488bdb1868a8e1b97ca) is 4.7M, max 37.7M, 33M free. Jan 14 06:36:03.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.089000 audit: BPF prog-id=14 op=UNLOAD Jan 14 06:36:03.089000 audit: BPF prog-id=13 op=UNLOAD Jan 14 06:36:03.095000 audit: BPF prog-id=15 op=LOAD Jan 14 06:36:03.095000 audit: BPF prog-id=16 op=LOAD Jan 14 06:36:03.095000 audit: BPF prog-id=17 op=LOAD Jan 14 06:36:03.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.240000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:36:03.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.284000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 06:36:03.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.284000 audit[1233]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7fff1493be80 a2=4000 a3=0 items=0 ppid=1 pid=1233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:03.284000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 06:36:03.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:02.826660 systemd[1]: Queued start job for default target multi-user.target. Jan 14 06:36:02.852137 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 14 06:36:02.852960 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 06:36:03.293843 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 06:36:03.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.312500 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 14 06:36:03.314185 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 06:36:03.314975 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 06:36:03.315021 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 06:36:03.316994 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 06:36:03.317927 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 06:36:03.318088 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 06:36:03.322002 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 06:36:03.325018 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 06:36:03.325826 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 06:36:03.328977 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 06:36:03.331873 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 06:36:03.335925 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 06:36:03.338701 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 06:36:03.345047 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 06:36:03.368041 systemd-journald[1233]: Time spent on flushing to /var/log/journal/dd7b15db1d1b4488bdb1868a8e1b97ca is 53.278ms for 1287 entries. Jan 14 06:36:03.368041 systemd-journald[1233]: System Journal (/var/log/journal/dd7b15db1d1b4488bdb1868a8e1b97ca) is 8M, max 588.1M, 580.1M free. Jan 14 06:36:03.446040 systemd-journald[1233]: Received client request to flush runtime journal. Jan 14 06:36:03.446136 kernel: loop1: detected capacity change from 0 to 111560 Jan 14 06:36:03.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.386799 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 06:36:03.387856 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 06:36:03.393173 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 06:36:03.408254 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 06:36:03.450915 kernel: loop2: detected capacity change from 0 to 8 Jan 14 06:36:03.451268 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 06:36:03.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:36:03.469445 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 14 06:36:03.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.480826 kernel: loop3: detected capacity change from 0 to 50784 Jan 14 06:36:03.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.481920 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 06:36:03.484000 audit: BPF prog-id=18 op=LOAD Jan 14 06:36:03.486000 audit: BPF prog-id=19 op=LOAD Jan 14 06:36:03.486000 audit: BPF prog-id=20 op=LOAD Jan 14 06:36:03.488997 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 06:36:03.491000 audit: BPF prog-id=21 op=LOAD Jan 14 06:36:03.495036 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 06:36:03.498009 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 06:36:03.527826 kernel: loop4: detected capacity change from 0 to 224512 Jan 14 06:36:03.528000 audit: BPF prog-id=22 op=LOAD Jan 14 06:36:03.528000 audit: BPF prog-id=23 op=LOAD Jan 14 06:36:03.528000 audit: BPF prog-id=24 op=LOAD Jan 14 06:36:03.530493 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 06:36:03.533000 audit: BPF prog-id=25 op=LOAD Jan 14 06:36:03.533000 audit: BPF prog-id=26 op=LOAD Jan 14 06:36:03.533000 audit: BPF prog-id=27 op=LOAD Jan 14 06:36:03.535913 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 06:36:03.587772 kernel: loop5: detected capacity change from 0 to 111560 Jan 14 06:36:03.591894 systemd-tmpfiles[1302]: ACLs are not supported, ignoring. Jan 14 06:36:03.591919 systemd-tmpfiles[1302]: ACLs are not supported, ignoring. Jan 14 06:36:03.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.602855 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 06:36:03.604180 kernel: kauditd_printk_skb: 97 callbacks suppressed Jan 14 06:36:03.604233 kernel: audit: type=1130 audit(1768372563.602:143): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.620524 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 06:36:03.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.626874 kernel: audit: type=1130 audit(1768372563.621:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:36:03.626940 kernel: loop6: detected capacity change from 0 to 8 Jan 14 06:36:03.637832 kernel: loop7: detected capacity change from 0 to 50784 Jan 14 06:36:03.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.641356 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 06:36:03.646793 kernel: audit: type=1130 audit(1768372563.641:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.662778 kernel: loop1: detected capacity change from 0 to 224512 Jan 14 06:36:03.682008 systemd-nsresourced[1305]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 06:36:03.684020 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 06:36:03.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.685312 (sd-merge)[1308]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-openstack.raw'. Jan 14 06:36:03.690002 kernel: audit: type=1130 audit(1768372563.684:146): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:03.700774 (sd-merge)[1308]: Merged extensions into '/usr'. Jan 14 06:36:03.707894 systemd[1]: Reload requested from client PID 1282 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 06:36:03.707926 systemd[1]: Reloading... Jan 14 06:36:04.211794 zram_generator::config[1351]: No configuration found. Jan 14 06:36:04.215639 systemd-oomd[1299]: No swap; memory pressure usage will be degraded Jan 14 06:36:04.294615 systemd-resolved[1300]: Positive Trust Anchors: Jan 14 06:36:04.294798 systemd-resolved[1300]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 06:36:04.294806 systemd-resolved[1300]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 06:36:04.294848 systemd-resolved[1300]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 06:36:04.326925 systemd-resolved[1300]: Using system hostname 'srv-2u6n8.gb1.brightbox.com'. Jan 14 06:36:04.538149 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 06:36:04.539803 systemd[1]: Reloading finished in 831 ms. Jan 14 06:36:04.556796 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. 
Jan 14 06:36:04.573658 kernel: audit: type=1130 audit(1768372564.566:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:04.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:04.571709 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 06:36:04.572910 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 06:36:04.582766 kernel: audit: type=1130 audit(1768372564.571:148): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:04.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:04.587799 kernel: audit: type=1130 audit(1768372564.573:149): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:04.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:04.588729 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 06:36:04.592246 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 06:36:04.600897 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 06:36:04.613146 systemd[1]: Starting ensure-sysext.service... Jan 14 06:36:04.633555 kernel: audit: type=1334 audit(1768372564.629:150): prog-id=28 op=LOAD Jan 14 06:36:04.629000 audit: BPF prog-id=28 op=LOAD Jan 14 06:36:04.621964 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 14 06:36:04.640610 kernel: audit: type=1334 audit(1768372564.633:151): prog-id=25 op=UNLOAD Jan 14 06:36:04.640701 kernel: audit: type=1334 audit(1768372564.633:152): prog-id=29 op=LOAD Jan 14 06:36:04.633000 audit: BPF prog-id=25 op=UNLOAD Jan 14 06:36:04.633000 audit: BPF prog-id=29 op=LOAD Jan 14 06:36:04.633000 audit: BPF prog-id=30 op=LOAD Jan 14 06:36:04.633000 audit: BPF prog-id=26 op=UNLOAD Jan 14 06:36:04.633000 audit: BPF prog-id=27 op=UNLOAD Jan 14 06:36:04.640000 audit: BPF prog-id=31 op=LOAD Jan 14 06:36:04.644000 audit: BPF prog-id=15 op=UNLOAD Jan 14 06:36:04.644000 audit: BPF prog-id=32 op=LOAD Jan 14 06:36:04.644000 audit: BPF prog-id=33 op=LOAD Jan 14 06:36:04.644000 audit: BPF prog-id=16 op=UNLOAD Jan 14 06:36:04.644000 audit: BPF prog-id=17 op=UNLOAD Jan 14 06:36:04.645000 audit: BPF prog-id=34 op=LOAD Jan 14 06:36:04.646000 audit: BPF prog-id=18 op=UNLOAD Jan 14 06:36:04.646000 audit: BPF prog-id=35 op=LOAD Jan 14 06:36:04.646000 audit: BPF prog-id=36 op=LOAD Jan 14 06:36:04.646000 audit: BPF prog-id=19 op=UNLOAD Jan 14 06:36:04.646000 audit: BPF prog-id=20 op=UNLOAD Jan 14 06:36:04.647000 audit: BPF prog-id=37 op=LOAD Jan 14 06:36:04.647000 audit: BPF prog-id=21 op=UNLOAD Jan 14 06:36:04.649000 audit: BPF prog-id=38 op=LOAD Jan 14 06:36:04.649000 audit: BPF prog-id=22 op=UNLOAD Jan 14 06:36:04.649000 audit: BPF prog-id=39 op=LOAD Jan 14 06:36:04.649000 audit: BPF prog-id=40 op=LOAD Jan 14 06:36:04.649000 audit: BPF prog-id=23 op=UNLOAD Jan 14 06:36:04.649000 audit: BPF prog-id=24 op=UNLOAD Jan 14 06:36:04.664601 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 06:36:04.665661 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 06:36:04.678037 systemd[1]: Reload requested from client PID 1408 ('systemctl') (unit ensure-sysext.service)... Jan 14 06:36:04.678071 systemd[1]: Reloading... Jan 14 06:36:04.680833 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 06:36:04.680880 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 06:36:04.681317 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 06:36:04.683602 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Jan 14 06:36:04.683699 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Jan 14 06:36:04.692110 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 06:36:04.692128 systemd-tmpfiles[1409]: Skipping /boot Jan 14 06:36:04.711197 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 06:36:04.711220 systemd-tmpfiles[1409]: Skipping /boot Jan 14 06:36:04.773776 zram_generator::config[1439]: No configuration found. Jan 14 06:36:05.092741 systemd[1]: Reloading finished in 413 ms. Jan 14 06:36:05.106232 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 06:36:05.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:36:05.108000 audit: BPF prog-id=41 op=LOAD Jan 14 06:36:05.108000 audit: BPF prog-id=38 op=UNLOAD Jan 14 06:36:05.108000 audit: BPF prog-id=42 op=LOAD Jan 14 06:36:05.108000 audit: BPF prog-id=43 op=LOAD Jan 14 06:36:05.108000 audit: BPF prog-id=39 op=UNLOAD Jan 14 06:36:05.108000 audit: BPF prog-id=40 op=UNLOAD Jan 14 06:36:05.109000 audit: BPF prog-id=44 op=LOAD Jan 14 06:36:05.109000 audit: BPF prog-id=37 op=UNLOAD Jan 14 06:36:05.110000 audit: BPF prog-id=45 op=LOAD Jan 14 06:36:05.110000 audit: BPF prog-id=34 op=UNLOAD Jan 14 06:36:05.111000 audit: BPF prog-id=46 op=LOAD Jan 14 06:36:05.111000 audit: BPF prog-id=47 op=LOAD Jan 14 06:36:05.111000 audit: BPF prog-id=35 op=UNLOAD Jan 14 06:36:05.111000 audit: BPF prog-id=36 op=UNLOAD Jan 14 06:36:05.113000 audit: BPF prog-id=48 op=LOAD Jan 14 06:36:05.113000 audit: BPF prog-id=28 op=UNLOAD Jan 14 06:36:05.113000 audit: BPF prog-id=49 op=LOAD Jan 14 06:36:05.113000 audit: BPF prog-id=50 op=LOAD Jan 14 06:36:05.113000 audit: BPF prog-id=29 op=UNLOAD Jan 14 06:36:05.113000 audit: BPF prog-id=30 op=UNLOAD Jan 14 06:36:05.115000 audit: BPF prog-id=51 op=LOAD Jan 14 06:36:05.121000 audit: BPF prog-id=31 op=UNLOAD Jan 14 06:36:05.121000 audit: BPF prog-id=52 op=LOAD Jan 14 06:36:05.121000 audit: BPF prog-id=53 op=LOAD Jan 14 06:36:05.121000 audit: BPF prog-id=32 op=UNLOAD Jan 14 06:36:05.121000 audit: BPF prog-id=33 op=UNLOAD Jan 14 06:36:05.126113 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 06:36:05.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:05.139022 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 06:36:05.141235 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 06:36:05.145537 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 06:36:05.279297 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 06:36:05.280000 audit: BPF prog-id=7 op=UNLOAD Jan 14 06:36:05.280000 audit: BPF prog-id=8 op=UNLOAD Jan 14 06:36:05.281000 audit: BPF prog-id=54 op=LOAD Jan 14 06:36:05.282000 audit: BPF prog-id=55 op=LOAD Jan 14 06:36:05.285158 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 06:36:05.293360 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 06:36:05.299982 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 06:36:05.300242 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 06:36:05.303801 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 06:36:05.315719 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 06:36:05.323627 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 06:36:05.324645 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 06:36:05.326032 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 14 06:36:05.326176 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 06:36:05.326332 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 06:36:05.339538 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 06:36:05.340891 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 06:36:05.341175 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 06:36:05.341462 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 06:36:05.341615 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 06:36:05.341781 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 06:36:05.357305 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 06:36:05.357653 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 06:36:05.371486 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 06:36:05.373448 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 06:36:05.373717 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 06:36:05.374358 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 06:36:05.374584 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 06:36:05.387915 systemd[1]: Finished ensure-sysext.service. Jan 14 06:36:05.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:05.390000 audit: BPF prog-id=56 op=LOAD Jan 14 06:36:05.395339 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 14 06:36:05.408000 audit[1507]: SYSTEM_BOOT pid=1507 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 06:36:05.421154 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 14 06:36:05.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:05.458498 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 06:36:05.461362 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 06:36:05.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:05.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:05.465529 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 06:36:05.472851 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 06:36:05.473702 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 06:36:05.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:05.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:05.477459 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 06:36:05.479275 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 06:36:05.482427 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 06:36:05.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:05.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:05.499215 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 06:36:05.500680 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 06:36:05.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:05.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:05.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:36:05.516521 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 06:36:05.546101 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 06:36:05.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:05.547607 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 06:36:05.547000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 06:36:05.547000 audit[1541]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffefb4d8a70 a2=420 a3=0 items=0 ppid=1501 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:05.547000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 06:36:05.548636 augenrules[1541]: No rules Jan 14 06:36:05.550280 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 06:36:05.551084 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 06:36:05.573585 systemd-udevd[1505]: Using default interface naming scheme 'v257'. Jan 14 06:36:05.613329 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 14 06:36:05.615257 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 06:36:05.640899 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 06:36:05.650090 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 06:36:05.828882 systemd-networkd[1552]: lo: Link UP Jan 14 06:36:05.828896 systemd-networkd[1552]: lo: Gained carrier Jan 14 06:36:05.831788 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 06:36:05.833192 systemd[1]: Reached target network.target - Network. Jan 14 06:36:05.838174 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 06:36:05.841870 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 06:36:05.900962 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 14 06:36:05.910840 systemd-networkd[1552]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 06:36:05.910853 systemd-networkd[1552]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 06:36:05.915439 systemd-networkd[1552]: eth0: Link UP Jan 14 06:36:05.916625 systemd-networkd[1552]: eth0: Gained carrier Jan 14 06:36:05.916657 systemd-networkd[1552]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 06:36:05.959551 systemd-networkd[1552]: eth0: DHCPv4 address 10.230.41.14/30, gateway 10.230.41.13 acquired from 10.230.41.13 Jan 14 06:36:05.960656 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. 
Jan 14 06:36:05.988486 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 06:36:06.204469 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 06:36:06.219990 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 06:36:06.232938 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 06:36:06.284851 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 14 06:36:06.286869 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 06:36:06.293774 kernel: ACPI: button: Power Button [PWRF] Jan 14 06:36:06.374634 ldconfig[1503]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 06:36:06.379407 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 06:36:06.384974 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 14 06:36:06.390663 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 06:36:06.406801 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 14 06:36:06.434442 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 06:36:06.435968 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 06:36:06.438016 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 06:36:06.438855 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 06:36:06.440506 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 06:36:06.441937 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 06:36:06.443420 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 06:36:06.444867 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 06:36:06.447990 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 06:36:06.448682 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 06:36:06.449458 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 06:36:06.449500 systemd[1]: Reached target paths.target - Path Units. Jan 14 06:36:06.451300 systemd[1]: Reached target timers.target - Timer Units. Jan 14 06:36:06.455084 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 06:36:06.466332 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 06:36:06.477819 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 06:36:06.480045 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 06:36:06.481821 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 06:36:06.488471 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 06:36:06.489939 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 06:36:06.503888 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jan 14 06:36:06.506982 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 06:36:06.508472 systemd[1]: Reached target basic.target - Basic System. Jan 14 06:36:06.509544 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 06:36:06.510814 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 06:36:06.525396 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 06:36:06.530966 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 06:36:06.539106 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 06:36:06.543142 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 06:36:06.550989 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 06:36:06.556338 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 06:36:06.557129 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 06:36:06.561664 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 14 06:36:06.567684 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 06:36:06.595018 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 06:36:06.601286 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 06:36:06.608165 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 06:36:06.629050 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 06:36:06.629823 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 06:36:06.650825 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:36:06.630548 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 06:36:06.637076 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 06:36:06.641981 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 06:36:06.667777 extend-filesystems[1602]: Found /dev/vda6 Jan 14 06:36:06.668525 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 06:36:06.675068 jq[1601]: false Jan 14 06:36:06.678662 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 06:36:06.683939 oslogin_cache_refresh[1603]: Refreshing passwd entry cache Jan 14 06:36:06.685205 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Refreshing passwd entry cache Jan 14 06:36:06.686793 extend-filesystems[1602]: Found /dev/vda9 Jan 14 06:36:06.692437 extend-filesystems[1602]: Checking size of /dev/vda9 Jan 14 06:36:06.691961 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 06:36:06.692736 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 06:36:06.693089 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 06:36:06.703954 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 14 06:36:06.704596 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 06:36:06.719310 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Failure getting users, quitting Jan 14 06:36:06.719310 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 06:36:06.719310 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Refreshing group entry cache Jan 14 06:36:06.718627 oslogin_cache_refresh[1603]: Failure getting users, quitting Jan 14 06:36:06.718654 oslogin_cache_refresh[1603]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 06:36:06.718774 oslogin_cache_refresh[1603]: Refreshing group entry cache Jan 14 06:36:06.721243 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Failure getting groups, quitting Jan 14 06:36:06.721243 google_oslogin_nss_cache[1603]: oslogin_cache_refresh[1603]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 06:36:06.720239 oslogin_cache_refresh[1603]: Failure getting groups, quitting Jan 14 06:36:06.720253 oslogin_cache_refresh[1603]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 06:36:06.723188 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 14 06:36:06.724832 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 14 06:36:06.739958 jq[1619]: true Jan 14 06:36:06.741570 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 06:36:06.744930 update_engine[1618]: I20260114 06:36:06.744836 1618 main.cc:92] Flatcar Update Engine starting Jan 14 06:36:06.786156 dbus-daemon[1598]: [system] SELinux support is enabled Jan 14 06:36:06.786620 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 06:36:06.803384 update_engine[1618]: I20260114 06:36:06.791507 1618 update_check_scheduler.cc:74] Next update check in 5m57s Jan 14 06:36:06.789287 dbus-daemon[1598]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1552 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 14 06:36:06.794094 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 06:36:06.796134 dbus-daemon[1598]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 14 06:36:06.794145 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 06:36:06.795471 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 06:36:06.795501 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 06:36:06.797242 systemd[1]: Started update-engine.service - Update Engine. Jan 14 06:36:06.817911 extend-filesystems[1602]: Resized partition /dev/vda9 Jan 14 06:36:06.816285 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 14 06:36:06.843990 extend-filesystems[1653]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 06:36:06.854756 jq[1641]: true Jan 14 06:36:06.857790 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 14 06:36:06.931773 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 14138363 blocks Jan 14 06:36:06.962957 tar[1626]: linux-amd64/LICENSE Jan 14 06:36:06.970577 tar[1626]: linux-amd64/helm Jan 14 06:36:07.270825 bash[1672]: Updated "/home/core/.ssh/authorized_keys" Jan 14 06:36:07.270898 systemd-networkd[1552]: eth0: Gained IPv6LL Jan 14 06:36:07.272966 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. Jan 14 06:36:07.274221 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 06:36:07.289870 systemd[1]: Starting sshkeys.service... Jan 14 06:36:07.292411 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 06:36:07.296249 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 06:36:07.320873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:36:07.336014 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 06:36:07.434741 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Jan 14 06:36:07.469419 extend-filesystems[1653]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 14 06:36:07.469419 extend-filesystems[1653]: old_desc_blocks = 1, new_desc_blocks = 7 Jan 14 06:36:07.469419 extend-filesystems[1653]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Jan 14 06:36:07.476959 extend-filesystems[1602]: Resized filesystem in /dev/vda9 Jan 14 06:36:07.471702 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 06:36:07.472333 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 06:36:07.514664 systemd-logind[1614]: Watching system buttons on /dev/input/event3 (Power Button) Jan 14 06:36:07.514712 systemd-logind[1614]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 14 06:36:07.525344 containerd[1642]: time="2026-01-14T06:36:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 06:36:07.565858 containerd[1642]: time="2026-01-14T06:36:07.564096367Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 06:36:07.649979 systemd-logind[1614]: New seat seat0. 
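The resize2fs summary above reports /dev/vda9 growing from 1617920 to 14138363 blocks of 4 KiB. Purely as an illustrative sketch (not part of the log), converting those block counts to sizes shows the root filesystem going from roughly 6.2 GiB to roughly 53.9 GiB:

    # Illustrative only: convert the block counts from the EXT4-fs/resize2fs
    # messages above into sizes; "(4k)" in the log gives the block size.
    BLOCK_SIZE = 4096  # bytes

    def blocks_to_gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {blocks_to_gib(1617920):.2f} GiB")    # ~6.17 GiB
    print(f"after:  {blocks_to_gib(14138363):.2f} GiB")   # ~53.93 GiB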
Jan 14 06:36:07.674269 containerd[1642]: time="2026-01-14T06:36:07.674211548Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.581µs" Jan 14 06:36:07.674269 containerd[1642]: time="2026-01-14T06:36:07.674260662Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 06:36:07.674440 containerd[1642]: time="2026-01-14T06:36:07.674332790Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 06:36:07.674440 containerd[1642]: time="2026-01-14T06:36:07.674365385Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 06:36:07.674650 containerd[1642]: time="2026-01-14T06:36:07.674607873Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 06:36:07.674650 containerd[1642]: time="2026-01-14T06:36:07.674640163Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 06:36:07.674824 containerd[1642]: time="2026-01-14T06:36:07.674795788Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 06:36:07.674878 containerd[1642]: time="2026-01-14T06:36:07.674822392Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 06:36:07.677769 containerd[1642]: time="2026-01-14T06:36:07.675071101Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 06:36:07.677769 containerd[1642]: time="2026-01-14T06:36:07.675109146Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 06:36:07.677769 containerd[1642]: time="2026-01-14T06:36:07.675127088Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 06:36:07.677769 containerd[1642]: time="2026-01-14T06:36:07.675141961Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 06:36:07.677769 containerd[1642]: time="2026-01-14T06:36:07.675463140Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 06:36:07.677769 containerd[1642]: time="2026-01-14T06:36:07.675484076Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 06:36:07.677769 containerd[1642]: time="2026-01-14T06:36:07.675605978Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 06:36:07.700924 containerd[1642]: time="2026-01-14T06:36:07.698434776Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 06:36:07.700924 containerd[1642]: time="2026-01-14T06:36:07.698576917Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 14 06:36:07.700924 containerd[1642]: time="2026-01-14T06:36:07.698608710Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 06:36:07.700924 containerd[1642]: time="2026-01-14T06:36:07.698764746Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 06:36:07.700924 containerd[1642]: time="2026-01-14T06:36:07.699323681Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 06:36:07.700924 containerd[1642]: time="2026-01-14T06:36:07.699489239Z" level=info msg="metadata content store policy set" policy=shared Jan 14 06:36:07.709999 containerd[1642]: time="2026-01-14T06:36:07.709948794Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 06:36:07.710301 containerd[1642]: time="2026-01-14T06:36:07.710058679Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 06:36:07.710301 containerd[1642]: time="2026-01-14T06:36:07.710256399Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 06:36:07.710409 containerd[1642]: time="2026-01-14T06:36:07.710307250Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 06:36:07.710409 containerd[1642]: time="2026-01-14T06:36:07.710342923Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 06:36:07.710409 containerd[1642]: time="2026-01-14T06:36:07.710388542Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 06:36:07.710564 containerd[1642]: time="2026-01-14T06:36:07.710410008Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 06:36:07.710564 containerd[1642]: time="2026-01-14T06:36:07.710428081Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 06:36:07.710564 containerd[1642]: time="2026-01-14T06:36:07.710459064Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 06:36:07.710564 containerd[1642]: time="2026-01-14T06:36:07.710495507Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 06:36:07.710564 containerd[1642]: time="2026-01-14T06:36:07.710547682Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 06:36:07.710729 containerd[1642]: time="2026-01-14T06:36:07.710572770Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 06:36:07.710729 containerd[1642]: time="2026-01-14T06:36:07.710591890Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 06:36:07.710729 containerd[1642]: time="2026-01-14T06:36:07.710612040Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 06:36:07.710895 containerd[1642]: time="2026-01-14T06:36:07.710838538Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 06:36:07.710895 
containerd[1642]: time="2026-01-14T06:36:07.710879469Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 06:36:07.711013 containerd[1642]: time="2026-01-14T06:36:07.710932928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 06:36:07.711013 containerd[1642]: time="2026-01-14T06:36:07.710974347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 06:36:07.711013 containerd[1642]: time="2026-01-14T06:36:07.711003126Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 06:36:07.711165 containerd[1642]: time="2026-01-14T06:36:07.711022523Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 06:36:07.711165 containerd[1642]: time="2026-01-14T06:36:07.711046818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 06:36:07.711165 containerd[1642]: time="2026-01-14T06:36:07.711081356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 06:36:07.711165 containerd[1642]: time="2026-01-14T06:36:07.711111945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 06:36:07.711165 containerd[1642]: time="2026-01-14T06:36:07.711132511Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 06:36:07.711165 containerd[1642]: time="2026-01-14T06:36:07.711149206Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 06:36:07.713161 containerd[1642]: time="2026-01-14T06:36:07.711206747Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 06:36:07.713161 containerd[1642]: time="2026-01-14T06:36:07.711317429Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 06:36:07.713161 containerd[1642]: time="2026-01-14T06:36:07.711360289Z" level=info msg="Start snapshots syncer" Jan 14 06:36:07.713161 containerd[1642]: time="2026-01-14T06:36:07.711416239Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 06:36:07.719933 containerd[1642]: time="2026-01-14T06:36:07.719730659Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 06:36:07.727322 containerd[1642]: time="2026-01-14T06:36:07.719908802Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 06:36:07.727322 containerd[1642]: time="2026-01-14T06:36:07.723778609Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 06:36:07.727322 containerd[1642]: time="2026-01-14T06:36:07.724150732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 06:36:07.727322 containerd[1642]: time="2026-01-14T06:36:07.724185413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 06:36:07.727322 containerd[1642]: time="2026-01-14T06:36:07.724807569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 06:36:07.727322 containerd[1642]: time="2026-01-14T06:36:07.724830256Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 06:36:07.727322 containerd[1642]: time="2026-01-14T06:36:07.725345291Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 06:36:07.727322 containerd[1642]: time="2026-01-14T06:36:07.726815153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 06:36:07.727322 containerd[1642]: time="2026-01-14T06:36:07.726839821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 06:36:07.727322 containerd[1642]: time="2026-01-14T06:36:07.726883696Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 
06:36:07.727322 containerd[1642]: time="2026-01-14T06:36:07.726904469Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 06:36:07.727322 containerd[1642]: time="2026-01-14T06:36:07.726988634Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 06:36:07.727322 containerd[1642]: time="2026-01-14T06:36:07.727253992Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 06:36:07.727322 containerd[1642]: time="2026-01-14T06:36:07.727285506Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 06:36:07.732413 containerd[1642]: time="2026-01-14T06:36:07.727302728Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 06:36:07.732413 containerd[1642]: time="2026-01-14T06:36:07.731212606Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 06:36:07.732413 containerd[1642]: time="2026-01-14T06:36:07.731255737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 06:36:07.732413 containerd[1642]: time="2026-01-14T06:36:07.731291160Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 06:36:07.732413 containerd[1642]: time="2026-01-14T06:36:07.731334668Z" level=info msg="runtime interface created" Jan 14 06:36:07.732413 containerd[1642]: time="2026-01-14T06:36:07.731347889Z" level=info msg="created NRI interface" Jan 14 06:36:07.732413 containerd[1642]: time="2026-01-14T06:36:07.731414634Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 06:36:07.732413 containerd[1642]: time="2026-01-14T06:36:07.731446526Z" level=info msg="Connect containerd service" Jan 14 06:36:07.732413 containerd[1642]: time="2026-01-14T06:36:07.731497838Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 06:36:07.869878 containerd[1642]: time="2026-01-14T06:36:07.868326989Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 06:36:07.890002 dbus-daemon[1598]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 14 06:36:07.898385 dbus-daemon[1598]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1651 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 14 06:36:08.229967 locksmithd[1654]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 06:36:08.252884 sshd_keygen[1655]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 06:36:08.282597 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 06:36:08.284126 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 14 06:36:08.294091 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 06:36:08.303971 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
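The "starting cri plugin" record above embeds containerd's effective CRI configuration as one escaped JSON string (config="{...}"). As a hedged sketch of how a captured journal line like that can be turned back into structured data, the snippet below uses an abbreviated stand-in for the real record; only the three fields it reads (enableSelinux, cni.binDirs, cni.confDir) are taken from the log, everything else is trimmed away:

    import json

    # Abbreviated stand-in for the 'starting cri plugin' line above; the real
    # record carries the full CRI config between config="..." with \" escapes.
    log_line = r'level=info msg="starting cri plugin" config="{\"enableSelinux\":true,\"cni\":{\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\"}}"'

    start = log_line.index('config="') + len('config="')
    raw = log_line[start:log_line.rindex('"')]
    cfg = json.loads(raw.replace('\\"', '"'))

    print(cfg["cni"]["confDir"])   # /etc/cni/net.d
    print(cfg["cni"]["binDirs"])   # ['/opt/cni/bin']
    print(cfg["enableSelinux"])    # True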
Jan 14 06:36:08.339929 containerd[1642]: time="2026-01-14T06:36:08.334386922Z" level=info msg="Start subscribing containerd event" Jan 14 06:36:08.349767 containerd[1642]: time="2026-01-14T06:36:08.340716830Z" level=info msg="Start recovering state" Jan 14 06:36:08.351776 containerd[1642]: time="2026-01-14T06:36:08.338063527Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 06:36:08.352202 containerd[1642]: time="2026-01-14T06:36:08.352173942Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 06:36:08.355774 containerd[1642]: time="2026-01-14T06:36:08.351332038Z" level=info msg="Start event monitor" Jan 14 06:36:08.355774 containerd[1642]: time="2026-01-14T06:36:08.354080179Z" level=info msg="Start cni network conf syncer for default" Jan 14 06:36:08.355774 containerd[1642]: time="2026-01-14T06:36:08.354179579Z" level=info msg="Start streaming server" Jan 14 06:36:08.355774 containerd[1642]: time="2026-01-14T06:36:08.354268790Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 06:36:08.355774 containerd[1642]: time="2026-01-14T06:36:08.354447746Z" level=info msg="runtime interface starting up..." Jan 14 06:36:08.355774 containerd[1642]: time="2026-01-14T06:36:08.354510520Z" level=info msg="starting plugins..." Jan 14 06:36:08.355774 containerd[1642]: time="2026-01-14T06:36:08.354643462Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 06:36:08.391310 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 06:36:08.395323 containerd[1642]: time="2026-01-14T06:36:08.393128345Z" level=info msg="containerd successfully booted in 0.866151s" Jan 14 06:36:08.416688 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 14 06:36:08.428373 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 14 06:36:08.429210 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. Jan 14 06:36:08.434295 systemd-networkd[1552]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8a43:24:19ff:fee6:290e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8a43:24:19ff:fee6:290e/64 assigned by NDisc. Jan 14 06:36:08.434300 systemd-networkd[1552]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 14 06:36:08.435414 systemd[1]: Starting polkit.service - Authorization Manager... Jan 14 06:36:08.561831 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:36:08.579453 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 06:36:08.591456 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 06:36:08.636545 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 06:36:08.637156 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 06:36:08.643181 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 06:36:08.716860 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 06:36:08.727495 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 06:36:08.732447 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 06:36:08.735211 systemd[1]: Reached target getty.target - Login Prompts. 
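The "failed to load cni during init" error above means /etc/cni/net.d (the confDir from the CRI config) is still empty at this point in the boot; the same config points containerd at /opt/cni/bin for plugin binaries. Purely as a hypothetical illustration (the file name, network name and subnet below are invented, and in practice a cluster's CNI add-on writes this file itself), a minimal bridge/host-local conflist could be generated like so:

    import json, pathlib

    # Hypothetical values -- none of these come from this log.
    conf = {
        "cniVersion": "1.0.0",
        "name": "examplenet",
        "plugins": [{
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        }],
    }

    path = pathlib.Path("/etc/cni/net.d/10-examplenet.conflist")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(conf, indent=2) + "\n")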
Jan 14 06:36:08.792087 polkitd[1722]: Started polkitd version 126 Jan 14 06:36:08.801218 polkitd[1722]: Loading rules from directory /etc/polkit-1/rules.d Jan 14 06:36:08.801663 polkitd[1722]: Loading rules from directory /run/polkit-1/rules.d Jan 14 06:36:08.801734 polkitd[1722]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 14 06:36:08.804211 polkitd[1722]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 14 06:36:08.804916 polkitd[1722]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 14 06:36:08.805123 polkitd[1722]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 14 06:36:08.811443 polkitd[1722]: Finished loading, compiling and executing 2 rules Jan 14 06:36:08.812305 systemd[1]: Started polkit.service - Authorization Manager. Jan 14 06:36:08.815537 dbus-daemon[1598]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 14 06:36:08.816315 polkitd[1722]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 14 06:36:08.854673 systemd-hostnamed[1651]: Hostname set to (static) Jan 14 06:36:09.143877 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:36:09.169775 tar[1626]: linux-amd64/README.md Jan 14 06:36:09.193209 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 06:36:09.510889 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. Jan 14 06:36:09.611108 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:36:10.373797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:36:10.394939 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 06:36:11.182864 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:36:11.212952 kubelet[1753]: E0114 06:36:11.212813 1753 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 06:36:11.216460 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 06:36:11.216712 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 06:36:11.218394 systemd[1]: kubelet.service: Consumed 1.946s CPU time, 263.7M memory peak. Jan 14 06:36:11.343191 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 06:36:11.346481 systemd[1]: Started sshd@0-10.230.41.14:22-20.161.92.111:32786.service - OpenSSH per-connection server daemon (20.161.92.111:32786). Jan 14 06:36:11.623798 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:36:11.900154 sshd[1763]: Accepted publickey for core from 20.161.92.111 port 32786 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:36:11.903735 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:36:11.917386 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 06:36:11.926242 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
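The kubelet exit above is the unit starting before /var/lib/kubelet/config.yaml exists; that file is normally written during node bootstrap (for example by kubeadm), and systemd retries the unit later in this log. A trivial, illustrative check of the same precondition:

    from pathlib import Path

    # Illustrative only: the precondition kubelet trips over in the log above.
    cfg = Path("/var/lib/kubelet/config.yaml")
    if cfg.is_file():
        print(f"{cfg} present ({cfg.stat().st_size} bytes)")
    else:
        print(f"{cfg} missing; kubelet will keep exiting until bootstrap writes it")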
Jan 14 06:36:11.941080 systemd-logind[1614]: New session 1 of user core. Jan 14 06:36:11.960905 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 06:36:11.969792 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 06:36:11.995519 (systemd)[1770]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:36:12.002562 systemd-logind[1614]: New session 2 of user core. Jan 14 06:36:12.206548 systemd[1770]: Queued start job for default target default.target. Jan 14 06:36:12.219647 systemd[1770]: Created slice app.slice - User Application Slice. Jan 14 06:36:12.219710 systemd[1770]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 06:36:12.219763 systemd[1770]: Reached target paths.target - Paths. Jan 14 06:36:12.219860 systemd[1770]: Reached target timers.target - Timers. Jan 14 06:36:12.222401 systemd[1770]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 06:36:12.225949 systemd[1770]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 06:36:12.247555 systemd[1770]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 06:36:12.247711 systemd[1770]: Reached target sockets.target - Sockets. Jan 14 06:36:12.257477 systemd[1770]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 06:36:12.257685 systemd[1770]: Reached target basic.target - Basic System. Jan 14 06:36:12.257799 systemd[1770]: Reached target default.target - Main User Target. Jan 14 06:36:12.257901 systemd[1770]: Startup finished in 245ms. Jan 14 06:36:12.258534 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 06:36:12.277273 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 06:36:12.571523 systemd[1]: Started sshd@1-10.230.41.14:22-20.161.92.111:42228.service - OpenSSH per-connection server daemon (20.161.92.111:42228). Jan 14 06:36:13.085692 sshd[1784]: Accepted publickey for core from 20.161.92.111 port 42228 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:36:13.088011 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:36:13.096454 systemd-logind[1614]: New session 3 of user core. Jan 14 06:36:13.109152 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 06:36:13.359860 sshd[1788]: Connection closed by 20.161.92.111 port 42228 Jan 14 06:36:13.360658 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Jan 14 06:36:13.368313 systemd[1]: sshd@1-10.230.41.14:22-20.161.92.111:42228.service: Deactivated successfully. Jan 14 06:36:13.371460 systemd[1]: session-3.scope: Deactivated successfully. Jan 14 06:36:13.373197 systemd-logind[1614]: Session 3 logged out. Waiting for processes to exit. Jan 14 06:36:13.375564 systemd-logind[1614]: Removed session 3. Jan 14 06:36:13.465929 systemd[1]: Started sshd@2-10.230.41.14:22-20.161.92.111:42238.service - OpenSSH per-connection server daemon (20.161.92.111:42238). Jan 14 06:36:13.836708 login[1733]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:36:13.844822 systemd-logind[1614]: New session 4 of user core. Jan 14 06:36:13.853111 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 14 06:36:13.986625 sshd[1794]: Accepted publickey for core from 20.161.92.111 port 42238 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:36:13.988859 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:36:13.995697 systemd-logind[1614]: New session 5 of user core. Jan 14 06:36:14.004180 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 06:36:14.183122 login[1734]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:36:14.190448 systemd-logind[1614]: New session 6 of user core. Jan 14 06:36:14.202106 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 06:36:14.310907 sshd[1814]: Connection closed by 20.161.92.111 port 42238 Jan 14 06:36:14.312951 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Jan 14 06:36:14.319276 systemd-logind[1614]: Session 5 logged out. Waiting for processes to exit. Jan 14 06:36:14.320536 systemd[1]: sshd@2-10.230.41.14:22-20.161.92.111:42238.service: Deactivated successfully. Jan 14 06:36:14.323655 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 06:36:14.326870 systemd-logind[1614]: Removed session 5. Jan 14 06:36:14.476789 systemd[1]: Started sshd@3-10.230.41.14:22-64.225.73.213:41148.service - OpenSSH per-connection server daemon (64.225.73.213:41148). Jan 14 06:36:14.581359 sshd[1831]: Invalid user postgres from 64.225.73.213 port 41148 Jan 14 06:36:14.597394 sshd[1831]: Connection closed by invalid user postgres 64.225.73.213 port 41148 [preauth] Jan 14 06:36:14.600045 systemd[1]: sshd@3-10.230.41.14:22-64.225.73.213:41148.service: Deactivated successfully. Jan 14 06:36:15.212896 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:36:15.223794 coreos-metadata[1597]: Jan 14 06:36:15.222 WARN failed to locate config-drive, using the metadata service API instead Jan 14 06:36:15.251048 coreos-metadata[1597]: Jan 14 06:36:15.250 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 14 06:36:15.259866 coreos-metadata[1597]: Jan 14 06:36:15.259 INFO Fetch failed with 404: resource not found Jan 14 06:36:15.260014 coreos-metadata[1597]: Jan 14 06:36:15.259 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 14 06:36:15.260552 coreos-metadata[1597]: Jan 14 06:36:15.260 INFO Fetch successful Jan 14 06:36:15.260860 coreos-metadata[1597]: Jan 14 06:36:15.260 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 14 06:36:15.275179 coreos-metadata[1597]: Jan 14 06:36:15.275 INFO Fetch successful Jan 14 06:36:15.275389 coreos-metadata[1597]: Jan 14 06:36:15.275 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 14 06:36:15.294869 coreos-metadata[1597]: Jan 14 06:36:15.294 INFO Fetch successful Jan 14 06:36:15.295270 coreos-metadata[1597]: Jan 14 06:36:15.295 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 14 06:36:15.313587 coreos-metadata[1597]: Jan 14 06:36:15.313 INFO Fetch successful Jan 14 06:36:15.313973 coreos-metadata[1597]: Jan 14 06:36:15.313 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 14 06:36:15.340685 coreos-metadata[1597]: Jan 14 06:36:15.340 INFO Fetch successful Jan 14 06:36:15.375712 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
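The coreos-metadata run above first looks for an OpenStack config drive (hence the repeated "/dev/disk/by-label/config-2: Can't lookup blockdev" kernel messages) and, not finding one, falls back to the link-local metadata service, where the OpenStack-style path returns 404 and the /latest/meta-data/ paths succeed. A hedged sketch of that fallback using only the standard library; the URLs are the ones shown in the log, while the 5-second timeout is an arbitrary choice:

    import pathlib, urllib.request

    METADATA = "http://169.254.169.254"

    def fetch(path: str) -> str:
        with urllib.request.urlopen(f"{METADATA}{path}", timeout=5) as resp:
            return resp.read().decode().strip()

    if pathlib.Path("/dev/disk/by-label/config-2").exists():
        print("config drive present; metadata would be read from it instead")
    else:
        # The same endpoints the agent fetches successfully above.
        for path in ("/latest/meta-data/hostname",
                     "/latest/meta-data/instance-id",
                     "/latest/meta-data/instance-type",
                     "/latest/meta-data/local-ipv4",
                     "/latest/meta-data/public-ipv4"):
            print(path, "=>", fetch(path))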
Jan 14 06:36:15.376643 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 06:36:15.641795 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:36:15.655377 coreos-metadata[1721]: Jan 14 06:36:15.655 WARN failed to locate config-drive, using the metadata service API instead Jan 14 06:36:15.677399 coreos-metadata[1721]: Jan 14 06:36:15.677 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 14 06:36:15.700830 coreos-metadata[1721]: Jan 14 06:36:15.700 INFO Fetch successful Jan 14 06:36:15.700830 coreos-metadata[1721]: Jan 14 06:36:15.700 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 14 06:36:15.728025 coreos-metadata[1721]: Jan 14 06:36:15.727 INFO Fetch successful Jan 14 06:36:15.730336 unknown[1721]: wrote ssh authorized keys file for user: core Jan 14 06:36:15.755220 update-ssh-keys[1846]: Updated "/home/core/.ssh/authorized_keys" Jan 14 06:36:15.756655 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 14 06:36:15.759656 systemd[1]: Finished sshkeys.service. Jan 14 06:36:15.762853 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 06:36:15.763119 systemd[1]: Startup finished in 3.352s (kernel) + 14.182s (initrd) + 13.834s (userspace) = 31.369s. Jan 14 06:36:21.418320 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 06:36:21.420840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:36:21.692519 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:36:21.706232 (kubelet)[1857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 06:36:21.791332 kubelet[1857]: E0114 06:36:21.791239 1857 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 06:36:21.796627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 06:36:21.796938 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 06:36:21.797581 systemd[1]: kubelet.service: Consumed 314ms CPU time, 110.8M memory peak. Jan 14 06:36:24.419171 systemd[1]: Started sshd@4-10.230.41.14:22-20.161.92.111:34188.service - OpenSSH per-connection server daemon (20.161.92.111:34188). Jan 14 06:36:24.941903 sshd[1866]: Accepted publickey for core from 20.161.92.111 port 34188 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:36:24.944550 sshd-session[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:36:24.951823 systemd-logind[1614]: New session 7 of user core. Jan 14 06:36:24.967121 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 06:36:25.221980 sshd[1870]: Connection closed by 20.161.92.111 port 34188 Jan 14 06:36:25.222858 sshd-session[1866]: pam_unix(sshd:session): session closed for user core Jan 14 06:36:25.229348 systemd[1]: sshd@4-10.230.41.14:22-20.161.92.111:34188.service: Deactivated successfully. Jan 14 06:36:25.231927 systemd[1]: session-7.scope: Deactivated successfully. 
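The "Startup finished" line above splits the 31.369s boot into 3.352s (kernel), 14.182s (initrd) and 13.834s (userspace). Summing the three printed figures gives 31.368s; the 1 ms difference is consistent with each figure being rounded to milliseconds independently of the total. A quick worked check:

    # Worked check of the 'Startup finished' arithmetic logged above.
    kernel, initrd, userspace = 3.352, 14.182, 13.834
    total_reported = 31.369

    total_from_parts = round(kernel + initrd + userspace, 3)
    print(total_from_parts)                                  # 31.368
    print(abs(total_reported - total_from_parts) <= 0.002)   # True: within rounding of the printed figures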
Jan 14 06:36:25.233103 systemd-logind[1614]: Session 7 logged out. Waiting for processes to exit. Jan 14 06:36:25.235137 systemd-logind[1614]: Removed session 7. Jan 14 06:36:25.323804 systemd[1]: Started sshd@5-10.230.41.14:22-20.161.92.111:34192.service - OpenSSH per-connection server daemon (20.161.92.111:34192). Jan 14 06:36:25.843826 sshd[1876]: Accepted publickey for core from 20.161.92.111 port 34192 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:36:25.845534 sshd-session[1876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:36:25.854354 systemd-logind[1614]: New session 8 of user core. Jan 14 06:36:25.861149 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 06:36:26.114045 sshd[1880]: Connection closed by 20.161.92.111 port 34192 Jan 14 06:36:26.115428 sshd-session[1876]: pam_unix(sshd:session): session closed for user core Jan 14 06:36:26.121163 systemd[1]: sshd@5-10.230.41.14:22-20.161.92.111:34192.service: Deactivated successfully. Jan 14 06:36:26.121208 systemd-logind[1614]: Session 8 logged out. Waiting for processes to exit. Jan 14 06:36:26.124185 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 06:36:26.127177 systemd-logind[1614]: Removed session 8. Jan 14 06:36:26.215649 systemd[1]: Started sshd@6-10.230.41.14:22-20.161.92.111:34200.service - OpenSSH per-connection server daemon (20.161.92.111:34200). Jan 14 06:36:26.728804 sshd[1886]: Accepted publickey for core from 20.161.92.111 port 34200 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:36:26.730660 sshd-session[1886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:36:26.738830 systemd-logind[1614]: New session 9 of user core. Jan 14 06:36:26.748214 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 06:36:27.005311 sshd[1890]: Connection closed by 20.161.92.111 port 34200 Jan 14 06:36:27.006568 sshd-session[1886]: pam_unix(sshd:session): session closed for user core Jan 14 06:36:27.012456 systemd[1]: sshd@6-10.230.41.14:22-20.161.92.111:34200.service: Deactivated successfully. Jan 14 06:36:27.015168 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 06:36:27.016419 systemd-logind[1614]: Session 9 logged out. Waiting for processes to exit. Jan 14 06:36:27.018861 systemd-logind[1614]: Removed session 9. Jan 14 06:36:27.108766 systemd[1]: Started sshd@7-10.230.41.14:22-20.161.92.111:34208.service - OpenSSH per-connection server daemon (20.161.92.111:34208). Jan 14 06:36:27.641050 sshd[1896]: Accepted publickey for core from 20.161.92.111 port 34208 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:36:27.643791 sshd-session[1896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:36:27.655809 systemd-logind[1614]: New session 10 of user core. Jan 14 06:36:27.663066 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 14 06:36:27.855556 sudo[1901]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 06:36:27.856226 sudo[1901]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 06:36:27.869705 sudo[1901]: pam_unix(sudo:session): session closed for user root Jan 14 06:36:27.959854 sshd[1900]: Connection closed by 20.161.92.111 port 34208 Jan 14 06:36:27.961187 sshd-session[1896]: pam_unix(sshd:session): session closed for user core Jan 14 06:36:27.969340 systemd[1]: sshd@7-10.230.41.14:22-20.161.92.111:34208.service: Deactivated successfully. Jan 14 06:36:27.972007 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 06:36:27.973229 systemd-logind[1614]: Session 10 logged out. Waiting for processes to exit. Jan 14 06:36:27.975165 systemd-logind[1614]: Removed session 10. Jan 14 06:36:28.066395 systemd[1]: Started sshd@8-10.230.41.14:22-20.161.92.111:34210.service - OpenSSH per-connection server daemon (20.161.92.111:34210). Jan 14 06:36:28.587412 sshd[1908]: Accepted publickey for core from 20.161.92.111 port 34210 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:36:28.589460 sshd-session[1908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:36:28.596987 systemd-logind[1614]: New session 11 of user core. Jan 14 06:36:28.605020 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 14 06:36:28.779469 sudo[1914]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 06:36:28.779990 sudo[1914]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 06:36:28.792067 sudo[1914]: pam_unix(sudo:session): session closed for user root Jan 14 06:36:28.802119 sudo[1913]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 06:36:28.802626 sudo[1913]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 06:36:28.814967 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 06:36:28.884793 kernel: kauditd_printk_skb: 72 callbacks suppressed Jan 14 06:36:28.884988 kernel: audit: type=1305 audit(1768372588.880:223): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 06:36:28.880000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 06:36:28.880000 audit[1938]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc753b36c0 a2=420 a3=0 items=0 ppid=1919 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:28.883831 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 06:36:28.885505 augenrules[1938]: No rules Jan 14 06:36:28.884289 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 14 06:36:28.890057 sudo[1913]: pam_unix(sudo:session): session closed for user root Jan 14 06:36:28.880000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 06:36:28.891388 kernel: audit: type=1300 audit(1768372588.880:223): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc753b36c0 a2=420 a3=0 items=0 ppid=1919 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:28.891477 kernel: audit: type=1327 audit(1768372588.880:223): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 06:36:28.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:28.894272 kernel: audit: type=1130 audit(1768372588.883:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:28.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:28.897941 kernel: audit: type=1131 audit(1768372588.883:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:28.889000 audit[1913]: USER_END pid=1913 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:36:28.901949 kernel: audit: type=1106 audit(1768372588.889:226): pid=1913 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:36:28.889000 audit[1913]: CRED_DISP pid=1913 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:36:28.906100 kernel: audit: type=1104 audit(1768372588.889:227): pid=1913 uid=500 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:36:28.981798 sshd[1912]: Connection closed by 20.161.92.111 port 34210 Jan 14 06:36:28.980815 sshd-session[1908]: pam_unix(sshd:session): session closed for user core Jan 14 06:36:28.982000 audit[1908]: USER_END pid=1908 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:36:28.988481 systemd[1]: sshd@8-10.230.41.14:22-20.161.92.111:34210.service: Deactivated successfully. 
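The audit records here (and the dockerd-triggered NETFILTER_CFG records further down) carry the triggering command line in the PROCTITLE field as hex-encoded argv with NUL separators. A short sketch that decodes one of the values printed above:

    # The audit PROCTITLE field is the process argv, hex-encoded, with NUL
    # bytes between arguments.  Decode the auditctl record logged above.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(part.decode() for part in raw.split(b"\x00") if part)

    print(decode_proctitle(
        "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    ))
    # -> /sbin/auditctl -R /etc/audit/audit.rules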
Jan 14 06:36:28.990788 kernel: audit: type=1106 audit(1768372588.982:228): pid=1908 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:36:28.983000 audit[1908]: CRED_DISP pid=1908 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:36:28.991295 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 06:36:28.994732 systemd-logind[1614]: Session 11 logged out. Waiting for processes to exit. Jan 14 06:36:28.996189 kernel: audit: type=1104 audit(1768372588.983:229): pid=1908 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:36:28.996245 kernel: audit: type=1131 audit(1768372588.988:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.41.14:22-20.161.92.111:34210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:28.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.41.14:22-20.161.92.111:34210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:28.997389 systemd-logind[1614]: Removed session 11. Jan 14 06:36:29.098079 systemd[1]: Started sshd@9-10.230.41.14:22-20.161.92.111:34224.service - OpenSSH per-connection server daemon (20.161.92.111:34224). Jan 14 06:36:29.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.41.14:22-20.161.92.111:34224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:36:29.617000 audit[1947]: USER_ACCT pid=1947 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:36:29.619037 sshd[1947]: Accepted publickey for core from 20.161.92.111 port 34224 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:36:29.619000 audit[1947]: CRED_ACQ pid=1947 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:36:29.619000 audit[1947]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc6a84a970 a2=3 a3=0 items=0 ppid=1 pid=1947 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:29.619000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:36:29.621379 sshd-session[1947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:36:29.630598 systemd-logind[1614]: New session 12 of user core. Jan 14 06:36:29.641075 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 14 06:36:29.644000 audit[1947]: USER_START pid=1947 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:36:29.648000 audit[1951]: CRED_ACQ pid=1951 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:36:29.808000 audit[1952]: USER_ACCT pid=1952 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:36:29.809335 sudo[1952]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 06:36:29.808000 audit[1952]: CRED_REFR pid=1952 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:36:29.809000 audit[1952]: USER_START pid=1952 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:36:29.809848 sudo[1952]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 06:36:30.705660 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 14 06:36:30.726345 (dockerd)[1973]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 06:36:31.334273 dockerd[1973]: time="2026-01-14T06:36:31.333716773Z" level=info msg="Starting up" Jan 14 06:36:31.335811 dockerd[1973]: time="2026-01-14T06:36:31.335764788Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 06:36:31.366459 dockerd[1973]: time="2026-01-14T06:36:31.366389361Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 06:36:31.429621 dockerd[1973]: time="2026-01-14T06:36:31.429535630Z" level=info msg="Loading containers: start." Jan 14 06:36:31.454519 kernel: Initializing XFRM netlink socket Jan 14 06:36:31.561000 audit[2024]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.561000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcf2c29f20 a2=0 a3=0 items=0 ppid=1973 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.561000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 06:36:31.565000 audit[2026]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.565000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcd17e9710 a2=0 a3=0 items=0 ppid=1973 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.565000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 06:36:31.568000 audit[2028]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.568000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe77735550 a2=0 a3=0 items=0 ppid=1973 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.568000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 06:36:31.571000 audit[2030]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.571000 audit[2030]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd64e06b90 a2=0 a3=0 items=0 ppid=1973 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.571000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 06:36:31.574000 audit[2032]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.574000 audit[2032]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc9a8b7e90 a2=0 a3=0 items=0 ppid=1973 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.574000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 06:36:31.577000 audit[2034]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.577000 audit[2034]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffcbd1c4ab0 a2=0 a3=0 items=0 ppid=1973 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.577000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 06:36:31.580000 audit[2036]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.580000 audit[2036]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe94e28e00 a2=0 a3=0 items=0 ppid=1973 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.580000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 06:36:31.584000 audit[2038]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.584000 audit[2038]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7fff6ed57080 a2=0 a3=0 items=0 ppid=1973 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.584000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 06:36:31.625000 audit[2041]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.625000 audit[2041]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7ffc6c5bf560 a2=0 a3=0 items=0 ppid=1973 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.625000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 06:36:31.629000 audit[2043]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2043 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.629000 audit[2043]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc776e38d0 a2=0 a3=0 items=0 ppid=1973 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.629000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 06:36:31.632000 audit[2045]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.632000 audit[2045]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffcff3e6f30 a2=0 a3=0 items=0 ppid=1973 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.632000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 06:36:31.635000 audit[2047]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.635000 audit[2047]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fffda852620 a2=0 a3=0 items=0 ppid=1973 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.635000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 06:36:31.638000 audit[2049]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.638000 audit[2049]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcc64084b0 a2=0 a3=0 items=0 ppid=1973 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.638000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 06:36:31.695000 audit[2079]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2079 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:36:31.695000 audit[2079]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc2e5ca470 a2=0 a3=0 items=0 ppid=1973 pid=2079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.695000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 06:36:31.698000 audit[2081]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2081 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:36:31.698000 audit[2081]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd8a672400 a2=0 a3=0 items=0 ppid=1973 pid=2081 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.698000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 06:36:31.702000 audit[2083]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:36:31.702000 audit[2083]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1e797a40 a2=0 a3=0 items=0 ppid=1973 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.702000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 06:36:31.705000 audit[2085]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:36:31.705000 audit[2085]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd82adc00 a2=0 a3=0 items=0 ppid=1973 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.705000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 06:36:31.709000 audit[2087]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:36:31.709000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcd1e36be0 a2=0 a3=0 items=0 ppid=1973 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.709000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 06:36:31.712000 audit[2089]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:36:31.712000 audit[2089]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd80cb3c40 a2=0 a3=0 items=0 ppid=1973 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.712000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 06:36:31.715000 audit[2091]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:36:31.715000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd270f3f80 a2=0 a3=0 items=0 ppid=1973 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.715000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 06:36:31.719000 audit[2093]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:36:31.719000 audit[2093]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffde4be7730 a2=0 a3=0 items=0 ppid=1973 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.719000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 06:36:31.723000 audit[2095]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:36:31.723000 audit[2095]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffdb58028d0 a2=0 a3=0 items=0 ppid=1973 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.723000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 06:36:31.726000 audit[2097]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:36:31.726000 audit[2097]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc7affd8b0 a2=0 a3=0 items=0 ppid=1973 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.726000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 06:36:31.729000 audit[2099]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:36:31.729000 audit[2099]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffee52f5dc0 a2=0 a3=0 items=0 ppid=1973 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.729000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 06:36:31.732000 audit[2101]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:36:31.732000 audit[2101]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffedfe8b260 a2=0 a3=0 items=0 ppid=1973 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.732000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 06:36:31.736000 audit[2103]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:36:31.736000 audit[2103]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffc93acb500 a2=0 a3=0 items=0 ppid=1973 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.736000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 06:36:31.744000 audit[2108]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2108 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.744000 audit[2108]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd3caacf40 a2=0 a3=0 items=0 ppid=1973 pid=2108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.744000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 06:36:31.748000 audit[2110]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2110 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.748000 audit[2110]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fff71797440 a2=0 a3=0 items=0 ppid=1973 pid=2110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.748000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 06:36:31.751000 audit[2112]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.751000 audit[2112]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffcd719c260 a2=0 a3=0 items=0 ppid=1973 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.751000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 06:36:31.754000 audit[2114]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:36:31.754000 audit[2114]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffc0f48970 a2=0 a3=0 items=0 ppid=1973 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.754000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 06:36:31.758000 audit[2116]: NETFILTER_CFG table=filter:32 family=10 entries=1 
op=nft_register_rule pid=2116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:36:31.758000 audit[2116]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdcf16e650 a2=0 a3=0 items=0 ppid=1973 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.758000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 06:36:31.761000 audit[2118]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:36:31.761000 audit[2118]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe5e2d9990 a2=0 a3=0 items=0 ppid=1973 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.761000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 06:36:31.772100 systemd-timesyncd[1519]: Network configuration changed, trying to establish connection. Jan 14 06:36:31.787000 audit[2122]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.787000 audit[2122]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7fff1f805e20 a2=0 a3=0 items=0 ppid=1973 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.787000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 06:36:31.791000 audit[2124]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.791000 audit[2124]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffedc363bf0 a2=0 a3=0 items=0 ppid=1973 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.791000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 06:36:31.805000 audit[2132]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.805000 audit[2132]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7fff9f60afa0 a2=0 a3=0 items=0 ppid=1973 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.805000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 06:36:31.819000 audit[2138]: NETFILTER_CFG table=filter:37 family=2 entries=1 
op=nft_register_rule pid=2138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.819000 audit[2138]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff75a84a00 a2=0 a3=0 items=0 ppid=1973 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.819000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 06:36:31.823000 audit[2140]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.823000 audit[2140]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7fff27b71f60 a2=0 a3=0 items=0 ppid=1973 pid=2140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.823000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 06:36:31.826000 audit[2142]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.826000 audit[2142]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdd133a940 a2=0 a3=0 items=0 ppid=1973 pid=2142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.826000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 06:36:31.829000 audit[2144]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.829000 audit[2144]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffcc56be5b0 a2=0 a3=0 items=0 ppid=1973 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.829000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 06:36:31.833000 audit[2146]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:36:31.833000 audit[2146]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe07bacac0 a2=0 a3=0 items=0 ppid=1973 pid=2146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:36:31.833000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 06:36:31.834928 systemd-networkd[1552]: docker0: Link UP Jan 14 06:36:31.839585 dockerd[1973]: time="2026-01-14T06:36:31.839468854Z" level=info msg="Loading containers: done." Jan 14 06:36:31.874171 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck69124730-merged.mount: Deactivated successfully. Jan 14 06:36:31.878511 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 06:36:31.881564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:36:31.883796 dockerd[1973]: time="2026-01-14T06:36:31.882594299Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 06:36:31.883796 dockerd[1973]: time="2026-01-14T06:36:31.883150704Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 06:36:31.883796 dockerd[1973]: time="2026-01-14T06:36:31.883383797Z" level=info msg="Initializing buildkit" Jan 14 06:36:31.913558 dockerd[1973]: time="2026-01-14T06:36:31.913503527Z" level=info msg="Completed buildkit initialization" Jan 14 06:36:31.930798 dockerd[1973]: time="2026-01-14T06:36:31.930578483Z" level=info msg="Daemon has completed initialization" Jan 14 06:36:31.931364 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 06:36:31.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:31.932729 dockerd[1973]: time="2026-01-14T06:36:31.932155916Z" level=info msg="API listen on /run/docker.sock" Jan 14 06:36:32.957116 systemd-resolved[1300]: Clock change detected. Flushing caches. Jan 14 06:36:32.957553 systemd-timesyncd[1519]: Contacted time server [2a01:7e00::f03c:91ff:fe69:38e7]:123 (2.flatcar.pool.ntp.org). Jan 14 06:36:32.957630 systemd-timesyncd[1519]: Initial clock synchronization to Wed 2026-01-14 06:36:32.956168 UTC. Jan 14 06:36:33.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:33.316240 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:36:33.335818 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 06:36:33.454078 kubelet[2192]: E0114 06:36:33.453933 2192 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 06:36:33.458001 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 06:36:33.458266 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
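The long run of NETFILTER_CFG/SYSCALL/PROCTITLE triples above is dockerd building its IPv4 and IPv6 chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) via iptables/ip6tables. The actual command line of each invocation is carried in the PROCTITLE field as hex-encoded, NUL-separated argv. A small decoder makes them readable; the sample value below is copied verbatim from the first iptables record in this log, and the helper itself is just a sketch.

```python
# Sketch: decode the hex PROCTITLE field of the NETFILTER_CFG audit records above
# into the argv that dockerd actually executed.
def decode_proctitle(hex_value: str) -> list[str]:
    """PROCTITLE stores argv as hex-encoded bytes with NUL separators."""
    return bytes.fromhex(hex_value).decode(errors="replace").split("\x00")

sample = "2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"
print(decode_proctitle(sample))
# -> ['/usr/bin/iptables', '--wait', '-t', 'nat', '-N', 'DOCKER']
```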
Jan 14 06:36:33.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 06:36:33.459374 systemd[1]: kubelet.service: Consumed 735ms CPU time, 110.1M memory peak. Jan 14 06:36:34.315308 containerd[1642]: time="2026-01-14T06:36:34.315092053Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 14 06:36:35.511459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2280573619.mount: Deactivated successfully. Jan 14 06:36:38.332298 containerd[1642]: time="2026-01-14T06:36:38.331804917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:36:38.335840 containerd[1642]: time="2026-01-14T06:36:38.335776055Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=27401903" Jan 14 06:36:38.336727 containerd[1642]: time="2026-01-14T06:36:38.336646200Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:36:38.341077 containerd[1642]: time="2026-01-14T06:36:38.341012316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:36:38.343128 containerd[1642]: time="2026-01-14T06:36:38.342555769Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 4.027202109s" Jan 14 06:36:38.343128 containerd[1642]: time="2026-01-14T06:36:38.342634010Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 14 06:36:38.344493 containerd[1642]: time="2026-01-14T06:36:38.344464422Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 14 06:36:39.771715 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 14 06:36:39.771882 kernel: audit: type=1131 audit(1768372599.765:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:39.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:39.765056 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
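The containerd entries above record the kube-apiserver:v1.32.11 pull: 27,401,903 bytes read and a PullImage call that returned after 4.027202109 s. A rough effective transfer rate can be read straight off those two logged numbers; note the duration also covers registry round-trips and unpacking, so this is only a back-of-the-envelope figure, not a measured network throughput.

```python
# Back-of-the-envelope rate for the kube-apiserver pull logged above; both inputs
# are taken from the containerd messages, and the result is approximate because
# the PullImage duration includes unpacking as well as the transfer itself.
bytes_read = 27_401_903      # "bytes read=27401903"
duration_s = 4.027202109     # "in 4.027202109s"

rate_mib_s = bytes_read / duration_s / (1024 * 1024)
print(f"~{rate_mib_s:.1f} MiB/s effective pull rate")  # roughly 6.5 MiB/s
```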
Jan 14 06:36:39.783317 kernel: audit: type=1334 audit(1768372599.781:284): prog-id=61 op=UNLOAD Jan 14 06:36:39.781000 audit: BPF prog-id=61 op=UNLOAD Jan 14 06:36:41.327860 containerd[1642]: time="2026-01-14T06:36:41.327765098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:36:41.330362 containerd[1642]: time="2026-01-14T06:36:41.330318769Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24985199" Jan 14 06:36:41.330952 containerd[1642]: time="2026-01-14T06:36:41.330903873Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:36:41.335460 containerd[1642]: time="2026-01-14T06:36:41.335415480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:36:41.337796 containerd[1642]: time="2026-01-14T06:36:41.337610556Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 2.993013784s" Jan 14 06:36:41.337796 containerd[1642]: time="2026-01-14T06:36:41.337654522Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 14 06:36:41.338958 containerd[1642]: time="2026-01-14T06:36:41.338925950Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 14 06:36:43.696382 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 06:36:43.701617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 06:36:43.799297 containerd[1642]: time="2026-01-14T06:36:43.799118877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:36:43.801684 containerd[1642]: time="2026-01-14T06:36:43.801649903Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19396939" Jan 14 06:36:43.802896 containerd[1642]: time="2026-01-14T06:36:43.802844453Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:36:43.898148 containerd[1642]: time="2026-01-14T06:36:43.898078639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:36:43.899385 containerd[1642]: time="2026-01-14T06:36:43.899340469Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 2.560236364s" Jan 14 06:36:43.899576 containerd[1642]: time="2026-01-14T06:36:43.899401024Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 14 06:36:43.900523 containerd[1642]: time="2026-01-14T06:36:43.900465688Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 14 06:36:43.985111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:36:43.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:43.991417 kernel: audit: type=1130 audit(1768372603.985:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:44.006344 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 06:36:44.094119 kubelet[2278]: E0114 06:36:44.093999 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 06:36:44.097087 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 06:36:44.097447 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 06:36:44.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 06:36:44.098116 systemd[1]: kubelet.service: Consumed 322ms CPU time, 110.3M memory peak. 
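The kubelet failures above (restart counters 2 and 3) all report the same cause: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written by `kubeadm init` or `kubeadm join`, so this crash-loop is expected on a node where kubelet.service is enabled before kubeadm has run; the later start at 06:37:00 stays up once the configuration is in place. The check below is a hypothetical helper, not something present on this host.

```python
# Hypothetical check: report whether the kubelet config that kubeadm writes
# (the missing file named in the errors above) exists yet.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if KUBELET_CONFIG.is_file():
    print(f"{KUBELET_CONFIG} present; kubelet should stay up on the next restart")
else:
    print(f"{KUBELET_CONFIG} missing; kubelet will keep exiting until kubeadm creates it")
```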
Jan 14 06:36:44.102329 kernel: audit: type=1131 audit(1768372604.097:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 06:36:45.927646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1437613724.mount: Deactivated successfully. Jan 14 06:36:46.798438 kernel: audit: type=1130 audit(1768372606.790:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.41.14:22-64.225.73.213:37504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:46.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.41.14:22-64.225.73.213:37504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:46.790577 systemd[1]: Started sshd@10-10.230.41.14:22-64.225.73.213:37504.service - OpenSSH per-connection server daemon (64.225.73.213:37504). Jan 14 06:36:46.937522 sshd[2295]: Invalid user postgres from 64.225.73.213 port 37504 Jan 14 06:36:46.953474 sshd[2295]: Connection closed by invalid user postgres 64.225.73.213 port 37504 [preauth] Jan 14 06:36:46.953000 audit[2295]: USER_ERR pid=2295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:36:46.959300 kernel: audit: type=1109 audit(1768372606.953:288): pid=2295 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:36:46.961750 systemd[1]: sshd@10-10.230.41.14:22-64.225.73.213:37504.service: Deactivated successfully. Jan 14 06:36:46.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.41.14:22-64.225.73.213:37504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:46.967298 kernel: audit: type=1131 audit(1768372606.962:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.41.14:22-64.225.73.213:37504 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:36:47.257732 containerd[1642]: time="2026-01-14T06:36:47.257614702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:36:47.259822 containerd[1642]: time="2026-01-14T06:36:47.259769245Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31158177" Jan 14 06:36:47.260549 containerd[1642]: time="2026-01-14T06:36:47.260500363Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:36:47.263498 containerd[1642]: time="2026-01-14T06:36:47.263454599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:36:47.265375 containerd[1642]: time="2026-01-14T06:36:47.265227722Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 3.364714358s" Jan 14 06:36:47.265375 containerd[1642]: time="2026-01-14T06:36:47.265298688Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 14 06:36:47.266474 containerd[1642]: time="2026-01-14T06:36:47.266333354Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 14 06:36:47.920823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1537333002.mount: Deactivated successfully. 
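A few entries above, sshd records a failed probe for the nonexistent "postgres" account from 64.225.73.213, closed at the preauth stage before systemd tears down the per-connection unit. Extracting the source address from such "Invalid user" lines is a common first step before feeding a blocklist; the regex and sample line below simply mirror the format seen in this log and are not part of the host's tooling.

```python
# Sketch: pull user, address and port out of sshd "Invalid user" probe lines
# like the 64.225.73.213 attempt recorded above.
import re

INVALID_USER = re.compile(r"Invalid user (?P<user>\S+) from (?P<addr>\S+) port (?P<port>\d+)")

line = "sshd[2295]: Invalid user postgres from 64.225.73.213 port 37504"
match = INVALID_USER.search(line)
if match:
    print(match.group("user"), match.group("addr"), match.group("port"))
    # -> postgres 64.225.73.213 37504
```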
Jan 14 06:36:49.477335 containerd[1642]: time="2026-01-14T06:36:49.476820084Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:36:49.479721 containerd[1642]: time="2026-01-14T06:36:49.479692206Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=17572350" Jan 14 06:36:49.481387 containerd[1642]: time="2026-01-14T06:36:49.481351095Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:36:49.485316 containerd[1642]: time="2026-01-14T06:36:49.485252696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:36:49.488421 containerd[1642]: time="2026-01-14T06:36:49.488292837Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.221774575s" Jan 14 06:36:49.488421 containerd[1642]: time="2026-01-14T06:36:49.488347992Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 14 06:36:49.489225 containerd[1642]: time="2026-01-14T06:36:49.488985643Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 06:36:50.550915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3405459509.mount: Deactivated successfully. 
Jan 14 06:36:50.575917 containerd[1642]: time="2026-01-14T06:36:50.575796681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 06:36:50.577873 containerd[1642]: time="2026-01-14T06:36:50.577827588Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 06:36:50.579317 containerd[1642]: time="2026-01-14T06:36:50.579108204Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 06:36:50.582070 containerd[1642]: time="2026-01-14T06:36:50.581989167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 06:36:50.584151 containerd[1642]: time="2026-01-14T06:36:50.583347480Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.094322595s" Jan 14 06:36:50.584151 containerd[1642]: time="2026-01-14T06:36:50.583405230Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 14 06:36:50.584790 containerd[1642]: time="2026-01-14T06:36:50.584747325Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 14 06:36:51.196700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1100220443.mount: Deactivated successfully. Jan 14 06:36:53.175671 update_engine[1618]: I20260114 06:36:53.175482 1618 update_attempter.cc:509] Updating boot flags... Jan 14 06:36:54.194653 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 06:36:54.198521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:36:54.429982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:36:54.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:54.443400 kernel: audit: type=1130 audit(1768372614.429:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:36:54.456872 (kubelet)[2434]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 06:36:54.606582 kubelet[2434]: E0114 06:36:54.606510 2434 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 06:36:54.611829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 06:36:54.612090 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 06:36:54.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 06:36:54.613720 systemd[1]: kubelet.service: Consumed 272ms CPU time, 108.7M memory peak. Jan 14 06:36:54.618324 kernel: audit: type=1131 audit(1768372614.613:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 06:36:56.067145 containerd[1642]: time="2026-01-14T06:36:56.066817143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:36:56.069979 containerd[1642]: time="2026-01-14T06:36:56.069342824Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=55728979" Jan 14 06:36:56.071031 containerd[1642]: time="2026-01-14T06:36:56.070991914Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:36:56.076474 containerd[1642]: time="2026-01-14T06:36:56.076426781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:36:56.078005 containerd[1642]: time="2026-01-14T06:36:56.077963106Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.493161505s" Jan 14 06:36:56.078202 containerd[1642]: time="2026-01-14T06:36:56.078173688Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 14 06:36:59.803717 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:36:59.803980 systemd[1]: kubelet.service: Consumed 272ms CPU time, 108.7M memory peak. Jan 14 06:36:59.818246 kernel: audit: type=1130 audit(1768372619.803:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:36:59.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:59.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:59.823345 kernel: audit: type=1131 audit(1768372619.803:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:36:59.826595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:36:59.862332 systemd[1]: Reload requested from client PID 2470 ('systemctl') (unit session-12.scope)... Jan 14 06:36:59.862403 systemd[1]: Reloading... Jan 14 06:37:00.068384 zram_generator::config[2514]: No configuration found. Jan 14 06:37:00.439503 systemd[1]: Reloading finished in 576 ms. Jan 14 06:37:00.477000 audit: BPF prog-id=65 op=LOAD Jan 14 06:37:00.480354 kernel: audit: type=1334 audit(1768372620.477:294): prog-id=65 op=LOAD Jan 14 06:37:00.478000 audit: BPF prog-id=57 op=UNLOAD Jan 14 06:37:00.484319 kernel: audit: type=1334 audit(1768372620.478:295): prog-id=57 op=UNLOAD Jan 14 06:37:00.484411 kernel: audit: type=1334 audit(1768372620.481:296): prog-id=66 op=LOAD Jan 14 06:37:00.481000 audit: BPF prog-id=66 op=LOAD Jan 14 06:37:00.481000 audit: BPF prog-id=67 op=LOAD Jan 14 06:37:00.486927 kernel: audit: type=1334 audit(1768372620.481:297): prog-id=67 op=LOAD Jan 14 06:37:00.481000 audit: BPF prog-id=54 op=UNLOAD Jan 14 06:37:00.491322 kernel: audit: type=1334 audit(1768372620.481:298): prog-id=54 op=UNLOAD Jan 14 06:37:00.491409 kernel: audit: type=1334 audit(1768372620.481:299): prog-id=55 op=UNLOAD Jan 14 06:37:00.491461 kernel: audit: type=1334 audit(1768372620.482:300): prog-id=68 op=LOAD Jan 14 06:37:00.481000 audit: BPF prog-id=55 op=UNLOAD Jan 14 06:37:00.482000 audit: BPF prog-id=68 op=LOAD Jan 14 06:37:00.482000 audit: BPF prog-id=48 op=UNLOAD Jan 14 06:37:00.493903 kernel: audit: type=1334 audit(1768372620.482:301): prog-id=48 op=UNLOAD Jan 14 06:37:00.482000 audit: BPF prog-id=69 op=LOAD Jan 14 06:37:00.482000 audit: BPF prog-id=70 op=LOAD Jan 14 06:37:00.482000 audit: BPF prog-id=49 op=UNLOAD Jan 14 06:37:00.482000 audit: BPF prog-id=50 op=UNLOAD Jan 14 06:37:00.484000 audit: BPF prog-id=71 op=LOAD Jan 14 06:37:00.484000 audit: BPF prog-id=44 op=UNLOAD Jan 14 06:37:00.485000 audit: BPF prog-id=72 op=LOAD Jan 14 06:37:00.485000 audit: BPF prog-id=56 op=UNLOAD Jan 14 06:37:00.487000 audit: BPF prog-id=73 op=LOAD Jan 14 06:37:00.487000 audit: BPF prog-id=41 op=UNLOAD Jan 14 06:37:00.487000 audit: BPF prog-id=74 op=LOAD Jan 14 06:37:00.487000 audit: BPF prog-id=75 op=LOAD Jan 14 06:37:00.487000 audit: BPF prog-id=42 op=UNLOAD Jan 14 06:37:00.487000 audit: BPF prog-id=43 op=UNLOAD Jan 14 06:37:00.488000 audit: BPF prog-id=76 op=LOAD Jan 14 06:37:00.488000 audit: BPF prog-id=51 op=UNLOAD Jan 14 06:37:00.488000 audit: BPF prog-id=77 op=LOAD Jan 14 06:37:00.488000 audit: BPF prog-id=78 op=LOAD Jan 14 06:37:00.488000 audit: BPF prog-id=52 op=UNLOAD Jan 14 06:37:00.488000 audit: BPF prog-id=53 op=UNLOAD Jan 14 06:37:00.491000 audit: BPF prog-id=79 op=LOAD Jan 14 06:37:00.493000 audit: BPF prog-id=58 op=UNLOAD Jan 14 06:37:00.494000 audit: BPF 
prog-id=80 op=LOAD Jan 14 06:37:00.494000 audit: BPF prog-id=81 op=LOAD Jan 14 06:37:00.494000 audit: BPF prog-id=59 op=UNLOAD Jan 14 06:37:00.494000 audit: BPF prog-id=60 op=UNLOAD Jan 14 06:37:00.495000 audit: BPF prog-id=82 op=LOAD Jan 14 06:37:00.495000 audit: BPF prog-id=45 op=UNLOAD Jan 14 06:37:00.495000 audit: BPF prog-id=83 op=LOAD Jan 14 06:37:00.495000 audit: BPF prog-id=84 op=LOAD Jan 14 06:37:00.495000 audit: BPF prog-id=46 op=UNLOAD Jan 14 06:37:00.495000 audit: BPF prog-id=47 op=UNLOAD Jan 14 06:37:00.500000 audit: BPF prog-id=85 op=LOAD Jan 14 06:37:00.500000 audit: BPF prog-id=64 op=UNLOAD Jan 14 06:37:00.522935 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 06:37:00.523074 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 06:37:00.523644 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:37:00.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 06:37:00.523757 systemd[1]: kubelet.service: Consumed 314ms CPU time, 98.6M memory peak. Jan 14 06:37:00.526445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:37:00.732426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:37:00.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:37:00.766066 (kubelet)[2584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 06:37:00.847602 kubelet[2584]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 06:37:00.847602 kubelet[2584]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 06:37:00.847602 kubelet[2584]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 14 06:37:00.848186 kubelet[2584]: I0114 06:37:00.847774 2584 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 06:37:01.383774 kubelet[2584]: I0114 06:37:01.383686 2584 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 06:37:01.383774 kubelet[2584]: I0114 06:37:01.383737 2584 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 06:37:01.384143 kubelet[2584]: I0114 06:37:01.384112 2584 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 06:37:01.441893 kubelet[2584]: E0114 06:37:01.439971 2584 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.41.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.41.14:6443: connect: connection refused" logger="UnhandledError" Jan 14 06:37:01.444168 kubelet[2584]: I0114 06:37:01.444133 2584 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 06:37:01.458577 kubelet[2584]: I0114 06:37:01.458529 2584 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 06:37:01.472465 kubelet[2584]: I0114 06:37:01.472439 2584 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 14 06:37:01.475599 kubelet[2584]: I0114 06:37:01.475549 2584 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 06:37:01.476005 kubelet[2584]: I0114 06:37:01.475691 2584 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-2u6n8.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 06:37:01.478076 kubelet[2584]: I0114 06:37:01.477953 2584 topology_manager.go:138] "Creating topology manager with none policy" Jan 
14 06:37:01.478076 kubelet[2584]: I0114 06:37:01.477988 2584 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 06:37:01.480809 kubelet[2584]: I0114 06:37:01.480692 2584 state_mem.go:36] "Initialized new in-memory state store" Jan 14 06:37:01.484968 kubelet[2584]: I0114 06:37:01.484944 2584 kubelet.go:446] "Attempting to sync node with API server" Jan 14 06:37:01.485151 kubelet[2584]: I0114 06:37:01.485129 2584 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 06:37:01.489513 kubelet[2584]: I0114 06:37:01.489350 2584 kubelet.go:352] "Adding apiserver pod source" Jan 14 06:37:01.489513 kubelet[2584]: I0114 06:37:01.489441 2584 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 06:37:01.496157 kubelet[2584]: W0114 06:37:01.495758 2584 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.41.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-2u6n8.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.41.14:6443: connect: connection refused Jan 14 06:37:01.496157 kubelet[2584]: E0114 06:37:01.495866 2584 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.41.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-2u6n8.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.41.14:6443: connect: connection refused" logger="UnhandledError" Jan 14 06:37:01.497139 kubelet[2584]: W0114 06:37:01.496604 2584 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.41.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.41.14:6443: connect: connection refused Jan 14 06:37:01.497139 kubelet[2584]: E0114 06:37:01.496707 2584 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.41.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.41.14:6443: connect: connection refused" logger="UnhandledError" Jan 14 06:37:01.498325 kubelet[2584]: I0114 06:37:01.498129 2584 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 06:37:01.501870 kubelet[2584]: I0114 06:37:01.501743 2584 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 06:37:01.507392 kubelet[2584]: W0114 06:37:01.507366 2584 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 14 06:37:01.510194 kubelet[2584]: I0114 06:37:01.509976 2584 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 06:37:01.510194 kubelet[2584]: I0114 06:37:01.510040 2584 server.go:1287] "Started kubelet" Jan 14 06:37:01.517001 kubelet[2584]: I0114 06:37:01.516735 2584 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 06:37:01.517562 kubelet[2584]: I0114 06:37:01.517469 2584 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 06:37:01.518227 kubelet[2584]: I0114 06:37:01.518204 2584 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 06:37:01.524100 kubelet[2584]: I0114 06:37:01.523379 2584 server.go:479] "Adding debug handlers to kubelet server" Jan 14 06:37:01.525619 kubelet[2584]: E0114 06:37:01.522783 2584 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.41.14:6443/api/v1/namespaces/default/events\": dial tcp 10.230.41.14:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-2u6n8.gb1.brightbox.com.188a8587afa14fc9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-2u6n8.gb1.brightbox.com,UID:srv-2u6n8.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-2u6n8.gb1.brightbox.com,},FirstTimestamp:2026-01-14 06:37:01.510004681 +0000 UTC m=+0.733159962,LastTimestamp:2026-01-14 06:37:01.510004681 +0000 UTC m=+0.733159962,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-2u6n8.gb1.brightbox.com,}" Jan 14 06:37:01.530580 kubelet[2584]: I0114 06:37:01.530556 2584 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 06:37:01.539000 audit[2595]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2595 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:01.539000 audit[2595]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe94c35120 a2=0 a3=0 items=0 ppid=2584 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:01.539000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 06:37:01.541000 audit[2596]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2596 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:01.541000 audit[2596]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2c8a1800 a2=0 a3=0 items=0 ppid=2584 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:01.541000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 06:37:01.544319 kubelet[2584]: I0114 06:37:01.544292 2584 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 06:37:01.547822 kubelet[2584]: I0114 06:37:01.547800 2584 volume_manager.go:297] 
"Starting Kubelet Volume Manager" Jan 14 06:37:01.548267 kubelet[2584]: E0114 06:37:01.548240 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-2u6n8.gb1.brightbox.com\" not found" Jan 14 06:37:01.550000 audit[2598]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:01.550000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffd139c7ca0 a2=0 a3=0 items=0 ppid=2584 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:01.550000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 06:37:01.551965 kubelet[2584]: E0114 06:37:01.551146 2584 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.41.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-2u6n8.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.41.14:6443: connect: connection refused" interval="200ms" Jan 14 06:37:01.551965 kubelet[2584]: I0114 06:37:01.551393 2584 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 06:37:01.552376 kubelet[2584]: W0114 06:37:01.552107 2584 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.41.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.41.14:6443: connect: connection refused Jan 14 06:37:01.552376 kubelet[2584]: E0114 06:37:01.552183 2584 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.41.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.41.14:6443: connect: connection refused" logger="UnhandledError" Jan 14 06:37:01.555671 kubelet[2584]: I0114 06:37:01.554544 2584 factory.go:221] Registration of the systemd container factory successfully Jan 14 06:37:01.555671 kubelet[2584]: I0114 06:37:01.554678 2584 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 06:37:01.556619 kubelet[2584]: I0114 06:37:01.556598 2584 reconciler.go:26] "Reconciler: start to sync state" Jan 14 06:37:01.557000 audit[2600]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2600 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:01.557000 audit[2600]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc0368acc0 a2=0 a3=0 items=0 ppid=2584 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:01.557000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 06:37:01.559567 kubelet[2584]: I0114 06:37:01.559538 2584 factory.go:221] Registration of the containerd container factory successfully Jan 14 06:37:01.579000 audit[2604]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2604 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:01.579000 audit[2604]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffee5bfd130 a2=0 a3=0 items=0 ppid=2584 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:01.579000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 06:37:01.581950 kubelet[2584]: I0114 06:37:01.581872 2584 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 06:37:01.582234 kubelet[2584]: E0114 06:37:01.582170 2584 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 06:37:01.583000 audit[2607]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2607 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:01.583000 audit[2607]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc2022c020 a2=0 a3=0 items=0 ppid=2584 pid=2607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:01.583000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 06:37:01.584151 kubelet[2584]: I0114 06:37:01.583863 2584 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 06:37:01.584151 kubelet[2584]: I0114 06:37:01.583906 2584 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 06:37:01.584151 kubelet[2584]: I0114 06:37:01.583962 2584 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 14 06:37:01.584151 kubelet[2584]: I0114 06:37:01.583975 2584 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 06:37:01.584151 kubelet[2584]: E0114 06:37:01.584072 2584 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 06:37:01.585000 audit[2608]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2608 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:01.585000 audit[2608]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc77fd9550 a2=0 a3=0 items=0 ppid=2584 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:01.585000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 06:37:01.586000 audit[2609]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2609 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:01.586000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc9b595c0 a2=0 a3=0 items=0 ppid=2584 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:01.586000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 06:37:01.588000 audit[2610]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:01.588000 audit[2610]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff7722d130 a2=0 a3=0 items=0 ppid=2584 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:01.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 06:37:01.589000 audit[2611]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2611 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:01.589000 audit[2611]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcec57ee70 a2=0 a3=0 items=0 ppid=2584 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:01.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 06:37:01.591000 audit[2612]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2612 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:01.591000 audit[2612]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd614b6ab0 a2=0 a3=0 items=0 ppid=2584 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
06:37:01.591000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 06:37:01.592000 audit[2613]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2613 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:01.592000 audit[2613]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe5a7d68f0 a2=0 a3=0 items=0 ppid=2584 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:01.592000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 06:37:01.595354 kubelet[2584]: W0114 06:37:01.593974 2584 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.41.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.41.14:6443: connect: connection refused Jan 14 06:37:01.595354 kubelet[2584]: E0114 06:37:01.594028 2584 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.41.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.41.14:6443: connect: connection refused" logger="UnhandledError" Jan 14 06:37:01.602393 kubelet[2584]: I0114 06:37:01.601966 2584 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 06:37:01.602393 kubelet[2584]: I0114 06:37:01.601993 2584 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 06:37:01.602393 kubelet[2584]: I0114 06:37:01.602031 2584 state_mem.go:36] "Initialized new in-memory state store" Jan 14 06:37:01.603881 kubelet[2584]: I0114 06:37:01.603857 2584 policy_none.go:49] "None policy: Start" Jan 14 06:37:01.604015 kubelet[2584]: I0114 06:37:01.603994 2584 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 06:37:01.604157 kubelet[2584]: I0114 06:37:01.604138 2584 state_mem.go:35] "Initializing new in-memory state store" Jan 14 06:37:01.614886 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 06:37:01.633115 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 06:37:01.639852 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 14 06:37:01.649076 kubelet[2584]: E0114 06:37:01.648478 2584 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-2u6n8.gb1.brightbox.com\" not found" Jan 14 06:37:01.650052 kubelet[2584]: I0114 06:37:01.650025 2584 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 06:37:01.650486 kubelet[2584]: I0114 06:37:01.650464 2584 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 06:37:01.650659 kubelet[2584]: I0114 06:37:01.650588 2584 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 06:37:01.652973 kubelet[2584]: I0114 06:37:01.652942 2584 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 06:37:01.655457 kubelet[2584]: E0114 06:37:01.655423 2584 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 06:37:01.655755 kubelet[2584]: E0114 06:37:01.655714 2584 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-2u6n8.gb1.brightbox.com\" not found" Jan 14 06:37:01.706046 systemd[1]: Created slice kubepods-burstable-pod093e3e0f4d06660d5eb6b129b8638b40.slice - libcontainer container kubepods-burstable-pod093e3e0f4d06660d5eb6b129b8638b40.slice. Jan 14 06:37:01.718662 kubelet[2584]: E0114 06:37:01.718181 2584 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2u6n8.gb1.brightbox.com\" not found" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:01.723386 systemd[1]: Created slice kubepods-burstable-pod62753937c92d99b736b4039cd801fd05.slice - libcontainer container kubepods-burstable-pod62753937c92d99b736b4039cd801fd05.slice. Jan 14 06:37:01.726527 kubelet[2584]: E0114 06:37:01.726233 2584 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2u6n8.gb1.brightbox.com\" not found" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:01.730797 systemd[1]: Created slice kubepods-burstable-pod105be9a305f327d3eb095dc7ff144413.slice - libcontainer container kubepods-burstable-pod105be9a305f327d3eb095dc7ff144413.slice. 
Jan 14 06:37:01.733316 kubelet[2584]: E0114 06:37:01.733260 2584 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2u6n8.gb1.brightbox.com\" not found" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:01.753149 kubelet[2584]: E0114 06:37:01.752346 2584 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.41.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-2u6n8.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.41.14:6443: connect: connection refused" interval="400ms" Jan 14 06:37:01.755138 kubelet[2584]: I0114 06:37:01.755101 2584 kubelet_node_status.go:75] "Attempting to register node" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:01.755607 kubelet[2584]: E0114 06:37:01.755569 2584 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.41.14:6443/api/v1/nodes\": dial tcp 10.230.41.14:6443: connect: connection refused" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:01.757933 kubelet[2584]: I0114 06:37:01.757881 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62753937c92d99b736b4039cd801fd05-k8s-certs\") pod \"kube-controller-manager-srv-2u6n8.gb1.brightbox.com\" (UID: \"62753937c92d99b736b4039cd801fd05\") " pod="kube-system/kube-controller-manager-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:01.758071 kubelet[2584]: I0114 06:37:01.758036 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/62753937c92d99b736b4039cd801fd05-kubeconfig\") pod \"kube-controller-manager-srv-2u6n8.gb1.brightbox.com\" (UID: \"62753937c92d99b736b4039cd801fd05\") " pod="kube-system/kube-controller-manager-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:01.758170 kubelet[2584]: I0114 06:37:01.758135 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62753937c92d99b736b4039cd801fd05-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-2u6n8.gb1.brightbox.com\" (UID: \"62753937c92d99b736b4039cd801fd05\") " pod="kube-system/kube-controller-manager-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:01.758248 kubelet[2584]: I0114 06:37:01.758187 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/093e3e0f4d06660d5eb6b129b8638b40-k8s-certs\") pod \"kube-apiserver-srv-2u6n8.gb1.brightbox.com\" (UID: \"093e3e0f4d06660d5eb6b129b8638b40\") " pod="kube-system/kube-apiserver-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:01.758248 kubelet[2584]: I0114 06:37:01.758229 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/62753937c92d99b736b4039cd801fd05-flexvolume-dir\") pod \"kube-controller-manager-srv-2u6n8.gb1.brightbox.com\" (UID: \"62753937c92d99b736b4039cd801fd05\") " pod="kube-system/kube-controller-manager-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:01.758347 kubelet[2584]: I0114 06:37:01.758254 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/105be9a305f327d3eb095dc7ff144413-kubeconfig\") pod \"kube-scheduler-srv-2u6n8.gb1.brightbox.com\" (UID: 
\"105be9a305f327d3eb095dc7ff144413\") " pod="kube-system/kube-scheduler-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:01.758347 kubelet[2584]: I0114 06:37:01.758278 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/093e3e0f4d06660d5eb6b129b8638b40-ca-certs\") pod \"kube-apiserver-srv-2u6n8.gb1.brightbox.com\" (UID: \"093e3e0f4d06660d5eb6b129b8638b40\") " pod="kube-system/kube-apiserver-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:01.758347 kubelet[2584]: I0114 06:37:01.758319 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/093e3e0f4d06660d5eb6b129b8638b40-usr-share-ca-certificates\") pod \"kube-apiserver-srv-2u6n8.gb1.brightbox.com\" (UID: \"093e3e0f4d06660d5eb6b129b8638b40\") " pod="kube-system/kube-apiserver-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:01.758347 kubelet[2584]: I0114 06:37:01.758344 2584 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62753937c92d99b736b4039cd801fd05-ca-certs\") pod \"kube-controller-manager-srv-2u6n8.gb1.brightbox.com\" (UID: \"62753937c92d99b736b4039cd801fd05\") " pod="kube-system/kube-controller-manager-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:01.959397 kubelet[2584]: I0114 06:37:01.959220 2584 kubelet_node_status.go:75] "Attempting to register node" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:01.960012 kubelet[2584]: E0114 06:37:01.959959 2584 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.41.14:6443/api/v1/nodes\": dial tcp 10.230.41.14:6443: connect: connection refused" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:02.022103 containerd[1642]: time="2026-01-14T06:37:02.022021369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-2u6n8.gb1.brightbox.com,Uid:093e3e0f4d06660d5eb6b129b8638b40,Namespace:kube-system,Attempt:0,}" Jan 14 06:37:02.029026 containerd[1642]: time="2026-01-14T06:37:02.028794771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-2u6n8.gb1.brightbox.com,Uid:62753937c92d99b736b4039cd801fd05,Namespace:kube-system,Attempt:0,}" Jan 14 06:37:02.035879 containerd[1642]: time="2026-01-14T06:37:02.035843313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-2u6n8.gb1.brightbox.com,Uid:105be9a305f327d3eb095dc7ff144413,Namespace:kube-system,Attempt:0,}" Jan 14 06:37:02.156170 kubelet[2584]: E0114 06:37:02.155027 2584 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.41.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-2u6n8.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.41.14:6443: connect: connection refused" interval="800ms" Jan 14 06:37:02.177648 containerd[1642]: time="2026-01-14T06:37:02.177546494Z" level=info msg="connecting to shim a0771c907bc32a80d9c748378da7539bc796ecfacdffb481888d748dd7eadd74" address="unix:///run/containerd/s/31f3b8989b0ceedc8b1205dc11b59f3bb68d93c96c7d86f10d3ed370147e2b4c" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:37:02.177925 containerd[1642]: time="2026-01-14T06:37:02.177866896Z" level=info msg="connecting to shim d70cb021f80d940f8d88363255df80691457bfd2307bba1d2eba743be5f42ce0" address="unix:///run/containerd/s/ad890cb0c7f60d02a3a0fe16c3efb55d73a542d4800daae0639cac8535258b9b" 
namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:37:02.182934 containerd[1642]: time="2026-01-14T06:37:02.182510440Z" level=info msg="connecting to shim e1a7e0571bd25660d7c7528e120e142086b2c9d8cdbf974f0ef125605433c0fd" address="unix:///run/containerd/s/0d200f2da87a9f009bc9dc9d1b2a3320bd79ba3eb071b92b644e38f4b702716d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:37:02.323772 systemd[1]: Started cri-containerd-a0771c907bc32a80d9c748378da7539bc796ecfacdffb481888d748dd7eadd74.scope - libcontainer container a0771c907bc32a80d9c748378da7539bc796ecfacdffb481888d748dd7eadd74. Jan 14 06:37:02.336574 systemd[1]: Started cri-containerd-d70cb021f80d940f8d88363255df80691457bfd2307bba1d2eba743be5f42ce0.scope - libcontainer container d70cb021f80d940f8d88363255df80691457bfd2307bba1d2eba743be5f42ce0. Jan 14 06:37:02.340238 systemd[1]: Started cri-containerd-e1a7e0571bd25660d7c7528e120e142086b2c9d8cdbf974f0ef125605433c0fd.scope - libcontainer container e1a7e0571bd25660d7c7528e120e142086b2c9d8cdbf974f0ef125605433c0fd. Jan 14 06:37:02.364056 kubelet[2584]: I0114 06:37:02.363995 2584 kubelet_node_status.go:75] "Attempting to register node" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:02.364637 kubelet[2584]: E0114 06:37:02.364593 2584 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.41.14:6443/api/v1/nodes\": dial tcp 10.230.41.14:6443: connect: connection refused" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:02.385000 audit: BPF prog-id=86 op=LOAD Jan 14 06:37:02.386000 audit: BPF prog-id=87 op=LOAD Jan 14 06:37:02.386000 audit[2669]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2638 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130373731633930376263333261383064396337343833373864613735 Jan 14 06:37:02.386000 audit: BPF prog-id=87 op=UNLOAD Jan 14 06:37:02.386000 audit[2669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.386000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130373731633930376263333261383064396337343833373864613735 Jan 14 06:37:02.391000 audit: BPF prog-id=88 op=LOAD Jan 14 06:37:02.391000 audit[2669]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2638 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.391000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130373731633930376263333261383064396337343833373864613735 
Jan 14 06:37:02.391000 audit: BPF prog-id=89 op=LOAD Jan 14 06:37:02.391000 audit[2669]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2638 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.391000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130373731633930376263333261383064396337343833373864613735 Jan 14 06:37:02.391000 audit: BPF prog-id=89 op=UNLOAD Jan 14 06:37:02.391000 audit[2669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.391000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130373731633930376263333261383064396337343833373864613735 Jan 14 06:37:02.391000 audit: BPF prog-id=88 op=UNLOAD Jan 14 06:37:02.391000 audit[2669]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.391000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130373731633930376263333261383064396337343833373864613735 Jan 14 06:37:02.391000 audit: BPF prog-id=90 op=LOAD Jan 14 06:37:02.391000 audit[2669]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2638 pid=2669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.391000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130373731633930376263333261383064396337343833373864613735 Jan 14 06:37:02.392000 audit: BPF prog-id=91 op=LOAD Jan 14 06:37:02.394000 audit: BPF prog-id=92 op=LOAD Jan 14 06:37:02.394000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2644 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531613765303537316264323536363064376337353238653132306531 Jan 14 06:37:02.394000 audit: BPF prog-id=92 op=UNLOAD Jan 14 06:37:02.394000 audit[2679]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2644 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.394000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531613765303537316264323536363064376337353238653132306531 Jan 14 06:37:02.396000 audit: BPF prog-id=93 op=LOAD Jan 14 06:37:02.396000 audit: BPF prog-id=94 op=LOAD Jan 14 06:37:02.396000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2644 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.396000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531613765303537316264323536363064376337353238653132306531 Jan 14 06:37:02.397000 audit: BPF prog-id=95 op=LOAD Jan 14 06:37:02.397000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2644 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531613765303537316264323536363064376337353238653132306531 Jan 14 06:37:02.397000 audit: BPF prog-id=95 op=UNLOAD Jan 14 06:37:02.397000 audit[2679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2644 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531613765303537316264323536363064376337353238653132306531 Jan 14 06:37:02.397000 audit: BPF prog-id=94 op=UNLOAD Jan 14 06:37:02.397000 audit[2679]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2644 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.397000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531613765303537316264323536363064376337353238653132306531 Jan 14 06:37:02.398000 audit: BPF prog-id=96 op=LOAD Jan 14 06:37:02.398000 audit[2679]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2644 pid=2679 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.398000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531613765303537316264323536363064376337353238653132306531 Jan 14 06:37:02.403000 audit: BPF prog-id=97 op=LOAD Jan 14 06:37:02.403000 audit[2676]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2641 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437306362303231663830643934306638643838333633323535646638 Jan 14 06:37:02.403000 audit: BPF prog-id=97 op=UNLOAD Jan 14 06:37:02.403000 audit[2676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2641 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437306362303231663830643934306638643838333633323535646638 Jan 14 06:37:02.403000 audit: BPF prog-id=98 op=LOAD Jan 14 06:37:02.403000 audit[2676]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2641 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437306362303231663830643934306638643838333633323535646638 Jan 14 06:37:02.403000 audit: BPF prog-id=99 op=LOAD Jan 14 06:37:02.403000 audit[2676]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2641 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437306362303231663830643934306638643838333633323535646638 Jan 14 06:37:02.403000 audit: BPF prog-id=99 op=UNLOAD Jan 14 06:37:02.403000 audit[2676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2641 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 14 06:37:02.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437306362303231663830643934306638643838333633323535646638 Jan 14 06:37:02.403000 audit: BPF prog-id=98 op=UNLOAD Jan 14 06:37:02.403000 audit[2676]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2641 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437306362303231663830643934306638643838333633323535646638 Jan 14 06:37:02.404000 audit: BPF prog-id=100 op=LOAD Jan 14 06:37:02.404000 audit[2676]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2641 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437306362303231663830643934306638643838333633323535646638 Jan 14 06:37:02.439093 kubelet[2584]: W0114 06:37:02.439007 2584 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.41.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.41.14:6443: connect: connection refused Jan 14 06:37:02.439426 kubelet[2584]: E0114 06:37:02.439362 2584 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.41.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.41.14:6443: connect: connection refused" logger="UnhandledError" Jan 14 06:37:02.495335 containerd[1642]: time="2026-01-14T06:37:02.495175274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-2u6n8.gb1.brightbox.com,Uid:105be9a305f327d3eb095dc7ff144413,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0771c907bc32a80d9c748378da7539bc796ecfacdffb481888d748dd7eadd74\"" Jan 14 06:37:02.502569 containerd[1642]: time="2026-01-14T06:37:02.502450226Z" level=info msg="CreateContainer within sandbox \"a0771c907bc32a80d9c748378da7539bc796ecfacdffb481888d748dd7eadd74\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 06:37:02.525046 containerd[1642]: time="2026-01-14T06:37:02.524567941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-2u6n8.gb1.brightbox.com,Uid:62753937c92d99b736b4039cd801fd05,Namespace:kube-system,Attempt:0,} returns sandbox id \"d70cb021f80d940f8d88363255df80691457bfd2307bba1d2eba743be5f42ce0\"" Jan 14 06:37:02.528302 containerd[1642]: time="2026-01-14T06:37:02.527837917Z" level=info msg="Container 5ef1bc1cac1d9afdae43a0b43611db3787c5e1dbb678b7a494d14970bd6940c4: CDI devices from CRI Config.CDIDevices: []" Jan 14 
06:37:02.528772 containerd[1642]: time="2026-01-14T06:37:02.527846972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-2u6n8.gb1.brightbox.com,Uid:093e3e0f4d06660d5eb6b129b8638b40,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1a7e0571bd25660d7c7528e120e142086b2c9d8cdbf974f0ef125605433c0fd\"" Jan 14 06:37:02.532340 containerd[1642]: time="2026-01-14T06:37:02.532112381Z" level=info msg="CreateContainer within sandbox \"d70cb021f80d940f8d88363255df80691457bfd2307bba1d2eba743be5f42ce0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 06:37:02.533368 containerd[1642]: time="2026-01-14T06:37:02.533331568Z" level=info msg="CreateContainer within sandbox \"e1a7e0571bd25660d7c7528e120e142086b2c9d8cdbf974f0ef125605433c0fd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 06:37:02.536372 containerd[1642]: time="2026-01-14T06:37:02.536323167Z" level=info msg="CreateContainer within sandbox \"a0771c907bc32a80d9c748378da7539bc796ecfacdffb481888d748dd7eadd74\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5ef1bc1cac1d9afdae43a0b43611db3787c5e1dbb678b7a494d14970bd6940c4\"" Jan 14 06:37:02.537513 containerd[1642]: time="2026-01-14T06:37:02.537481936Z" level=info msg="StartContainer for \"5ef1bc1cac1d9afdae43a0b43611db3787c5e1dbb678b7a494d14970bd6940c4\"" Jan 14 06:37:02.539295 containerd[1642]: time="2026-01-14T06:37:02.539226108Z" level=info msg="connecting to shim 5ef1bc1cac1d9afdae43a0b43611db3787c5e1dbb678b7a494d14970bd6940c4" address="unix:///run/containerd/s/31f3b8989b0ceedc8b1205dc11b59f3bb68d93c96c7d86f10d3ed370147e2b4c" protocol=ttrpc version=3 Jan 14 06:37:02.547839 containerd[1642]: time="2026-01-14T06:37:02.547637283Z" level=info msg="Container af21c4cf42143971c7afcc6873e58571f646f866cdc3bf7581c91ec469025927: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:37:02.553486 containerd[1642]: time="2026-01-14T06:37:02.553424911Z" level=info msg="Container a24d8d0870cec577fb8f5355dccc34a92ba92e662d26925c55c919b7767667c1: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:37:02.557630 containerd[1642]: time="2026-01-14T06:37:02.557502256Z" level=info msg="CreateContainer within sandbox \"e1a7e0571bd25660d7c7528e120e142086b2c9d8cdbf974f0ef125605433c0fd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"af21c4cf42143971c7afcc6873e58571f646f866cdc3bf7581c91ec469025927\"" Jan 14 06:37:02.559797 containerd[1642]: time="2026-01-14T06:37:02.559735045Z" level=info msg="StartContainer for \"af21c4cf42143971c7afcc6873e58571f646f866cdc3bf7581c91ec469025927\"" Jan 14 06:37:02.562522 containerd[1642]: time="2026-01-14T06:37:02.562482058Z" level=info msg="connecting to shim af21c4cf42143971c7afcc6873e58571f646f866cdc3bf7581c91ec469025927" address="unix:///run/containerd/s/0d200f2da87a9f009bc9dc9d1b2a3320bd79ba3eb071b92b644e38f4b702716d" protocol=ttrpc version=3 Jan 14 06:37:02.569221 containerd[1642]: time="2026-01-14T06:37:02.569147878Z" level=info msg="CreateContainer within sandbox \"d70cb021f80d940f8d88363255df80691457bfd2307bba1d2eba743be5f42ce0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a24d8d0870cec577fb8f5355dccc34a92ba92e662d26925c55c919b7767667c1\"" Jan 14 06:37:02.572033 containerd[1642]: time="2026-01-14T06:37:02.571919816Z" level=info msg="StartContainer for \"a24d8d0870cec577fb8f5355dccc34a92ba92e662d26925c55c919b7767667c1\"" Jan 14 06:37:02.576677 containerd[1642]: 
time="2026-01-14T06:37:02.576527467Z" level=info msg="connecting to shim a24d8d0870cec577fb8f5355dccc34a92ba92e662d26925c55c919b7767667c1" address="unix:///run/containerd/s/ad890cb0c7f60d02a3a0fe16c3efb55d73a542d4800daae0639cac8535258b9b" protocol=ttrpc version=3 Jan 14 06:37:02.578553 systemd[1]: Started cri-containerd-5ef1bc1cac1d9afdae43a0b43611db3787c5e1dbb678b7a494d14970bd6940c4.scope - libcontainer container 5ef1bc1cac1d9afdae43a0b43611db3787c5e1dbb678b7a494d14970bd6940c4. Jan 14 06:37:02.618472 kubelet[2584]: W0114 06:37:02.613357 2584 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.41.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.41.14:6443: connect: connection refused Jan 14 06:37:02.618472 kubelet[2584]: E0114 06:37:02.617584 2584 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.41.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.41.14:6443: connect: connection refused" logger="UnhandledError" Jan 14 06:37:02.620826 systemd[1]: Started cri-containerd-a24d8d0870cec577fb8f5355dccc34a92ba92e662d26925c55c919b7767667c1.scope - libcontainer container a24d8d0870cec577fb8f5355dccc34a92ba92e662d26925c55c919b7767667c1. Jan 14 06:37:02.633000 audit: BPF prog-id=101 op=LOAD Jan 14 06:37:02.635000 audit: BPF prog-id=102 op=LOAD Jan 14 06:37:02.635000 audit[2753]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2638 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.635000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565663162633163616331643961666461653433613062343336313164 Jan 14 06:37:02.637000 audit: BPF prog-id=102 op=UNLOAD Jan 14 06:37:02.637000 audit[2753]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565663162633163616331643961666461653433613062343336313164 Jan 14 06:37:02.638000 audit: BPF prog-id=103 op=LOAD Jan 14 06:37:02.638000 audit[2753]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2638 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565663162633163616331643961666461653433613062343336313164 Jan 14 06:37:02.638000 
audit: BPF prog-id=104 op=LOAD Jan 14 06:37:02.638000 audit[2753]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2638 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.638000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565663162633163616331643961666461653433613062343336313164 Jan 14 06:37:02.639000 audit: BPF prog-id=104 op=UNLOAD Jan 14 06:37:02.639000 audit[2753]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565663162633163616331643961666461653433613062343336313164 Jan 14 06:37:02.639000 audit: BPF prog-id=103 op=UNLOAD Jan 14 06:37:02.639000 audit[2753]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2638 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565663162633163616331643961666461653433613062343336313164 Jan 14 06:37:02.639000 audit: BPF prog-id=105 op=LOAD Jan 14 06:37:02.639000 audit[2753]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2638 pid=2753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.639000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3565663162633163616331643961666461653433613062343336313164 Jan 14 06:37:02.643564 systemd[1]: Started cri-containerd-af21c4cf42143971c7afcc6873e58571f646f866cdc3bf7581c91ec469025927.scope - libcontainer container af21c4cf42143971c7afcc6873e58571f646f866cdc3bf7581c91ec469025927. 
Jan 14 06:37:02.697000 audit: BPF prog-id=106 op=LOAD Jan 14 06:37:02.701000 audit: BPF prog-id=107 op=LOAD Jan 14 06:37:02.701000 audit[2767]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2641 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132346438643038373063656335373766623866353335356463636333 Jan 14 06:37:02.701000 audit: BPF prog-id=107 op=UNLOAD Jan 14 06:37:02.701000 audit[2767]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2641 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132346438643038373063656335373766623866353335356463636333 Jan 14 06:37:02.702000 audit: BPF prog-id=108 op=LOAD Jan 14 06:37:02.703000 audit: BPF prog-id=109 op=LOAD Jan 14 06:37:02.702000 audit[2767]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2641 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.702000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132346438643038373063656335373766623866353335356463636333 Jan 14 06:37:02.704000 audit: BPF prog-id=110 op=LOAD Jan 14 06:37:02.704000 audit[2767]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2641 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132346438643038373063656335373766623866353335356463636333 Jan 14 06:37:02.704000 audit: BPF prog-id=110 op=UNLOAD Jan 14 06:37:02.704000 audit[2767]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2641 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132346438643038373063656335373766623866353335356463636333 Jan 14 06:37:02.704000 audit: BPF prog-id=108 
op=UNLOAD Jan 14 06:37:02.704000 audit[2767]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2641 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132346438643038373063656335373766623866353335356463636333 Jan 14 06:37:02.704000 audit: BPF prog-id=111 op=LOAD Jan 14 06:37:02.704000 audit[2767]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2641 pid=2767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132346438643038373063656335373766623866353335356463636333 Jan 14 06:37:02.711000 audit: BPF prog-id=112 op=LOAD Jan 14 06:37:02.711000 audit[2766]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2644 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323163346366343231343339373163376166636336383733653538 Jan 14 06:37:02.712000 audit: BPF prog-id=112 op=UNLOAD Jan 14 06:37:02.712000 audit[2766]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2644 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323163346366343231343339373163376166636336383733653538 Jan 14 06:37:02.715000 audit: BPF prog-id=113 op=LOAD Jan 14 06:37:02.715000 audit[2766]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2644 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323163346366343231343339373163376166636336383733653538 Jan 14 06:37:02.715000 audit: BPF prog-id=114 op=LOAD Jan 14 06:37:02.715000 audit[2766]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2644 pid=2766 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323163346366343231343339373163376166636336383733653538 Jan 14 06:37:02.719000 audit: BPF prog-id=114 op=UNLOAD Jan 14 06:37:02.719000 audit[2766]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2644 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323163346366343231343339373163376166636336383733653538 Jan 14 06:37:02.719000 audit: BPF prog-id=113 op=UNLOAD Jan 14 06:37:02.719000 audit[2766]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2644 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323163346366343231343339373163376166636336383733653538 Jan 14 06:37:02.719000 audit: BPF prog-id=115 op=LOAD Jan 14 06:37:02.719000 audit[2766]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2644 pid=2766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:02.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323163346366343231343339373163376166636336383733653538 Jan 14 06:37:02.746080 containerd[1642]: time="2026-01-14T06:37:02.746021171Z" level=info msg="StartContainer for \"5ef1bc1cac1d9afdae43a0b43611db3787c5e1dbb678b7a494d14970bd6940c4\" returns successfully" Jan 14 06:37:02.818867 containerd[1642]: time="2026-01-14T06:37:02.818796358Z" level=info msg="StartContainer for \"a24d8d0870cec577fb8f5355dccc34a92ba92e662d26925c55c919b7767667c1\" returns successfully" Jan 14 06:37:02.826957 containerd[1642]: time="2026-01-14T06:37:02.826749384Z" level=info msg="StartContainer for \"af21c4cf42143971c7afcc6873e58571f646f866cdc3bf7581c91ec469025927\" returns successfully" Jan 14 06:37:02.842095 kubelet[2584]: W0114 06:37:02.841935 2584 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.41.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-2u6n8.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.41.14:6443: connect: connection refused Jan 14 06:37:02.842594 kubelet[2584]: E0114 06:37:02.842460 2584 reflector.go:166] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.41.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-2u6n8.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.41.14:6443: connect: connection refused" logger="UnhandledError" Jan 14 06:37:02.957121 kubelet[2584]: E0114 06:37:02.956860 2584 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.41.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-2u6n8.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.41.14:6443: connect: connection refused" interval="1.6s" Jan 14 06:37:03.151575 kubelet[2584]: W0114 06:37:03.151306 2584 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.41.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.41.14:6443: connect: connection refused Jan 14 06:37:03.151575 kubelet[2584]: E0114 06:37:03.151424 2584 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.41.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.41.14:6443: connect: connection refused" logger="UnhandledError" Jan 14 06:37:03.168572 kubelet[2584]: I0114 06:37:03.168524 2584 kubelet_node_status.go:75] "Attempting to register node" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:03.169078 kubelet[2584]: E0114 06:37:03.169043 2584 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.41.14:6443/api/v1/nodes\": dial tcp 10.230.41.14:6443: connect: connection refused" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:03.636207 kubelet[2584]: E0114 06:37:03.636153 2584 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2u6n8.gb1.brightbox.com\" not found" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:03.641010 kubelet[2584]: E0114 06:37:03.640748 2584 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2u6n8.gb1.brightbox.com\" not found" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:03.647172 kubelet[2584]: E0114 06:37:03.647139 2584 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2u6n8.gb1.brightbox.com\" not found" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:04.657298 kubelet[2584]: E0114 06:37:04.654954 2584 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2u6n8.gb1.brightbox.com\" not found" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:04.659069 kubelet[2584]: E0114 06:37:04.658761 2584 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2u6n8.gb1.brightbox.com\" not found" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:04.659609 kubelet[2584]: E0114 06:37:04.659434 2584 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2u6n8.gb1.brightbox.com\" not found" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:04.774029 kubelet[2584]: I0114 06:37:04.773981 2584 kubelet_node_status.go:75] "Attempting to register node" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:05.653936 kubelet[2584]: E0114 06:37:05.653421 2584 kubelet.go:3190] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2u6n8.gb1.brightbox.com\" not found" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:05.654298 kubelet[2584]: E0114 06:37:05.654205 2584 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-2u6n8.gb1.brightbox.com\" not found" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:05.850901 kubelet[2584]: E0114 06:37:05.850836 2584 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-2u6n8.gb1.brightbox.com\" not found" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:05.856968 kubelet[2584]: E0114 06:37:05.856497 2584 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-2u6n8.gb1.brightbox.com.188a8587afa14fc9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-2u6n8.gb1.brightbox.com,UID:srv-2u6n8.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-2u6n8.gb1.brightbox.com,},FirstTimestamp:2026-01-14 06:37:01.510004681 +0000 UTC m=+0.733159962,LastTimestamp:2026-01-14 06:37:01.510004681 +0000 UTC m=+0.733159962,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-2u6n8.gb1.brightbox.com,}" Jan 14 06:37:05.908699 kubelet[2584]: I0114 06:37:05.907917 2584 kubelet_node_status.go:78] "Successfully registered node" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:05.924761 kubelet[2584]: E0114 06:37:05.924637 2584 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-2u6n8.gb1.brightbox.com.188a8587b3ee2605 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-2u6n8.gb1.brightbox.com,UID:srv-2u6n8.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:srv-2u6n8.gb1.brightbox.com,},FirstTimestamp:2026-01-14 06:37:01.582149125 +0000 UTC m=+0.805304420,LastTimestamp:2026-01-14 06:37:01.582149125 +0000 UTC m=+0.805304420,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-2u6n8.gb1.brightbox.com,}" Jan 14 06:37:05.950436 kubelet[2584]: I0114 06:37:05.950091 2584 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:05.985641 kubelet[2584]: E0114 06:37:05.985251 2584 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-2u6n8.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:05.985641 kubelet[2584]: I0114 06:37:05.985313 2584 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:05.989917 kubelet[2584]: E0114 06:37:05.989890 2584 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-2u6n8.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:05.990252 kubelet[2584]: I0114 06:37:05.990097 2584 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:05.993290 kubelet[2584]: E0114 06:37:05.993243 2584 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-2u6n8.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:06.500695 kubelet[2584]: I0114 06:37:06.500589 2584 apiserver.go:52] "Watching apiserver" Jan 14 06:37:06.555118 kubelet[2584]: I0114 06:37:06.554996 2584 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 06:37:08.277070 systemd[1]: Reload requested from client PID 2854 ('systemctl') (unit session-12.scope)... Jan 14 06:37:08.277102 systemd[1]: Reloading... Jan 14 06:37:08.492328 zram_generator::config[2898]: No configuration found. Jan 14 06:37:08.910333 systemd[1]: Reloading finished in 632 ms. Jan 14 06:37:08.945068 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:37:08.963983 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 06:37:08.964612 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:37:08.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:37:08.966532 kernel: kauditd_printk_skb: 204 callbacks suppressed Jan 14 06:37:08.966618 kernel: audit: type=1131 audit(1768372628.963:398): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:37:08.971168 systemd[1]: kubelet.service: Consumed 1.427s CPU time, 129.2M memory peak. Jan 14 06:37:08.976000 audit: BPF prog-id=116 op=LOAD Jan 14 06:37:08.976267 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 06:37:08.979333 kernel: audit: type=1334 audit(1768372628.976:399): prog-id=116 op=LOAD Jan 14 06:37:08.976000 audit: BPF prog-id=85 op=UNLOAD Jan 14 06:37:08.982341 kernel: audit: type=1334 audit(1768372628.976:400): prog-id=85 op=UNLOAD Jan 14 06:37:08.986070 kernel: audit: type=1334 audit(1768372628.978:401): prog-id=117 op=LOAD Jan 14 06:37:08.986148 kernel: audit: type=1334 audit(1768372628.978:402): prog-id=71 op=UNLOAD Jan 14 06:37:08.978000 audit: BPF prog-id=117 op=LOAD Jan 14 06:37:08.978000 audit: BPF prog-id=71 op=UNLOAD Jan 14 06:37:08.979000 audit: BPF prog-id=118 op=LOAD Jan 14 06:37:08.993036 kernel: audit: type=1334 audit(1768372628.979:403): prog-id=118 op=LOAD Jan 14 06:37:08.993106 kernel: audit: type=1334 audit(1768372628.979:404): prog-id=72 op=UNLOAD Jan 14 06:37:08.979000 audit: BPF prog-id=72 op=UNLOAD Jan 14 06:37:08.994604 kernel: audit: type=1334 audit(1768372628.980:405): prog-id=119 op=LOAD Jan 14 06:37:08.980000 audit: BPF prog-id=119 op=LOAD Jan 14 06:37:08.980000 audit: BPF prog-id=82 op=UNLOAD Jan 14 06:37:08.996208 kernel: audit: type=1334 audit(1768372628.980:406): prog-id=82 op=UNLOAD Jan 14 06:37:08.996264 kernel: audit: type=1334 audit(1768372628.980:407): prog-id=120 op=LOAD Jan 14 06:37:08.980000 audit: BPF prog-id=120 op=LOAD Jan 14 06:37:08.980000 audit: BPF prog-id=121 op=LOAD Jan 14 06:37:08.981000 audit: BPF prog-id=83 op=UNLOAD Jan 14 06:37:08.981000 audit: BPF prog-id=84 op=UNLOAD Jan 14 06:37:08.982000 audit: BPF prog-id=122 op=LOAD Jan 14 06:37:08.982000 audit: BPF prog-id=68 op=UNLOAD Jan 14 06:37:08.983000 audit: BPF prog-id=123 op=LOAD Jan 14 06:37:08.983000 audit: BPF prog-id=124 op=LOAD Jan 14 06:37:08.983000 audit: BPF prog-id=69 op=UNLOAD Jan 14 06:37:08.983000 audit: BPF prog-id=70 op=UNLOAD Jan 14 06:37:08.984000 audit: BPF prog-id=125 op=LOAD Jan 14 06:37:08.984000 audit: BPF prog-id=73 op=UNLOAD Jan 14 06:37:08.984000 audit: BPF prog-id=126 op=LOAD Jan 14 06:37:08.984000 audit: BPF prog-id=127 op=LOAD Jan 14 06:37:08.984000 audit: BPF prog-id=74 op=UNLOAD Jan 14 06:37:08.984000 audit: BPF prog-id=75 op=UNLOAD Jan 14 06:37:08.987000 audit: BPF prog-id=128 op=LOAD Jan 14 06:37:08.987000 audit: BPF prog-id=65 op=UNLOAD Jan 14 06:37:08.988000 audit: BPF prog-id=129 op=LOAD Jan 14 06:37:08.988000 audit: BPF prog-id=76 op=UNLOAD Jan 14 06:37:08.988000 audit: BPF prog-id=130 op=LOAD Jan 14 06:37:08.988000 audit: BPF prog-id=131 op=LOAD Jan 14 06:37:08.988000 audit: BPF prog-id=77 op=UNLOAD Jan 14 06:37:08.988000 audit: BPF prog-id=78 op=UNLOAD Jan 14 06:37:08.991000 audit: BPF prog-id=132 op=LOAD Jan 14 06:37:08.996000 audit: BPF prog-id=79 op=UNLOAD Jan 14 06:37:08.996000 audit: BPF prog-id=133 op=LOAD Jan 14 06:37:08.996000 audit: BPF prog-id=134 op=LOAD Jan 14 06:37:08.996000 audit: BPF prog-id=80 op=UNLOAD Jan 14 06:37:08.996000 audit: BPF prog-id=81 op=UNLOAD Jan 14 06:37:08.997000 audit: BPF prog-id=135 op=LOAD Jan 14 06:37:08.997000 audit: BPF prog-id=136 op=LOAD Jan 14 06:37:08.999000 audit: BPF prog-id=66 op=UNLOAD Jan 14 06:37:08.999000 audit: BPF prog-id=67 op=UNLOAD Jan 14 06:37:09.351670 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:37:09.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:37:09.365308 (kubelet)[2966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 06:37:09.485476 kubelet[2966]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 06:37:09.485476 kubelet[2966]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 06:37:09.485476 kubelet[2966]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 06:37:09.485476 kubelet[2966]: I0114 06:37:09.485356 2966 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 06:37:09.497849 kubelet[2966]: I0114 06:37:09.497581 2966 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 06:37:09.497849 kubelet[2966]: I0114 06:37:09.497616 2966 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 06:37:09.499513 kubelet[2966]: I0114 06:37:09.498982 2966 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 06:37:09.507090 kubelet[2966]: I0114 06:37:09.506944 2966 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 14 06:37:09.522928 kubelet[2966]: I0114 06:37:09.522828 2966 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 06:37:09.537366 kubelet[2966]: I0114 06:37:09.536479 2966 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 06:37:09.545642 kubelet[2966]: I0114 06:37:09.545454 2966 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 06:37:09.549874 kubelet[2966]: I0114 06:37:09.549296 2966 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 06:37:09.549874 kubelet[2966]: I0114 06:37:09.549365 2966 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-2u6n8.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 06:37:09.550813 kubelet[2966]: I0114 06:37:09.550211 2966 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 06:37:09.550813 kubelet[2966]: I0114 06:37:09.550241 2966 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 06:37:09.552296 kubelet[2966]: I0114 06:37:09.551516 2966 state_mem.go:36] "Initialized new in-memory state store" Jan 14 06:37:09.552296 kubelet[2966]: I0114 06:37:09.551798 2966 kubelet.go:446] "Attempting to sync node with API server" Jan 14 06:37:09.554142 kubelet[2966]: I0114 06:37:09.554110 2966 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 06:37:09.554215 kubelet[2966]: I0114 06:37:09.554173 2966 kubelet.go:352] "Adding apiserver pod source" Jan 14 06:37:09.554215 kubelet[2966]: I0114 06:37:09.554193 2966 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 06:37:09.562208 kubelet[2966]: I0114 06:37:09.562179 2966 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 06:37:09.565237 kubelet[2966]: I0114 06:37:09.564026 2966 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 06:37:09.577934 kubelet[2966]: I0114 06:37:09.577686 2966 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 06:37:09.577934 kubelet[2966]: I0114 06:37:09.577779 2966 server.go:1287] "Started kubelet" Jan 14 06:37:09.585688 kubelet[2966]: I0114 06:37:09.585352 2966 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 06:37:09.592472 kubelet[2966]: I0114 06:37:09.591412 2966 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 06:37:09.598301 kubelet[2966]: I0114 06:37:09.596466 2966 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 06:37:09.598301 kubelet[2966]: I0114 06:37:09.597368 2966 server.go:479] "Adding debug handlers to kubelet server" Jan 14 06:37:09.611157 kubelet[2966]: I0114 06:37:09.608689 2966 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 06:37:09.611157 kubelet[2966]: I0114 06:37:09.609597 2966 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 06:37:09.613261 kubelet[2966]: I0114 06:37:09.611139 2966 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 06:37:09.616033 kubelet[2966]: I0114 06:37:09.615721 2966 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 06:37:09.616250 kubelet[2966]: I0114 06:37:09.616225 2966 reconciler.go:26] "Reconciler: start to sync state" Jan 14 06:37:09.628350 kubelet[2966]: E0114 06:37:09.626809 2966 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 06:37:09.629993 kubelet[2966]: I0114 06:37:09.629964 2966 factory.go:221] Registration of the systemd container factory successfully Jan 14 06:37:09.630152 kubelet[2966]: I0114 06:37:09.630119 2966 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 06:37:09.637935 kubelet[2966]: I0114 06:37:09.636868 2966 factory.go:221] Registration of the containerd container factory successfully Jan 14 06:37:09.659205 kubelet[2966]: I0114 06:37:09.659146 2966 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 06:37:09.663791 kubelet[2966]: I0114 06:37:09.663335 2966 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 06:37:09.663791 kubelet[2966]: I0114 06:37:09.663375 2966 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 06:37:09.663791 kubelet[2966]: I0114 06:37:09.663408 2966 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
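The container_manager_linux.go entry a few lines above dumps the kubelet NodeConfig as plain JSON, so the hard-eviction thresholds can be read straight out of it. A minimal sketch, assuming that JSON fragment has been copied into a hypothetical file nodeconfig.json:

import json

# Minimal sketch: list the HardEvictionThresholds from the NodeConfig JSON
# logged above. "nodeconfig.json" is a hypothetical file holding that fragment.
with open("nodeconfig.json") as fh:
    cfg = json.load(fh)
for t in cfg["HardEvictionThresholds"]:
    value = t["Value"]["Quantity"] or f'{t["Value"]["Percentage"]:.0%}'
    print(f'{t["Signal"]} {t["Operator"]} {value}')
# e.g. memory.available LessThan 100Mi, nodefs.available LessThan 10%
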
Jan 14 06:37:09.663791 kubelet[2966]: I0114 06:37:09.663421 2966 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 06:37:09.663791 kubelet[2966]: E0114 06:37:09.663491 2966 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 06:37:09.770747 kubelet[2966]: E0114 06:37:09.770419 2966 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 06:37:09.802730 kubelet[2966]: I0114 06:37:09.802671 2966 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 06:37:09.802730 kubelet[2966]: I0114 06:37:09.802712 2966 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 06:37:09.802730 kubelet[2966]: I0114 06:37:09.802752 2966 state_mem.go:36] "Initialized new in-memory state store" Jan 14 06:37:09.804236 kubelet[2966]: I0114 06:37:09.803043 2966 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 06:37:09.804236 kubelet[2966]: I0114 06:37:09.803063 2966 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 06:37:09.804236 kubelet[2966]: I0114 06:37:09.803113 2966 policy_none.go:49] "None policy: Start" Jan 14 06:37:09.804236 kubelet[2966]: I0114 06:37:09.803145 2966 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 06:37:09.804236 kubelet[2966]: I0114 06:37:09.803174 2966 state_mem.go:35] "Initializing new in-memory state store" Jan 14 06:37:09.804236 kubelet[2966]: I0114 06:37:09.803546 2966 state_mem.go:75] "Updated machine memory state" Jan 14 06:37:09.823717 kubelet[2966]: I0114 06:37:09.821493 2966 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 06:37:09.823717 kubelet[2966]: I0114 06:37:09.821803 2966 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 06:37:09.823717 kubelet[2966]: I0114 06:37:09.821821 2966 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 06:37:09.823717 kubelet[2966]: I0114 06:37:09.823080 2966 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 06:37:09.835326 kubelet[2966]: E0114 06:37:09.834793 2966 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 06:37:09.952489 kubelet[2966]: I0114 06:37:09.951853 2966 kubelet_node_status.go:75] "Attempting to register node" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:09.966934 kubelet[2966]: I0114 06:37:09.966649 2966 kubelet_node_status.go:124] "Node was previously registered" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:09.967663 kubelet[2966]: I0114 06:37:09.967644 2966 kubelet_node_status.go:78] "Successfully registered node" node="srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:09.972152 kubelet[2966]: I0114 06:37:09.971959 2966 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:09.977503 kubelet[2966]: I0114 06:37:09.977460 2966 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:09.979219 kubelet[2966]: I0114 06:37:09.978065 2966 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:09.996912 kubelet[2966]: W0114 06:37:09.996533 2966 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 06:37:09.998898 kubelet[2966]: W0114 06:37:09.998870 2966 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 06:37:10.004318 kubelet[2966]: W0114 06:37:10.004244 2966 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 06:37:10.020566 kubelet[2966]: I0114 06:37:10.020506 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/093e3e0f4d06660d5eb6b129b8638b40-ca-certs\") pod \"kube-apiserver-srv-2u6n8.gb1.brightbox.com\" (UID: \"093e3e0f4d06660d5eb6b129b8638b40\") " pod="kube-system/kube-apiserver-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:10.020566 kubelet[2966]: I0114 06:37:10.020565 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/093e3e0f4d06660d5eb6b129b8638b40-k8s-certs\") pod \"kube-apiserver-srv-2u6n8.gb1.brightbox.com\" (UID: \"093e3e0f4d06660d5eb6b129b8638b40\") " pod="kube-system/kube-apiserver-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:10.020814 kubelet[2966]: I0114 06:37:10.020595 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/093e3e0f4d06660d5eb6b129b8638b40-usr-share-ca-certificates\") pod \"kube-apiserver-srv-2u6n8.gb1.brightbox.com\" (UID: \"093e3e0f4d06660d5eb6b129b8638b40\") " pod="kube-system/kube-apiserver-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:10.020814 kubelet[2966]: I0114 06:37:10.020629 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62753937c92d99b736b4039cd801fd05-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-2u6n8.gb1.brightbox.com\" (UID: \"62753937c92d99b736b4039cd801fd05\") " pod="kube-system/kube-controller-manager-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:10.020814 kubelet[2966]: I0114 
06:37:10.020660 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/105be9a305f327d3eb095dc7ff144413-kubeconfig\") pod \"kube-scheduler-srv-2u6n8.gb1.brightbox.com\" (UID: \"105be9a305f327d3eb095dc7ff144413\") " pod="kube-system/kube-scheduler-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:10.020814 kubelet[2966]: I0114 06:37:10.020684 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62753937c92d99b736b4039cd801fd05-ca-certs\") pod \"kube-controller-manager-srv-2u6n8.gb1.brightbox.com\" (UID: \"62753937c92d99b736b4039cd801fd05\") " pod="kube-system/kube-controller-manager-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:10.020814 kubelet[2966]: I0114 06:37:10.020714 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/62753937c92d99b736b4039cd801fd05-flexvolume-dir\") pod \"kube-controller-manager-srv-2u6n8.gb1.brightbox.com\" (UID: \"62753937c92d99b736b4039cd801fd05\") " pod="kube-system/kube-controller-manager-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:10.021078 kubelet[2966]: I0114 06:37:10.020762 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62753937c92d99b736b4039cd801fd05-k8s-certs\") pod \"kube-controller-manager-srv-2u6n8.gb1.brightbox.com\" (UID: \"62753937c92d99b736b4039cd801fd05\") " pod="kube-system/kube-controller-manager-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:10.021078 kubelet[2966]: I0114 06:37:10.020794 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/62753937c92d99b736b4039cd801fd05-kubeconfig\") pod \"kube-controller-manager-srv-2u6n8.gb1.brightbox.com\" (UID: \"62753937c92d99b736b4039cd801fd05\") " pod="kube-system/kube-controller-manager-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:10.556448 kubelet[2966]: I0114 06:37:10.556044 2966 apiserver.go:52] "Watching apiserver" Jan 14 06:37:10.617012 kubelet[2966]: I0114 06:37:10.616898 2966 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 06:37:10.741819 kubelet[2966]: I0114 06:37:10.741739 2966 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:10.759605 kubelet[2966]: W0114 06:37:10.759568 2966 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 14 06:37:10.759815 kubelet[2966]: E0114 06:37:10.759666 2966 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-2u6n8.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-2u6n8.gb1.brightbox.com" Jan 14 06:37:10.790228 kubelet[2966]: I0114 06:37:10.789754 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-2u6n8.gb1.brightbox.com" podStartSLOduration=1.789558828 podStartE2EDuration="1.789558828s" podCreationTimestamp="2026-01-14 06:37:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 06:37:10.789484255 +0000 UTC m=+1.394523367" 
watchObservedRunningTime="2026-01-14 06:37:10.789558828 +0000 UTC m=+1.394597909" Jan 14 06:37:10.824901 kubelet[2966]: I0114 06:37:10.824140 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-2u6n8.gb1.brightbox.com" podStartSLOduration=1.824104961 podStartE2EDuration="1.824104961s" podCreationTimestamp="2026-01-14 06:37:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 06:37:10.822732557 +0000 UTC m=+1.427771666" watchObservedRunningTime="2026-01-14 06:37:10.824104961 +0000 UTC m=+1.429144046" Jan 14 06:37:10.941865 kubelet[2966]: I0114 06:37:10.941774 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-2u6n8.gb1.brightbox.com" podStartSLOduration=1.9417511109999999 podStartE2EDuration="1.941751111s" podCreationTimestamp="2026-01-14 06:37:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 06:37:10.883117083 +0000 UTC m=+1.488156177" watchObservedRunningTime="2026-01-14 06:37:10.941751111 +0000 UTC m=+1.546790205" Jan 14 06:37:12.968257 kubelet[2966]: I0114 06:37:12.968159 2966 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 06:37:12.969979 containerd[1642]: time="2026-01-14T06:37:12.969762195Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 06:37:12.971183 kubelet[2966]: I0114 06:37:12.970564 2966 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 06:37:13.712586 systemd[1]: Created slice kubepods-besteffort-podf530a086_42c1_4d57_8eef_05dc30785496.slice - libcontainer container kubepods-besteffort-podf530a086_42c1_4d57_8eef_05dc30785496.slice. 
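The systemd entry just above creates the besteffort slice for the kube-proxy pod, and the volume entries that follow show the same pod UID in its dashed form. A minimal sketch of the apparent naming rule, inferred only from these log lines rather than from any documented interface:

# Sketch of the slice naming seen above: the pod UID's dashes become
# underscores inside kubepods-besteffort-pod<uid>.slice. This rule is an
# assumption inferred from this log, not a documented API.
def besteffort_slice(pod_uid: str) -> str:
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

print(besteffort_slice("f530a086-42c1-4d57-8eef-05dc30785496"))
# kubepods-besteffort-podf530a086_42c1_4d57_8eef_05dc30785496.slice
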
Jan 14 06:37:13.744350 kubelet[2966]: I0114 06:37:13.744291 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f530a086-42c1-4d57-8eef-05dc30785496-xtables-lock\") pod \"kube-proxy-fgx7g\" (UID: \"f530a086-42c1-4d57-8eef-05dc30785496\") " pod="kube-system/kube-proxy-fgx7g" Jan 14 06:37:13.744350 kubelet[2966]: I0114 06:37:13.744349 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f530a086-42c1-4d57-8eef-05dc30785496-kube-proxy\") pod \"kube-proxy-fgx7g\" (UID: \"f530a086-42c1-4d57-8eef-05dc30785496\") " pod="kube-system/kube-proxy-fgx7g" Jan 14 06:37:13.744725 kubelet[2966]: I0114 06:37:13.744385 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f530a086-42c1-4d57-8eef-05dc30785496-lib-modules\") pod \"kube-proxy-fgx7g\" (UID: \"f530a086-42c1-4d57-8eef-05dc30785496\") " pod="kube-system/kube-proxy-fgx7g" Jan 14 06:37:13.744725 kubelet[2966]: I0114 06:37:13.744410 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlzf4\" (UniqueName: \"kubernetes.io/projected/f530a086-42c1-4d57-8eef-05dc30785496-kube-api-access-wlzf4\") pod \"kube-proxy-fgx7g\" (UID: \"f530a086-42c1-4d57-8eef-05dc30785496\") " pod="kube-system/kube-proxy-fgx7g" Jan 14 06:37:14.026808 containerd[1642]: time="2026-01-14T06:37:14.026611119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fgx7g,Uid:f530a086-42c1-4d57-8eef-05dc30785496,Namespace:kube-system,Attempt:0,}" Jan 14 06:37:14.099323 containerd[1642]: time="2026-01-14T06:37:14.098236816Z" level=info msg="connecting to shim d3c7c52e5d7d9668d84cdc662d53c8fcabca80407fe3ea65b343333cea7cf7c2" address="unix:///run/containerd/s/00d7edcdda0ce2b9356deac971bc2982d180f7b53cf95c9b9491371bc24f1d3d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:37:14.193597 systemd[1]: Started cri-containerd-d3c7c52e5d7d9668d84cdc662d53c8fcabca80407fe3ea65b343333cea7cf7c2.scope - libcontainer container d3c7c52e5d7d9668d84cdc662d53c8fcabca80407fe3ea65b343333cea7cf7c2. Jan 14 06:37:14.211404 systemd[1]: Created slice kubepods-besteffort-pod3d66535c_41f9_4d11_afb4_4e76e9c7247c.slice - libcontainer container kubepods-besteffort-pod3d66535c_41f9_4d11_afb4_4e76e9c7247c.slice. 
Jan 14 06:37:14.247984 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 14 06:37:14.248219 kernel: audit: type=1334 audit(1768372634.238:442): prog-id=137 op=LOAD Jan 14 06:37:14.238000 audit: BPF prog-id=137 op=LOAD Jan 14 06:37:14.248000 audit: BPF prog-id=138 op=LOAD Jan 14 06:37:14.250545 kubelet[2966]: I0114 06:37:14.250461 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3d66535c-41f9-4d11-afb4-4e76e9c7247c-var-lib-calico\") pod \"tigera-operator-7dcd859c48-g8jhv\" (UID: \"3d66535c-41f9-4d11-afb4-4e76e9c7247c\") " pod="tigera-operator/tigera-operator-7dcd859c48-g8jhv" Jan 14 06:37:14.248000 audit[3034]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3022 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.251749 kubelet[2966]: I0114 06:37:14.251678 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6ns5\" (UniqueName: \"kubernetes.io/projected/3d66535c-41f9-4d11-afb4-4e76e9c7247c-kube-api-access-h6ns5\") pod \"tigera-operator-7dcd859c48-g8jhv\" (UID: \"3d66535c-41f9-4d11-afb4-4e76e9c7247c\") " pod="tigera-operator/tigera-operator-7dcd859c48-g8jhv" Jan 14 06:37:14.253397 kernel: audit: type=1334 audit(1768372634.248:443): prog-id=138 op=LOAD Jan 14 06:37:14.253489 kernel: audit: type=1300 audit(1768372634.248:443): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=3022 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433633763353265356437643936363864383463646336363264353363 Jan 14 06:37:14.257944 kernel: audit: type=1327 audit(1768372634.248:443): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433633763353265356437643936363864383463646336363264353363 Jan 14 06:37:14.248000 audit: BPF prog-id=138 op=UNLOAD Jan 14 06:37:14.261727 kernel: audit: type=1334 audit(1768372634.248:444): prog-id=138 op=UNLOAD Jan 14 06:37:14.248000 audit[3034]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.264294 kernel: audit: type=1300 audit(1768372634.248:444): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.269224 kernel: audit: type=1327 audit(1768372634.248:444): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433633763353265356437643936363864383463646336363264353363 Jan 14 06:37:14.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433633763353265356437643936363864383463646336363264353363 Jan 14 06:37:14.248000 audit: BPF prog-id=139 op=LOAD Jan 14 06:37:14.275299 kernel: audit: type=1334 audit(1768372634.248:445): prog-id=139 op=LOAD Jan 14 06:37:14.248000 audit[3034]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3022 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.283394 kernel: audit: type=1300 audit(1768372634.248:445): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3022 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433633763353265356437643936363864383463646336363264353363 Jan 14 06:37:14.248000 audit: BPF prog-id=140 op=LOAD Jan 14 06:37:14.248000 audit[3034]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3022 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433633763353265356437643936363864383463646336363264353363 Jan 14 06:37:14.248000 audit: BPF prog-id=140 op=UNLOAD Jan 14 06:37:14.248000 audit[3034]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433633763353265356437643936363864383463646336363264353363 Jan 14 06:37:14.289406 kernel: audit: type=1327 audit(1768372634.248:445): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433633763353265356437643936363864383463646336363264353363 Jan 14 06:37:14.248000 audit: BPF prog-id=139 op=UNLOAD Jan 14 06:37:14.248000 audit[3034]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 
a0=15 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.248000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433633763353265356437643936363864383463646336363264353363 Jan 14 06:37:14.252000 audit: BPF prog-id=141 op=LOAD Jan 14 06:37:14.252000 audit[3034]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3022 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.252000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433633763353265356437643936363864383463646336363264353363 Jan 14 06:37:14.303995 containerd[1642]: time="2026-01-14T06:37:14.303897302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fgx7g,Uid:f530a086-42c1-4d57-8eef-05dc30785496,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3c7c52e5d7d9668d84cdc662d53c8fcabca80407fe3ea65b343333cea7cf7c2\"" Jan 14 06:37:14.310775 containerd[1642]: time="2026-01-14T06:37:14.310716573Z" level=info msg="CreateContainer within sandbox \"d3c7c52e5d7d9668d84cdc662d53c8fcabca80407fe3ea65b343333cea7cf7c2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 06:37:14.328365 containerd[1642]: time="2026-01-14T06:37:14.328310165Z" level=info msg="Container 2fd34eb45965869225675b714f9dc14f7195f43921fd2d6b2dd6c141d06530c5: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:37:14.333565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2137421367.mount: Deactivated successfully. Jan 14 06:37:14.346029 containerd[1642]: time="2026-01-14T06:37:14.345977739Z" level=info msg="CreateContainer within sandbox \"d3c7c52e5d7d9668d84cdc662d53c8fcabca80407fe3ea65b343333cea7cf7c2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2fd34eb45965869225675b714f9dc14f7195f43921fd2d6b2dd6c141d06530c5\"" Jan 14 06:37:14.347957 containerd[1642]: time="2026-01-14T06:37:14.347830916Z" level=info msg="StartContainer for \"2fd34eb45965869225675b714f9dc14f7195f43921fd2d6b2dd6c141d06530c5\"" Jan 14 06:37:14.351156 containerd[1642]: time="2026-01-14T06:37:14.351124962Z" level=info msg="connecting to shim 2fd34eb45965869225675b714f9dc14f7195f43921fd2d6b2dd6c141d06530c5" address="unix:///run/containerd/s/00d7edcdda0ce2b9356deac971bc2982d180f7b53cf95c9b9491371bc24f1d3d" protocol=ttrpc version=3 Jan 14 06:37:14.403610 systemd[1]: Started cri-containerd-2fd34eb45965869225675b714f9dc14f7195f43921fd2d6b2dd6c141d06530c5.scope - libcontainer container 2fd34eb45965869225675b714f9dc14f7195f43921fd2d6b2dd6c141d06530c5. 
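The proctitle= fields in the audit records above and below are hex-encoded command lines, with NUL bytes separating the arguments. A minimal decoding sketch, using a prefix copied verbatim from one of the runc records in this log:

# Minimal sketch: decode an audit PROCTITLE value into argv. The hex prefix
# below is copied from one of the runc audit records in this log.
hex_proctitle = (
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F"
    "6B38732E696F002D2D6C6F67"
)
argv = [part.decode() for part in bytes.fromhex(hex_proctitle).split(b"\x00")]
print(argv)  # ['runc', '--root', '/run/containerd/runc/k8s.io', '--log']
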
Jan 14 06:37:14.502000 audit: BPF prog-id=142 op=LOAD Jan 14 06:37:14.502000 audit[3060]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3022 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643334656234353936353836393232353637356237313466396463 Jan 14 06:37:14.503000 audit: BPF prog-id=143 op=LOAD Jan 14 06:37:14.503000 audit[3060]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3022 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643334656234353936353836393232353637356237313466396463 Jan 14 06:37:14.503000 audit: BPF prog-id=143 op=UNLOAD Jan 14 06:37:14.503000 audit[3060]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643334656234353936353836393232353637356237313466396463 Jan 14 06:37:14.503000 audit: BPF prog-id=142 op=UNLOAD Jan 14 06:37:14.503000 audit[3060]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643334656234353936353836393232353637356237313466396463 Jan 14 06:37:14.503000 audit: BPF prog-id=144 op=LOAD Jan 14 06:37:14.503000 audit[3060]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3022 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.503000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266643334656234353936353836393232353637356237313466396463 Jan 14 06:37:14.519677 containerd[1642]: time="2026-01-14T06:37:14.519616114Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7dcd859c48-g8jhv,Uid:3d66535c-41f9-4d11-afb4-4e76e9c7247c,Namespace:tigera-operator,Attempt:0,}" Jan 14 06:37:14.553811 containerd[1642]: time="2026-01-14T06:37:14.552532614Z" level=info msg="connecting to shim 8359d0390eef30b675a3ec6c358035bd653df786aaa58c2b598ab1b84bc3cf51" address="unix:///run/containerd/s/ce51acf4354c753573965a1f64d12f04841279d821b022b08a71628243b57652" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:37:14.572118 containerd[1642]: time="2026-01-14T06:37:14.571871248Z" level=info msg="StartContainer for \"2fd34eb45965869225675b714f9dc14f7195f43921fd2d6b2dd6c141d06530c5\" returns successfully" Jan 14 06:37:14.621618 systemd[1]: Started cri-containerd-8359d0390eef30b675a3ec6c358035bd653df786aaa58c2b598ab1b84bc3cf51.scope - libcontainer container 8359d0390eef30b675a3ec6c358035bd653df786aaa58c2b598ab1b84bc3cf51. Jan 14 06:37:14.662000 audit: BPF prog-id=145 op=LOAD Jan 14 06:37:14.663000 audit: BPF prog-id=146 op=LOAD Jan 14 06:37:14.663000 audit[3108]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3097 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833353964303339306565663330623637356133656336633335383033 Jan 14 06:37:14.663000 audit: BPF prog-id=146 op=UNLOAD Jan 14 06:37:14.663000 audit[3108]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833353964303339306565663330623637356133656336633335383033 Jan 14 06:37:14.663000 audit: BPF prog-id=147 op=LOAD Jan 14 06:37:14.663000 audit[3108]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3097 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833353964303339306565663330623637356133656336633335383033 Jan 14 06:37:14.664000 audit: BPF prog-id=148 op=LOAD Jan 14 06:37:14.664000 audit[3108]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3097 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.664000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833353964303339306565663330623637356133656336633335383033 Jan 14 06:37:14.664000 audit: BPF prog-id=148 op=UNLOAD Jan 14 06:37:14.664000 audit[3108]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833353964303339306565663330623637356133656336633335383033 Jan 14 06:37:14.664000 audit: BPF prog-id=147 op=UNLOAD Jan 14 06:37:14.664000 audit[3108]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833353964303339306565663330623637356133656336633335383033 Jan 14 06:37:14.664000 audit: BPF prog-id=149 op=LOAD Jan 14 06:37:14.664000 audit[3108]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3097 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:14.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833353964303339306565663330623637356133656336633335383033 Jan 14 06:37:14.755591 containerd[1642]: time="2026-01-14T06:37:14.755376602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-g8jhv,Uid:3d66535c-41f9-4d11-afb4-4e76e9c7247c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8359d0390eef30b675a3ec6c358035bd653df786aaa58c2b598ab1b84bc3cf51\"" Jan 14 06:37:14.763783 containerd[1642]: time="2026-01-14T06:37:14.763695331Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 06:37:14.787721 kubelet[2966]: I0114 06:37:14.787193 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fgx7g" podStartSLOduration=1.787163676 podStartE2EDuration="1.787163676s" podCreationTimestamp="2026-01-14 06:37:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 06:37:14.785846935 +0000 UTC m=+5.390886047" watchObservedRunningTime="2026-01-14 06:37:14.787163676 +0000 UTC m=+5.392202770" Jan 14 06:37:15.129000 audit[3173]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.129000 audit[3173]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9120da30 a2=0 a3=7ffd9120da1c items=0 ppid=3073 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.129000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 06:37:15.132000 audit[3174]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3174 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.132000 audit[3174]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffea7cb6e60 a2=0 a3=7ffea7cb6e4c items=0 ppid=3073 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.132000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 06:37:15.136000 audit[3175]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_chain pid=3175 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.136000 audit[3175]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf57391e0 a2=0 a3=7ffcf57391cc items=0 ppid=3073 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.136000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 06:37:15.137000 audit[3176]: NETFILTER_CFG table=mangle:57 family=10 entries=1 op=nft_register_chain pid=3176 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.137000 audit[3176]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa80f14c0 a2=0 a3=7fffa80f14ac items=0 ppid=3073 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.137000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 06:37:15.140000 audit[3178]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3178 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.140000 audit[3178]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff64b9e200 a2=0 a3=7fff64b9e1ec items=0 ppid=3073 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.140000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 06:37:15.142000 audit[3179]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3179 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.142000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff7c8655f0 a2=0 a3=7fff7c8655dc items=0 ppid=3073 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.142000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 06:37:15.247000 audit[3180]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.247000 audit[3180]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffed2927580 a2=0 a3=7ffed292756c items=0 ppid=3073 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.247000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 06:37:15.266000 audit[3182]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.266000 audit[3182]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff89203f50 a2=0 a3=7fff89203f3c items=0 ppid=3073 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.266000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 06:37:15.272000 audit[3185]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.272000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd30badca0 a2=0 a3=7ffd30badc8c items=0 ppid=3073 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.272000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 06:37:15.274000 audit[3186]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.274000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdfa329900 a2=0 a3=7ffdfa3298ec items=0 ppid=3073 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.274000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 06:37:15.279000 audit[3188]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.279000 audit[3188]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffc963c4130 a2=0 a3=7ffc963c411c items=0 ppid=3073 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.279000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 06:37:15.281000 audit[3189]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3189 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.281000 audit[3189]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf4832380 a2=0 a3=7ffcf483236c items=0 ppid=3073 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.281000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 06:37:15.285000 audit[3191]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.285000 audit[3191]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe0133d870 a2=0 a3=7ffe0133d85c items=0 ppid=3073 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.285000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 06:37:15.291000 audit[3194]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3194 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.291000 audit[3194]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcfa16f470 a2=0 a3=7ffcfa16f45c items=0 ppid=3073 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.291000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 06:37:15.292000 audit[3195]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.292000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffafe4aa80 a2=0 a3=7fffafe4aa6c items=0 ppid=3073 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.292000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 06:37:15.297000 audit[3197]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3197 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.297000 audit[3197]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd19dfb660 a2=0 a3=7ffd19dfb64c items=0 ppid=3073 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.297000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 06:37:15.299000 audit[3198]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.299000 audit[3198]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc22b12180 a2=0 a3=7ffc22b1216c items=0 ppid=3073 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.299000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 06:37:15.303000 audit[3200]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.303000 audit[3200]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffacb832e0 a2=0 a3=7fffacb832cc items=0 ppid=3073 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.303000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 06:37:15.309000 audit[3203]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.309000 audit[3203]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd37f4f660 a2=0 a3=7ffd37f4f64c items=0 ppid=3073 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.309000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 06:37:15.315000 audit[3206]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3206 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.315000 audit[3206]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe94ff1a80 a2=0 a3=7ffe94ff1a6c items=0 ppid=3073 pid=3206 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.315000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 06:37:15.318000 audit[3207]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.318000 audit[3207]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffa9a951b0 a2=0 a3=7fffa9a9519c items=0 ppid=3073 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.318000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 06:37:15.322000 audit[3209]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.322000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe31bf4f10 a2=0 a3=7ffe31bf4efc items=0 ppid=3073 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.322000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 06:37:15.328000 audit[3212]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.328000 audit[3212]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe79b49100 a2=0 a3=7ffe79b490ec items=0 ppid=3073 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.328000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 06:37:15.330000 audit[3213]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:37:15.330000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff9ef064a0 a2=0 a3=7fff9ef0648c items=0 ppid=3073 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.330000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 06:37:15.334000 audit[3215]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3215 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Jan 14 06:37:15.334000 audit[3215]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff8980cfe0 a2=0 a3=7fff8980cfcc items=0 ppid=3073 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.334000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 06:37:15.369000 audit[3221]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3221 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:15.369000 audit[3221]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd524ed900 a2=0 a3=7ffd524ed8ec items=0 ppid=3073 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.369000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:15.379000 audit[3221]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3221 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:15.379000 audit[3221]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd524ed900 a2=0 a3=7ffd524ed8ec items=0 ppid=3073 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.379000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:15.383000 audit[3226]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3226 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.383000 audit[3226]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff70b8d9b0 a2=0 a3=7fff70b8d99c items=0 ppid=3073 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.383000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 06:37:15.387000 audit[3228]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3228 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.387000 audit[3228]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe856f8a30 a2=0 a3=7ffe856f8a1c items=0 ppid=3073 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.387000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 
06:37:15.393000 audit[3231]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3231 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.393000 audit[3231]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd45af46d0 a2=0 a3=7ffd45af46bc items=0 ppid=3073 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.393000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 06:37:15.395000 audit[3232]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.395000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffddd22ec60 a2=0 a3=7ffddd22ec4c items=0 ppid=3073 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.395000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 06:37:15.399000 audit[3234]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3234 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.399000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd634b8c70 a2=0 a3=7ffd634b8c5c items=0 ppid=3073 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.399000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 06:37:15.401000 audit[3235]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3235 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.401000 audit[3235]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1ea29000 a2=0 a3=7fff1ea28fec items=0 ppid=3073 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.401000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 06:37:15.405000 audit[3237]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.405000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff3add2c30 a2=0 a3=7fff3add2c1c items=0 ppid=3073 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.405000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 06:37:15.412000 audit[3240]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3240 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.412000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd3cf9e190 a2=0 a3=7ffd3cf9e17c items=0 ppid=3073 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.412000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 06:37:15.414000 audit[3241]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3241 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.414000 audit[3241]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6c0f18e0 a2=0 a3=7ffe6c0f18cc items=0 ppid=3073 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.414000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 06:37:15.418000 audit[3243]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3243 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.418000 audit[3243]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc799d4ba0 a2=0 a3=7ffc799d4b8c items=0 ppid=3073 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.418000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 06:37:15.420000 audit[3244]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.420000 audit[3244]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc973ea590 a2=0 a3=7ffc973ea57c items=0 ppid=3073 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.420000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 06:37:15.425000 audit[3246]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3246 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.425000 audit[3246]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe87aafc10 a2=0 a3=7ffe87aafbfc 
items=0 ppid=3073 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.425000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 06:37:15.431000 audit[3249]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.431000 audit[3249]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff78259cd0 a2=0 a3=7fff78259cbc items=0 ppid=3073 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.431000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 06:37:15.437000 audit[3252]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3252 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.437000 audit[3252]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc339faed0 a2=0 a3=7ffc339faebc items=0 ppid=3073 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.437000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 06:37:15.439000 audit[3253]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3253 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.439000 audit[3253]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe13d09890 a2=0 a3=7ffe13d0987c items=0 ppid=3073 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.439000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 06:37:15.444000 audit[3255]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3255 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.444000 audit[3255]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd1ae00df0 a2=0 a3=7ffd1ae00ddc items=0 ppid=3073 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.444000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 06:37:15.451000 audit[3258]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3258 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.451000 audit[3258]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffff0e22e70 a2=0 a3=7ffff0e22e5c items=0 ppid=3073 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.451000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 06:37:15.453000 audit[3259]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3259 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.453000 audit[3259]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff72b03f60 a2=0 a3=7fff72b03f4c items=0 ppid=3073 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.453000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 06:37:15.457000 audit[3261]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3261 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.457000 audit[3261]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffe66070ac0 a2=0 a3=7ffe66070aac items=0 ppid=3073 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.457000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 06:37:15.459000 audit[3262]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3262 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.459000 audit[3262]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9886a580 a2=0 a3=7ffc9886a56c items=0 ppid=3073 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.459000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 06:37:15.463000 audit[3264]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3264 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.463000 audit[3264]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff30fb41f0 a2=0 a3=7fff30fb41dc items=0 ppid=3073 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.463000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 06:37:15.469000 audit[3267]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3267 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:37:15.469000 audit[3267]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe6e72d6e0 a2=0 a3=7ffe6e72d6cc items=0 ppid=3073 pid=3267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.469000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 06:37:15.477000 audit[3269]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 06:37:15.477000 audit[3269]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffefb91da70 a2=0 a3=7ffefb91da5c items=0 ppid=3073 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.477000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:15.478000 audit[3269]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 06:37:15.478000 audit[3269]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffefb91da70 a2=0 a3=7ffefb91da5c items=0 ppid=3073 pid=3269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:15.478000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:16.929528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3103405531.mount: Deactivated successfully. 
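The NETFILTER_CFG/SYSCALL/PROCTITLE triples above record kube-proxy (ppid 3073) creating its iptables and ip6tables chains (KUBE-PROXY-CANARY, KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD, KUBE-PROXY-FIREWALL, KUBE-POSTROUTING) plus the jump rules into them. The PROCTITLE field is the invoked command line, hex-encoded with NUL-separated arguments; the same encoding applies to the runc PROCTITLE entries earlier in this log. A minimal Python sketch (illustration only, not part of the log) that decodes the first iptables PROCTITLE value above:

    # Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
    hexstr = ("69707461626C6573002D770035002D5700313030303030"
              "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
    argv = [a.decode() for a in bytes.fromhex(hexstr).split(b"\x00")]
    print(argv)
    # ['iptables', '-w', '5', '-W', '100000', '-N', 'KUBE-PROXY-CANARY', '-t', 'mangle']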
Jan 14 06:37:18.363993 containerd[1642]: time="2026-01-14T06:37:18.363815816Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:37:18.366058 containerd[1642]: time="2026-01-14T06:37:18.366012152Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 14 06:37:18.375477 containerd[1642]: time="2026-01-14T06:37:18.375434242Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:37:18.378667 containerd[1642]: time="2026-01-14T06:37:18.378593616Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:37:18.380123 containerd[1642]: time="2026-01-14T06:37:18.379556750Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.615792436s" Jan 14 06:37:18.380123 containerd[1642]: time="2026-01-14T06:37:18.379648694Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 06:37:18.439605 containerd[1642]: time="2026-01-14T06:37:18.439458204Z" level=info msg="CreateContainer within sandbox \"8359d0390eef30b675a3ec6c358035bd653df786aaa58c2b598ab1b84bc3cf51\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 06:37:18.452170 containerd[1642]: time="2026-01-14T06:37:18.451459484Z" level=info msg="Container 718db13c85015555d435065e654f3f3fdd5ffab202abdf8b8e76ce3e1a83d948: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:37:18.467478 containerd[1642]: time="2026-01-14T06:37:18.467343702Z" level=info msg="CreateContainer within sandbox \"8359d0390eef30b675a3ec6c358035bd653df786aaa58c2b598ab1b84bc3cf51\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"718db13c85015555d435065e654f3f3fdd5ffab202abdf8b8e76ce3e1a83d948\"" Jan 14 06:37:18.468982 containerd[1642]: time="2026-01-14T06:37:18.468920832Z" level=info msg="StartContainer for \"718db13c85015555d435065e654f3f3fdd5ffab202abdf8b8e76ce3e1a83d948\"" Jan 14 06:37:18.472134 containerd[1642]: time="2026-01-14T06:37:18.472085561Z" level=info msg="connecting to shim 718db13c85015555d435065e654f3f3fdd5ffab202abdf8b8e76ce3e1a83d948" address="unix:///run/containerd/s/ce51acf4354c753573965a1f64d12f04841279d821b022b08a71628243b57652" protocol=ttrpc version=3 Jan 14 06:37:18.516552 systemd[1]: Started cri-containerd-718db13c85015555d435065e654f3f3fdd5ffab202abdf8b8e76ce3e1a83d948.scope - libcontainer container 718db13c85015555d435065e654f3f3fdd5ffab202abdf8b8e76ce3e1a83d948. 
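The pull above reports 23558205 bytes read for quay.io/tigera/operator:v1.38.7 over 3.615792436s. A quick back-of-the-envelope check of the effective transfer rate (illustration only; both figures are taken from the containerd lines above):

    # Effective pull rate from the values logged above.
    bytes_read = 23_558_205      # "bytes read" reported when pulling stopped
    duration_s = 3.615792436     # pull duration reported by containerd
    print(f"{bytes_read / duration_s / 1e6:.2f} MB/s")   # roughly 6.5 MB/s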
Jan 14 06:37:18.539000 audit: BPF prog-id=150 op=LOAD Jan 14 06:37:18.539000 audit: BPF prog-id=151 op=LOAD Jan 14 06:37:18.539000 audit[3279]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3097 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:18.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731386462313363383530313535353564343335303635653635346633 Jan 14 06:37:18.540000 audit: BPF prog-id=151 op=UNLOAD Jan 14 06:37:18.540000 audit[3279]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:18.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731386462313363383530313535353564343335303635653635346633 Jan 14 06:37:18.540000 audit: BPF prog-id=152 op=LOAD Jan 14 06:37:18.540000 audit[3279]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3097 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:18.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731386462313363383530313535353564343335303635653635346633 Jan 14 06:37:18.541000 audit: BPF prog-id=153 op=LOAD Jan 14 06:37:18.541000 audit[3279]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3097 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:18.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731386462313363383530313535353564343335303635653635346633 Jan 14 06:37:18.541000 audit: BPF prog-id=153 op=UNLOAD Jan 14 06:37:18.541000 audit[3279]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:18.541000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731386462313363383530313535353564343335303635653635346633 Jan 14 06:37:18.542000 audit: BPF prog-id=152 op=UNLOAD Jan 14 06:37:18.542000 audit[3279]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3097 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:18.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731386462313363383530313535353564343335303635653635346633 Jan 14 06:37:18.542000 audit: BPF prog-id=154 op=LOAD Jan 14 06:37:18.542000 audit[3279]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3097 pid=3279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:18.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731386462313363383530313535353564343335303635653635346633 Jan 14 06:37:18.592715 containerd[1642]: time="2026-01-14T06:37:18.592517444Z" level=info msg="StartContainer for \"718db13c85015555d435065e654f3f3fdd5ffab202abdf8b8e76ce3e1a83d948\" returns successfully" Jan 14 06:37:19.025375 systemd[1]: Started sshd@11-10.230.41.14:22-64.225.73.213:57568.service - OpenSSH per-connection server daemon (64.225.73.213:57568). Jan 14 06:37:19.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.41.14:22-64.225.73.213:57568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:37:19.375125 sshd[3312]: Invalid user postgres from 64.225.73.213 port 57568 Jan 14 06:37:19.461547 sshd[3312]: Connection closed by invalid user postgres 64.225.73.213 port 57568 [preauth] Jan 14 06:37:19.461000 audit[3312]: USER_ERR pid=3312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:37:19.470146 kernel: kauditd_printk_skb: 225 callbacks suppressed Jan 14 06:37:19.470331 kernel: audit: type=1109 audit(1768372639.461:523): pid=3312 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:37:19.478168 systemd[1]: sshd@11-10.230.41.14:22-64.225.73.213:57568.service: Deactivated successfully. Jan 14 06:37:19.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.41.14:22-64.225.73.213:57568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:37:19.484368 kernel: audit: type=1131 audit(1768372639.477:524): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.41.14:22-64.225.73.213:57568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:37:20.168878 kubelet[2966]: I0114 06:37:20.168745 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-g8jhv" podStartSLOduration=2.5468123289999998 podStartE2EDuration="6.168689302s" podCreationTimestamp="2026-01-14 06:37:14 +0000 UTC" firstStartedPulling="2026-01-14 06:37:14.758900709 +0000 UTC m=+5.363939782" lastFinishedPulling="2026-01-14 06:37:18.380777668 +0000 UTC m=+8.985816755" observedRunningTime="2026-01-14 06:37:18.808791824 +0000 UTC m=+9.413830918" watchObservedRunningTime="2026-01-14 06:37:20.168689302 +0000 UTC m=+10.773728398" Jan 14 06:37:27.363163 sudo[1952]: pam_unix(sudo:session): session closed for user root Jan 14 06:37:27.363000 audit[1952]: USER_END pid=1952 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:37:27.377326 kernel: audit: type=1106 audit(1768372647.363:525): pid=1952 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:37:27.363000 audit[1952]: CRED_DISP pid=1952 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:37:27.389305 kernel: audit: type=1104 audit(1768372647.363:526): pid=1952 uid=500 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:37:27.483463 sshd[1951]: Connection closed by 20.161.92.111 port 34224 Jan 14 06:37:27.485057 sshd-session[1947]: pam_unix(sshd:session): session closed for user core Jan 14 06:37:27.493000 audit[1947]: USER_END pid=1947 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:37:27.501512 kernel: audit: type=1106 audit(1768372647.493:527): pid=1947 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:37:27.507435 systemd[1]: sshd@9-10.230.41.14:22-20.161.92.111:34224.service: Deactivated successfully. Jan 14 06:37:27.513314 kernel: audit: type=1104 audit(1768372647.493:528): pid=1947 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:37:27.493000 audit[1947]: CRED_DISP pid=1947 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:37:27.515936 systemd[1]: session-12.scope: Deactivated successfully. 
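The pod_startup_latency_tracker line above for tigera-operator-7dcd859c48-g8jhv can be cross-checked from its own timestamps: the end-to-end duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration appears to exclude the image-pull window (firstStartedPulling to lastFinishedPulling). A rough Python check (illustration only; timestamps truncated to microseconds):

    from datetime import datetime
    fmt = "%Y-%m-%d %H:%M:%S.%f"
    created  = datetime.strptime("2026-01-14 06:37:14.000000", fmt)  # podCreationTimestamp
    pull_a   = datetime.strptime("2026-01-14 06:37:14.758900", fmt)  # firstStartedPulling
    pull_b   = datetime.strptime("2026-01-14 06:37:18.380777", fmt)  # lastFinishedPulling
    observed = datetime.strptime("2026-01-14 06:37:20.168689", fmt)  # watchObservedRunningTime
    e2e = (observed - created).total_seconds()
    slo = e2e - (pull_b - pull_a).total_seconds()
    print(round(e2e, 6), round(slo, 6))  # ~6.168689 and ~2.546812, matching the logged durations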
Jan 14 06:37:27.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.41.14:22-20.161.92.111:34224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:37:27.517342 systemd[1]: session-12.scope: Consumed 6.819s CPU time, 151M memory peak. Jan 14 06:37:27.522304 kernel: audit: type=1131 audit(1768372647.507:529): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.41.14:22-20.161.92.111:34224 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:37:27.525082 systemd-logind[1614]: Session 12 logged out. Waiting for processes to exit. Jan 14 06:37:27.530325 systemd-logind[1614]: Removed session 12. Jan 14 06:37:28.237000 audit[3366]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:28.237000 audit[3366]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe4ea42d90 a2=0 a3=7ffe4ea42d7c items=0 ppid=3073 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:28.250904 kernel: audit: type=1325 audit(1768372648.237:530): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:28.251048 kernel: audit: type=1300 audit(1768372648.237:530): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe4ea42d90 a2=0 a3=7ffe4ea42d7c items=0 ppid=3073 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:28.237000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:28.269310 kernel: audit: type=1327 audit(1768372648.237:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:28.260000 audit[3366]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:28.277327 kernel: audit: type=1325 audit(1768372648.260:531): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3366 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:28.260000 audit[3366]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe4ea42d90 a2=0 a3=0 items=0 ppid=3073 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:28.284303 kernel: audit: type=1300 audit(1768372648.260:531): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe4ea42d90 a2=0 a3=0 items=0 ppid=3073 pid=3366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:28.260000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 
06:37:28.325000 audit[3368]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:28.325000 audit[3368]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffeb8fd87e0 a2=0 a3=7ffeb8fd87cc items=0 ppid=3073 pid=3368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:28.325000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:28.335000 audit[3368]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3368 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:28.335000 audit[3368]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffeb8fd87e0 a2=0 a3=0 items=0 ppid=3073 pid=3368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:28.335000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:32.190000 audit[3370]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:32.190000 audit[3370]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffdd98aa170 a2=0 a3=7ffdd98aa15c items=0 ppid=3073 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:32.190000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:32.196000 audit[3370]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:32.196000 audit[3370]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd98aa170 a2=0 a3=0 items=0 ppid=3073 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:32.196000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:32.283000 audit[3373]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:32.283000 audit[3373]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffdfe6d5ce0 a2=0 a3=7ffdfe6d5ccc items=0 ppid=3073 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:32.283000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:32.288000 audit[3373]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3373 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:32.288000 audit[3373]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdfe6d5ce0 a2=0 a3=0 items=0 ppid=3073 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:32.288000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:33.311000 audit[3375]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:33.342362 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 14 06:37:33.342488 kernel: audit: type=1325 audit(1768372653.311:538): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:33.342566 kernel: audit: type=1300 audit(1768372653.311:538): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffef78da5f0 a2=0 a3=7ffef78da5dc items=0 ppid=3073 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:33.342619 kernel: audit: type=1327 audit(1768372653.311:538): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:33.342713 kernel: audit: type=1325 audit(1768372653.337:539): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:33.311000 audit[3375]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffef78da5f0 a2=0 a3=7ffef78da5dc items=0 ppid=3073 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:33.311000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:33.337000 audit[3375]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:33.337000 audit[3375]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffef78da5f0 a2=0 a3=0 items=0 ppid=3073 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:33.347911 kernel: audit: type=1300 audit(1768372653.337:539): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffef78da5f0 a2=0 a3=0 items=0 ppid=3073 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:33.337000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:33.352748 kernel: audit: type=1327 audit(1768372653.337:539): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:34.497087 kubelet[2966]: I0114 06:37:34.495613 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2a056331-a1f1-442d-a003-b5bb0d573018-typha-certs\") pod \"calico-typha-7d694b5fc4-hvbpt\" (UID: \"2a056331-a1f1-442d-a003-b5bb0d573018\") " pod="calico-system/calico-typha-7d694b5fc4-hvbpt" Jan 14 06:37:34.497087 kubelet[2966]: I0114 06:37:34.495890 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrgf9\" (UniqueName: \"kubernetes.io/projected/2a056331-a1f1-442d-a003-b5bb0d573018-kube-api-access-jrgf9\") pod \"calico-typha-7d694b5fc4-hvbpt\" (UID: \"2a056331-a1f1-442d-a003-b5bb0d573018\") " pod="calico-system/calico-typha-7d694b5fc4-hvbpt" Jan 14 06:37:34.497087 kubelet[2966]: I0114 06:37:34.495958 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a056331-a1f1-442d-a003-b5bb0d573018-tigera-ca-bundle\") pod \"calico-typha-7d694b5fc4-hvbpt\" (UID: \"2a056331-a1f1-442d-a003-b5bb0d573018\") " pod="calico-system/calico-typha-7d694b5fc4-hvbpt" Jan 14 06:37:34.510140 systemd[1]: Created slice kubepods-besteffort-pod2a056331_a1f1_442d_a003_b5bb0d573018.slice - libcontainer container kubepods-besteffort-pod2a056331_a1f1_442d_a003_b5bb0d573018.slice. Jan 14 06:37:34.529000 audit[3377]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:34.542293 kernel: audit: type=1325 audit(1768372654.529:540): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:34.529000 audit[3377]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc00bc39c0 a2=0 a3=7ffc00bc39ac items=0 ppid=3073 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:34.555296 kernel: audit: type=1300 audit(1768372654.529:540): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc00bc39c0 a2=0 a3=7ffc00bc39ac items=0 ppid=3073 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:34.529000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:34.563306 kernel: audit: type=1327 audit(1768372654.529:540): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:34.545000 audit[3377]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:34.567307 kernel: audit: type=1325 audit(1768372654.545:541): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:34.545000 audit[3377]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc00bc39c0 a2=0 a3=0 items=0 ppid=3073 pid=3377 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:34.545000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:34.824496 containerd[1642]: time="2026-01-14T06:37:34.823980192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d694b5fc4-hvbpt,Uid:2a056331-a1f1-442d-a003-b5bb0d573018,Namespace:calico-system,Attempt:0,}" Jan 14 06:37:34.967331 containerd[1642]: time="2026-01-14T06:37:34.966957051Z" level=info msg="connecting to shim dda3d2ba60cbb047df115269a05e847d5736a06090190acb2c893cce3988a00f" address="unix:///run/containerd/s/279791f20116831c2cb753d616b4041fa235ffd0711ff0e14a825a9ffbb9a053" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:37:35.055543 systemd[1]: Created slice kubepods-besteffort-pod3d8610e3_9256_4dae_9a84_a9510e9222c9.slice - libcontainer container kubepods-besteffort-pod3d8610e3_9256_4dae_9a84_a9510e9222c9.slice. Jan 14 06:37:35.083585 systemd[1]: Started cri-containerd-dda3d2ba60cbb047df115269a05e847d5736a06090190acb2c893cce3988a00f.scope - libcontainer container dda3d2ba60cbb047df115269a05e847d5736a06090190acb2c893cce3988a00f. Jan 14 06:37:35.101557 kubelet[2966]: I0114 06:37:35.101490 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3d8610e3-9256-4dae-9a84-a9510e9222c9-var-lib-calico\") pod \"calico-node-smm6n\" (UID: \"3d8610e3-9256-4dae-9a84-a9510e9222c9\") " pod="calico-system/calico-node-smm6n" Jan 14 06:37:35.101808 kubelet[2966]: I0114 06:37:35.101578 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3d8610e3-9256-4dae-9a84-a9510e9222c9-var-run-calico\") pod \"calico-node-smm6n\" (UID: \"3d8610e3-9256-4dae-9a84-a9510e9222c9\") " pod="calico-system/calico-node-smm6n" Jan 14 06:37:35.101808 kubelet[2966]: I0114 06:37:35.101661 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3d8610e3-9256-4dae-9a84-a9510e9222c9-policysync\") pod \"calico-node-smm6n\" (UID: \"3d8610e3-9256-4dae-9a84-a9510e9222c9\") " pod="calico-system/calico-node-smm6n" Jan 14 06:37:35.101808 kubelet[2966]: I0114 06:37:35.101714 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3d8610e3-9256-4dae-9a84-a9510e9222c9-cni-net-dir\") pod \"calico-node-smm6n\" (UID: \"3d8610e3-9256-4dae-9a84-a9510e9222c9\") " pod="calico-system/calico-node-smm6n" Jan 14 06:37:35.101808 kubelet[2966]: I0114 06:37:35.101757 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3d8610e3-9256-4dae-9a84-a9510e9222c9-flexvol-driver-host\") pod \"calico-node-smm6n\" (UID: \"3d8610e3-9256-4dae-9a84-a9510e9222c9\") " pod="calico-system/calico-node-smm6n" Jan 14 06:37:35.101808 kubelet[2966]: I0114 06:37:35.101804 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d8610e3-9256-4dae-9a84-a9510e9222c9-xtables-lock\") pod 
\"calico-node-smm6n\" (UID: \"3d8610e3-9256-4dae-9a84-a9510e9222c9\") " pod="calico-system/calico-node-smm6n" Jan 14 06:37:35.102070 kubelet[2966]: I0114 06:37:35.101842 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3d8610e3-9256-4dae-9a84-a9510e9222c9-cni-bin-dir\") pod \"calico-node-smm6n\" (UID: \"3d8610e3-9256-4dae-9a84-a9510e9222c9\") " pod="calico-system/calico-node-smm6n" Jan 14 06:37:35.102070 kubelet[2966]: I0114 06:37:35.101870 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d8610e3-9256-4dae-9a84-a9510e9222c9-lib-modules\") pod \"calico-node-smm6n\" (UID: \"3d8610e3-9256-4dae-9a84-a9510e9222c9\") " pod="calico-system/calico-node-smm6n" Jan 14 06:37:35.102070 kubelet[2966]: I0114 06:37:35.101969 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3d8610e3-9256-4dae-9a84-a9510e9222c9-cni-log-dir\") pod \"calico-node-smm6n\" (UID: \"3d8610e3-9256-4dae-9a84-a9510e9222c9\") " pod="calico-system/calico-node-smm6n" Jan 14 06:37:35.102070 kubelet[2966]: I0114 06:37:35.102041 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d8610e3-9256-4dae-9a84-a9510e9222c9-tigera-ca-bundle\") pod \"calico-node-smm6n\" (UID: \"3d8610e3-9256-4dae-9a84-a9510e9222c9\") " pod="calico-system/calico-node-smm6n" Jan 14 06:37:35.102263 kubelet[2966]: I0114 06:37:35.102074 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3d8610e3-9256-4dae-9a84-a9510e9222c9-node-certs\") pod \"calico-node-smm6n\" (UID: \"3d8610e3-9256-4dae-9a84-a9510e9222c9\") " pod="calico-system/calico-node-smm6n" Jan 14 06:37:35.102263 kubelet[2966]: I0114 06:37:35.102102 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmwzr\" (UniqueName: \"kubernetes.io/projected/3d8610e3-9256-4dae-9a84-a9510e9222c9-kube-api-access-xmwzr\") pod \"calico-node-smm6n\" (UID: \"3d8610e3-9256-4dae-9a84-a9510e9222c9\") " pod="calico-system/calico-node-smm6n" Jan 14 06:37:35.152000 audit: BPF prog-id=155 op=LOAD Jan 14 06:37:35.153000 audit: BPF prog-id=156 op=LOAD Jan 14 06:37:35.153000 audit[3400]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=3388 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:35.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464613364326261363063626230343764663131353236396130356538 Jan 14 06:37:35.153000 audit: BPF prog-id=156 op=UNLOAD Jan 14 06:37:35.153000 audit[3400]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3388 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:35.153000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464613364326261363063626230343764663131353236396130356538 Jan 14 06:37:35.153000 audit: BPF prog-id=157 op=LOAD Jan 14 06:37:35.153000 audit[3400]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3388 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:35.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464613364326261363063626230343764663131353236396130356538 Jan 14 06:37:35.153000 audit: BPF prog-id=158 op=LOAD Jan 14 06:37:35.153000 audit[3400]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3388 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:35.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464613364326261363063626230343764663131353236396130356538 Jan 14 06:37:35.153000 audit: BPF prog-id=158 op=UNLOAD Jan 14 06:37:35.153000 audit[3400]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3388 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:35.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464613364326261363063626230343764663131353236396130356538 Jan 14 06:37:35.153000 audit: BPF prog-id=157 op=UNLOAD Jan 14 06:37:35.153000 audit[3400]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3388 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:35.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464613364326261363063626230343764663131353236396130356538 Jan 14 06:37:35.153000 audit: BPF prog-id=159 op=LOAD Jan 14 06:37:35.153000 audit[3400]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3388 pid=3400 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:35.153000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464613364326261363063626230343764663131353236396130356538 Jan 14 06:37:35.210220 kubelet[2966]: E0114 06:37:35.208871 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8q82z" podUID="a91d79c6-e300-47ea-a44e-e654a57c8864" Jan 14 06:37:35.223446 kubelet[2966]: E0114 06:37:35.223375 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.237746 kubelet[2966]: W0114 06:37:35.237626 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.239676 kubelet[2966]: E0114 06:37:35.239134 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.239676 kubelet[2966]: E0114 06:37:35.239508 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.239676 kubelet[2966]: W0114 06:37:35.239541 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.239676 kubelet[2966]: E0114 06:37:35.239572 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.258104 kubelet[2966]: E0114 06:37:35.258064 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.258264 kubelet[2966]: W0114 06:37:35.258112 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.258264 kubelet[2966]: E0114 06:37:35.258147 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.304306 kubelet[2966]: E0114 06:37:35.304087 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.304306 kubelet[2966]: W0114 06:37:35.304122 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.304306 kubelet[2966]: E0114 06:37:35.304154 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:35.310554 kubelet[2966]: E0114 06:37:35.310334 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.310554 kubelet[2966]: W0114 06:37:35.310480 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.310554 kubelet[2966]: E0114 06:37:35.310499 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.312786 kubelet[2966]: E0114 06:37:35.312516 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.312786 kubelet[2966]: W0114 06:37:35.312535 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.312786 kubelet[2966]: E0114 06:37:35.312551 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.313806 kubelet[2966]: E0114 06:37:35.313788 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.314000 kubelet[2966]: W0114 06:37:35.313881 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.314000 kubelet[2966]: E0114 06:37:35.313906 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.314769 kubelet[2966]: E0114 06:37:35.314714 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.315186 kubelet[2966]: W0114 06:37:35.314744 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.315186 kubelet[2966]: E0114 06:37:35.315063 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.315708 kubelet[2966]: E0114 06:37:35.315653 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.315988 kubelet[2966]: W0114 06:37:35.315966 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.316725 kubelet[2966]: E0114 06:37:35.316311 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:35.317096 kubelet[2966]: E0114 06:37:35.317077 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.317208 kubelet[2966]: W0114 06:37:35.317187 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.317858 kubelet[2966]: E0114 06:37:35.317696 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.318689 kubelet[2966]: E0114 06:37:35.318402 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.318689 kubelet[2966]: W0114 06:37:35.318421 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.318689 kubelet[2966]: E0114 06:37:35.318451 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.321012 kubelet[2966]: E0114 06:37:35.320607 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.321012 kubelet[2966]: W0114 06:37:35.320624 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.321012 kubelet[2966]: E0114 06:37:35.320640 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.321012 kubelet[2966]: E0114 06:37:35.320881 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.321012 kubelet[2966]: W0114 06:37:35.320895 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.321012 kubelet[2966]: E0114 06:37:35.320952 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.321986 kubelet[2966]: E0114 06:37:35.321570 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.321986 kubelet[2966]: W0114 06:37:35.321587 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.321986 kubelet[2966]: E0114 06:37:35.321604 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:35.321986 kubelet[2966]: E0114 06:37:35.321842 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.321986 kubelet[2966]: W0114 06:37:35.321855 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.321986 kubelet[2966]: E0114 06:37:35.321868 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.323441 kubelet[2966]: E0114 06:37:35.322627 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.323441 kubelet[2966]: W0114 06:37:35.323221 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.323441 kubelet[2966]: E0114 06:37:35.323243 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.323817 kubelet[2966]: E0114 06:37:35.323789 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.323944 kubelet[2966]: W0114 06:37:35.323915 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.324349 kubelet[2966]: E0114 06:37:35.324135 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.325569 kubelet[2966]: E0114 06:37:35.325492 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.325569 kubelet[2966]: W0114 06:37:35.325510 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.326008 kubelet[2966]: E0114 06:37:35.325530 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.326587 kubelet[2966]: E0114 06:37:35.326407 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.326587 kubelet[2966]: W0114 06:37:35.326426 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.326587 kubelet[2966]: E0114 06:37:35.326454 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:35.331683 kubelet[2966]: E0114 06:37:35.328647 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.331683 kubelet[2966]: W0114 06:37:35.328670 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.331683 kubelet[2966]: E0114 06:37:35.328688 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.331683 kubelet[2966]: E0114 06:37:35.330827 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.331683 kubelet[2966]: W0114 06:37:35.330843 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.331683 kubelet[2966]: E0114 06:37:35.330858 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.331683 kubelet[2966]: E0114 06:37:35.331391 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.331683 kubelet[2966]: W0114 06:37:35.331405 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.331683 kubelet[2966]: E0114 06:37:35.331419 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.332725 kubelet[2966]: E0114 06:37:35.332648 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.332725 kubelet[2966]: W0114 06:37:35.332666 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.332725 kubelet[2966]: E0114 06:37:35.332682 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.336170 kubelet[2966]: E0114 06:37:35.334428 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.336170 kubelet[2966]: W0114 06:37:35.334463 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.336170 kubelet[2966]: E0114 06:37:35.334491 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:35.336170 kubelet[2966]: I0114 06:37:35.334536 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a91d79c6-e300-47ea-a44e-e654a57c8864-socket-dir\") pod \"csi-node-driver-8q82z\" (UID: \"a91d79c6-e300-47ea-a44e-e654a57c8864\") " pod="calico-system/csi-node-driver-8q82z" Jan 14 06:37:35.345754 kubelet[2966]: E0114 06:37:35.345706 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.345754 kubelet[2966]: W0114 06:37:35.345743 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.345754 kubelet[2966]: E0114 06:37:35.345788 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.346621 kubelet[2966]: E0114 06:37:35.346181 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.346621 kubelet[2966]: W0114 06:37:35.346211 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.346621 kubelet[2966]: E0114 06:37:35.346242 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.347220 kubelet[2966]: E0114 06:37:35.346826 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.347220 kubelet[2966]: W0114 06:37:35.346840 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.347220 kubelet[2966]: E0114 06:37:35.346855 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.347220 kubelet[2966]: I0114 06:37:35.347104 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhn5x\" (UniqueName: \"kubernetes.io/projected/a91d79c6-e300-47ea-a44e-e654a57c8864-kube-api-access-bhn5x\") pod \"csi-node-driver-8q82z\" (UID: \"a91d79c6-e300-47ea-a44e-e654a57c8864\") " pod="calico-system/csi-node-driver-8q82z" Jan 14 06:37:35.347947 kubelet[2966]: E0114 06:37:35.347887 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.347947 kubelet[2966]: W0114 06:37:35.347905 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.348187 kubelet[2966]: E0114 06:37:35.348059 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:35.349294 kubelet[2966]: E0114 06:37:35.348927 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.349294 kubelet[2966]: W0114 06:37:35.348946 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.349600 kubelet[2966]: E0114 06:37:35.349572 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.351768 kubelet[2966]: E0114 06:37:35.350939 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.351768 kubelet[2966]: W0114 06:37:35.351002 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.351768 kubelet[2966]: E0114 06:37:35.351033 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.351768 kubelet[2966]: I0114 06:37:35.351118 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a91d79c6-e300-47ea-a44e-e654a57c8864-varrun\") pod \"csi-node-driver-8q82z\" (UID: \"a91d79c6-e300-47ea-a44e-e654a57c8864\") " pod="calico-system/csi-node-driver-8q82z" Jan 14 06:37:35.351768 kubelet[2966]: E0114 06:37:35.351487 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.351768 kubelet[2966]: W0114 06:37:35.351502 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.351768 kubelet[2966]: E0114 06:37:35.351532 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.351768 kubelet[2966]: I0114 06:37:35.351559 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a91d79c6-e300-47ea-a44e-e654a57c8864-registration-dir\") pod \"csi-node-driver-8q82z\" (UID: \"a91d79c6-e300-47ea-a44e-e654a57c8864\") " pod="calico-system/csi-node-driver-8q82z" Jan 14 06:37:35.353437 kubelet[2966]: E0114 06:37:35.353383 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.353437 kubelet[2966]: W0114 06:37:35.353407 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.353437 kubelet[2966]: E0114 06:37:35.353434 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:35.353858 kubelet[2966]: E0114 06:37:35.353650 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.353858 kubelet[2966]: W0114 06:37:35.353663 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.353858 kubelet[2966]: E0114 06:37:35.353678 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.355611 kubelet[2966]: E0114 06:37:35.355439 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.355611 kubelet[2966]: W0114 06:37:35.355476 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.355611 kubelet[2966]: E0114 06:37:35.355501 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.355611 kubelet[2966]: I0114 06:37:35.355545 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a91d79c6-e300-47ea-a44e-e654a57c8864-kubelet-dir\") pod \"csi-node-driver-8q82z\" (UID: \"a91d79c6-e300-47ea-a44e-e654a57c8864\") " pod="calico-system/csi-node-driver-8q82z" Jan 14 06:37:35.356203 kubelet[2966]: E0114 06:37:35.355948 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.356203 kubelet[2966]: W0114 06:37:35.355965 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.356203 kubelet[2966]: E0114 06:37:35.356006 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.356814 kubelet[2966]: E0114 06:37:35.356378 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.356814 kubelet[2966]: W0114 06:37:35.356393 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.356814 kubelet[2966]: E0114 06:37:35.356417 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:35.357972 kubelet[2966]: E0114 06:37:35.357948 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.357972 kubelet[2966]: W0114 06:37:35.357969 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.358540 kubelet[2966]: E0114 06:37:35.357986 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.358540 kubelet[2966]: E0114 06:37:35.358474 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.358540 kubelet[2966]: W0114 06:37:35.358490 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.358540 kubelet[2966]: E0114 06:37:35.358504 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.368224 containerd[1642]: time="2026-01-14T06:37:35.367986976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-smm6n,Uid:3d8610e3-9256-4dae-9a84-a9510e9222c9,Namespace:calico-system,Attempt:0,}" Jan 14 06:37:35.370553 containerd[1642]: time="2026-01-14T06:37:35.370498919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d694b5fc4-hvbpt,Uid:2a056331-a1f1-442d-a003-b5bb0d573018,Namespace:calico-system,Attempt:0,} returns sandbox id \"dda3d2ba60cbb047df115269a05e847d5736a06090190acb2c893cce3988a00f\"" Jan 14 06:37:35.377659 containerd[1642]: time="2026-01-14T06:37:35.377373382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 06:37:35.419704 containerd[1642]: time="2026-01-14T06:37:35.419629871Z" level=info msg="connecting to shim 73cfa2f5bab4878c0b4b77e6c334edb06ad6364fe9208c7a361144f00b8d3d6d" address="unix:///run/containerd/s/3cf2694c95b84c7fafa246060c9313c44a32403f10245d2b8d9cc520439367c3" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:37:35.458935 kubelet[2966]: E0114 06:37:35.458596 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.458935 kubelet[2966]: W0114 06:37:35.458627 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.458935 kubelet[2966]: E0114 06:37:35.458658 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:35.460039 kubelet[2966]: E0114 06:37:35.459986 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.460039 kubelet[2966]: W0114 06:37:35.460005 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.460624 kubelet[2966]: E0114 06:37:35.460514 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.461260 kubelet[2966]: E0114 06:37:35.460687 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.461260 kubelet[2966]: W0114 06:37:35.460703 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.461260 kubelet[2966]: E0114 06:37:35.460761 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.461260 kubelet[2966]: E0114 06:37:35.461105 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.461260 kubelet[2966]: W0114 06:37:35.461125 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.462661 kubelet[2966]: E0114 06:37:35.461349 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.463288 kubelet[2966]: E0114 06:37:35.463250 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.463396 kubelet[2966]: W0114 06:37:35.463375 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.463681 kubelet[2966]: E0114 06:37:35.463492 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.464997 kubelet[2966]: E0114 06:37:35.464976 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.465254 kubelet[2966]: W0114 06:37:35.465102 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.465254 kubelet[2966]: E0114 06:37:35.465129 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:35.465662 kubelet[2966]: E0114 06:37:35.465523 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.465662 kubelet[2966]: W0114 06:37:35.465542 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.465662 kubelet[2966]: E0114 06:37:35.465559 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.466165 kubelet[2966]: E0114 06:37:35.465976 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.466165 kubelet[2966]: W0114 06:37:35.465994 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.466165 kubelet[2966]: E0114 06:37:35.466009 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.466619 kubelet[2966]: E0114 06:37:35.466520 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.466619 kubelet[2966]: W0114 06:37:35.466538 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.466619 kubelet[2966]: E0114 06:37:35.466554 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.467440 kubelet[2966]: E0114 06:37:35.467420 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.467598 kubelet[2966]: W0114 06:37:35.467539 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.467831 kubelet[2966]: E0114 06:37:35.467695 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.468056 kubelet[2966]: E0114 06:37:35.468036 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.468304 kubelet[2966]: W0114 06:37:35.468135 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.468304 kubelet[2966]: E0114 06:37:35.468159 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:35.468565 systemd[1]: Started cri-containerd-73cfa2f5bab4878c0b4b77e6c334edb06ad6364fe9208c7a361144f00b8d3d6d.scope - libcontainer container 73cfa2f5bab4878c0b4b77e6c334edb06ad6364fe9208c7a361144f00b8d3d6d. Jan 14 06:37:35.470114 kubelet[2966]: E0114 06:37:35.469434 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.470114 kubelet[2966]: W0114 06:37:35.469449 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.470114 kubelet[2966]: E0114 06:37:35.469467 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.470114 kubelet[2966]: E0114 06:37:35.469716 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.470114 kubelet[2966]: W0114 06:37:35.469730 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.470114 kubelet[2966]: E0114 06:37:35.469744 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.470551 kubelet[2966]: E0114 06:37:35.470532 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.471088 kubelet[2966]: W0114 06:37:35.470913 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.471088 kubelet[2966]: E0114 06:37:35.470940 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.471474 kubelet[2966]: E0114 06:37:35.471398 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.471474 kubelet[2966]: W0114 06:37:35.471416 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.471474 kubelet[2966]: E0114 06:37:35.471431 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:35.472068 kubelet[2966]: E0114 06:37:35.472002 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.472068 kubelet[2966]: W0114 06:37:35.472034 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.472405 kubelet[2966]: E0114 06:37:35.472166 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.472791 kubelet[2966]: E0114 06:37:35.472773 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.472992 kubelet[2966]: W0114 06:37:35.472915 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.472992 kubelet[2966]: E0114 06:37:35.472946 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.473569 kubelet[2966]: E0114 06:37:35.473533 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.473802 kubelet[2966]: W0114 06:37:35.473664 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.473802 kubelet[2966]: E0114 06:37:35.473688 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.474264 kubelet[2966]: E0114 06:37:35.474110 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.474264 kubelet[2966]: W0114 06:37:35.474128 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.474264 kubelet[2966]: E0114 06:37:35.474144 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.474701 kubelet[2966]: E0114 06:37:35.474626 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.474701 kubelet[2966]: W0114 06:37:35.474643 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.474701 kubelet[2966]: E0114 06:37:35.474659 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:35.475308 kubelet[2966]: E0114 06:37:35.475202 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.475308 kubelet[2966]: W0114 06:37:35.475256 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.475535 kubelet[2966]: E0114 06:37:35.475419 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.476326 kubelet[2966]: E0114 06:37:35.476306 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.476523 kubelet[2966]: W0114 06:37:35.476445 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.476523 kubelet[2966]: E0114 06:37:35.476470 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.478421 kubelet[2966]: E0114 06:37:35.478392 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.478609 kubelet[2966]: W0114 06:37:35.478538 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.478609 kubelet[2966]: E0114 06:37:35.478565 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.479221 kubelet[2966]: E0114 06:37:35.479157 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.479221 kubelet[2966]: W0114 06:37:35.479175 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.479221 kubelet[2966]: E0114 06:37:35.479191 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.480209 kubelet[2966]: E0114 06:37:35.480112 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.480209 kubelet[2966]: W0114 06:37:35.480130 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.480209 kubelet[2966]: E0114 06:37:35.480145 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:35.514481 kubelet[2966]: E0114 06:37:35.514425 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:35.515189 kubelet[2966]: W0114 06:37:35.514568 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:35.515189 kubelet[2966]: E0114 06:37:35.514604 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:35.527000 audit: BPF prog-id=160 op=LOAD Jan 14 06:37:35.529000 audit: BPF prog-id=161 op=LOAD Jan 14 06:37:35.529000 audit[3488]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000174238 a2=98 a3=0 items=0 ppid=3478 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:35.529000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733636661326635626162343837386330623462373765366333333465 Jan 14 06:37:35.530000 audit: BPF prog-id=161 op=UNLOAD Jan 14 06:37:35.530000 audit[3488]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3478 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:35.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733636661326635626162343837386330623462373765366333333465 Jan 14 06:37:35.530000 audit: BPF prog-id=162 op=LOAD Jan 14 06:37:35.530000 audit[3488]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000174488 a2=98 a3=0 items=0 ppid=3478 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:35.530000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733636661326635626162343837386330623462373765366333333465 Jan 14 06:37:35.531000 audit: BPF prog-id=163 op=LOAD Jan 14 06:37:35.531000 audit[3488]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000174218 a2=98 a3=0 items=0 ppid=3478 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:35.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733636661326635626162343837386330623462373765366333333465 Jan 14 06:37:35.531000 audit: BPF prog-id=163 op=UNLOAD 
Jan 14 06:37:35.531000 audit[3488]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3478 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:35.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733636661326635626162343837386330623462373765366333333465 Jan 14 06:37:35.531000 audit: BPF prog-id=162 op=UNLOAD Jan 14 06:37:35.531000 audit[3488]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3478 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:35.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733636661326635626162343837386330623462373765366333333465 Jan 14 06:37:35.531000 audit: BPF prog-id=164 op=LOAD Jan 14 06:37:35.531000 audit[3488]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001746e8 a2=98 a3=0 items=0 ppid=3478 pid=3488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:35.531000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733636661326635626162343837386330623462373765366333333465 Jan 14 06:37:35.566380 containerd[1642]: time="2026-01-14T06:37:35.566256993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-smm6n,Uid:3d8610e3-9256-4dae-9a84-a9510e9222c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"73cfa2f5bab4878c0b4b77e6c334edb06ad6364fe9208c7a361144f00b8d3d6d\"" Jan 14 06:37:35.578000 audit[3543]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3543 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:35.578000 audit[3543]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc55febd80 a2=0 a3=7ffc55febd6c items=0 ppid=3073 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:35.578000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:35.582000 audit[3543]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3543 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:37:35.582000 audit[3543]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc55febd80 a2=0 a3=0 items=0 ppid=3073 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:35.582000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:37:36.664035 kubelet[2966]: E0114 06:37:36.663904 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8q82z" podUID="a91d79c6-e300-47ea-a44e-e654a57c8864" Jan 14 06:37:36.993397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount112711647.mount: Deactivated successfully. Jan 14 06:37:38.678220 kubelet[2966]: E0114 06:37:38.678140 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8q82z" podUID="a91d79c6-e300-47ea-a44e-e654a57c8864" Jan 14 06:37:39.015373 containerd[1642]: time="2026-01-14T06:37:39.014442942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:37:39.016086 containerd[1642]: time="2026-01-14T06:37:39.015660574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 14 06:37:39.017402 containerd[1642]: time="2026-01-14T06:37:39.016439652Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:37:39.019938 containerd[1642]: time="2026-01-14T06:37:39.019862345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:37:39.021099 containerd[1642]: time="2026-01-14T06:37:39.021050028Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.64347886s" Jan 14 06:37:39.021297 containerd[1642]: time="2026-01-14T06:37:39.021245988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 06:37:39.022721 containerd[1642]: time="2026-01-14T06:37:39.022687315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 06:37:39.052161 containerd[1642]: time="2026-01-14T06:37:39.052104420Z" level=info msg="CreateContainer within sandbox \"dda3d2ba60cbb047df115269a05e847d5736a06090190acb2c893cce3988a00f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 06:37:39.065810 containerd[1642]: time="2026-01-14T06:37:39.065726707Z" level=info msg="Container e5003f912a94f2a6bd7cf67dae937f5c537d080067ce3b829c18effb10dd5b8b: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:37:39.077872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2869887434.mount: Deactivated successfully. 
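The audit PROCTITLE records interleaved above carry the invoked command line as a hex string with NUL-separated arguments; the iptables-restore proctitle shown here decodes to "iptables-restore -w 5 -W 100000 --noflush --counters", and the longer runc proctitles decode the same way. A small standalone decoder for such values (a hypothetical helper written for this log, not part of any tool that appears in it):

package main

import (
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

// Decode an audit PROCTITLE value: the argv entries are hex-encoded and
// separated by NUL bytes, so replacing NULs with spaces recovers the command line.
func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: decode-proctitle <hex-string>")
		os.Exit(1)
	}
	raw, err := hex.DecodeString(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "invalid hex:", err)
		os.Exit(1)
	}
	fmt.Println(strings.ReplaceAll(string(raw), "\x00", " "))
}

Running it on the proctitle value at the start of this line prints the iptables-restore command line quoted above.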
Jan 14 06:37:39.102796 containerd[1642]: time="2026-01-14T06:37:39.102654090Z" level=info msg="CreateContainer within sandbox \"dda3d2ba60cbb047df115269a05e847d5736a06090190acb2c893cce3988a00f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e5003f912a94f2a6bd7cf67dae937f5c537d080067ce3b829c18effb10dd5b8b\"" Jan 14 06:37:39.104669 containerd[1642]: time="2026-01-14T06:37:39.104535328Z" level=info msg="StartContainer for \"e5003f912a94f2a6bd7cf67dae937f5c537d080067ce3b829c18effb10dd5b8b\"" Jan 14 06:37:39.106763 containerd[1642]: time="2026-01-14T06:37:39.106497249Z" level=info msg="connecting to shim e5003f912a94f2a6bd7cf67dae937f5c537d080067ce3b829c18effb10dd5b8b" address="unix:///run/containerd/s/279791f20116831c2cb753d616b4041fa235ffd0711ff0e14a825a9ffbb9a053" protocol=ttrpc version=3 Jan 14 06:37:39.192597 systemd[1]: Started cri-containerd-e5003f912a94f2a6bd7cf67dae937f5c537d080067ce3b829c18effb10dd5b8b.scope - libcontainer container e5003f912a94f2a6bd7cf67dae937f5c537d080067ce3b829c18effb10dd5b8b. Jan 14 06:37:39.221000 audit: BPF prog-id=165 op=LOAD Jan 14 06:37:39.227705 kernel: kauditd_printk_skb: 52 callbacks suppressed Jan 14 06:37:39.227823 kernel: audit: type=1334 audit(1768372659.221:560): prog-id=165 op=LOAD Jan 14 06:37:39.229000 audit: BPF prog-id=166 op=LOAD Jan 14 06:37:39.233345 kernel: audit: type=1334 audit(1768372659.229:561): prog-id=166 op=LOAD Jan 14 06:37:39.229000 audit[3554]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3388 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:39.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535303033663931326139346632613662643763663637646165393337 Jan 14 06:37:39.240842 kernel: audit: type=1300 audit(1768372659.229:561): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3388 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:39.240919 kernel: audit: type=1327 audit(1768372659.229:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535303033663931326139346632613662643763663637646165393337 Jan 14 06:37:39.229000 audit: BPF prog-id=166 op=UNLOAD Jan 14 06:37:39.229000 audit[3554]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3388 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:39.253723 kernel: audit: type=1334 audit(1768372659.229:562): prog-id=166 op=UNLOAD Jan 14 06:37:39.253804 kernel: audit: type=1300 audit(1768372659.229:562): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3388 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 14 06:37:39.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535303033663931326139346632613662643763663637646165393337 Jan 14 06:37:39.258744 kernel: audit: type=1327 audit(1768372659.229:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535303033663931326139346632613662643763663637646165393337 Jan 14 06:37:39.229000 audit: BPF prog-id=167 op=LOAD Jan 14 06:37:39.264181 kernel: audit: type=1334 audit(1768372659.229:563): prog-id=167 op=LOAD Jan 14 06:37:39.264263 kernel: audit: type=1300 audit(1768372659.229:563): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3388 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:39.229000 audit[3554]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3388 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:39.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535303033663931326139346632613662643763663637646165393337 Jan 14 06:37:39.270403 kernel: audit: type=1327 audit(1768372659.229:563): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535303033663931326139346632613662643763663637646165393337 Jan 14 06:37:39.229000 audit: BPF prog-id=168 op=LOAD Jan 14 06:37:39.229000 audit[3554]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3388 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:39.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535303033663931326139346632613662643763663637646165393337 Jan 14 06:37:39.229000 audit: BPF prog-id=168 op=UNLOAD Jan 14 06:37:39.229000 audit[3554]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3388 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:39.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535303033663931326139346632613662643763663637646165393337 Jan 14 06:37:39.230000 audit: BPF prog-id=167 op=UNLOAD Jan 14 
06:37:39.230000 audit[3554]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3388 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:39.230000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535303033663931326139346632613662643763663637646165393337 Jan 14 06:37:39.230000 audit: BPF prog-id=169 op=LOAD Jan 14 06:37:39.230000 audit[3554]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3388 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:39.230000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535303033663931326139346632613662643763663637646165393337 Jan 14 06:37:39.346678 containerd[1642]: time="2026-01-14T06:37:39.346409316Z" level=info msg="StartContainer for \"e5003f912a94f2a6bd7cf67dae937f5c537d080067ce3b829c18effb10dd5b8b\" returns successfully" Jan 14 06:37:39.892999 kubelet[2966]: I0114 06:37:39.892881 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7d694b5fc4-hvbpt" podStartSLOduration=2.246045784 podStartE2EDuration="5.89283855s" podCreationTimestamp="2026-01-14 06:37:34 +0000 UTC" firstStartedPulling="2026-01-14 06:37:35.375704386 +0000 UTC m=+25.980743460" lastFinishedPulling="2026-01-14 06:37:39.022497146 +0000 UTC m=+29.627536226" observedRunningTime="2026-01-14 06:37:39.892630006 +0000 UTC m=+30.497669112" watchObservedRunningTime="2026-01-14 06:37:39.89283855 +0000 UTC m=+30.497877644" Jan 14 06:37:39.964871 kubelet[2966]: E0114 06:37:39.964822 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:39.964871 kubelet[2966]: W0114 06:37:39.964863 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:39.965513 kubelet[2966]: E0114 06:37:39.964905 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:39.965513 kubelet[2966]: E0114 06:37:39.965184 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:39.965513 kubelet[2966]: W0114 06:37:39.965199 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:39.965513 kubelet[2966]: E0114 06:37:39.965214 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:39.965808 kubelet[2966]: E0114 06:37:39.965527 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:39.965808 kubelet[2966]: W0114 06:37:39.965541 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:39.965808 kubelet[2966]: E0114 06:37:39.965556 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:39.969338 kubelet[2966]: E0114 06:37:39.969306 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:39.969338 kubelet[2966]: W0114 06:37:39.969330 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:39.969463 kubelet[2966]: E0114 06:37:39.969347 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:39.969739 kubelet[2966]: E0114 06:37:39.969709 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:39.969812 kubelet[2966]: W0114 06:37:39.969754 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:39.969812 kubelet[2966]: E0114 06:37:39.969773 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:39.970122 kubelet[2966]: E0114 06:37:39.970102 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:39.970122 kubelet[2966]: W0114 06:37:39.970121 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:39.970239 kubelet[2966]: E0114 06:37:39.970139 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:39.970537 kubelet[2966]: E0114 06:37:39.970517 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:39.970537 kubelet[2966]: W0114 06:37:39.970536 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:39.970649 kubelet[2966]: E0114 06:37:39.970553 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:39.970819 kubelet[2966]: E0114 06:37:39.970800 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:39.970819 kubelet[2966]: W0114 06:37:39.970818 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:39.970914 kubelet[2966]: E0114 06:37:39.970834 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:39.971159 kubelet[2966]: E0114 06:37:39.971139 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:39.971159 kubelet[2966]: W0114 06:37:39.971157 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:39.971257 kubelet[2966]: E0114 06:37:39.971172 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:39.971474 kubelet[2966]: E0114 06:37:39.971456 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:39.971524 kubelet[2966]: W0114 06:37:39.971474 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:39.971524 kubelet[2966]: E0114 06:37:39.971490 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:39.971738 kubelet[2966]: E0114 06:37:39.971717 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:39.971738 kubelet[2966]: W0114 06:37:39.971736 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:39.971836 kubelet[2966]: E0114 06:37:39.971751 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:39.972002 kubelet[2966]: E0114 06:37:39.971983 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:39.972002 kubelet[2966]: W0114 06:37:39.972001 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:39.972106 kubelet[2966]: E0114 06:37:39.972016 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:39.972303 kubelet[2966]: E0114 06:37:39.972260 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:39.972303 kubelet[2966]: W0114 06:37:39.972300 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:39.972420 kubelet[2966]: E0114 06:37:39.972319 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:39.972573 kubelet[2966]: E0114 06:37:39.972554 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:39.972573 kubelet[2966]: W0114 06:37:39.972567 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:39.972828 kubelet[2966]: E0114 06:37:39.972580 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:39.972828 kubelet[2966]: E0114 06:37:39.972811 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:39.972828 kubelet[2966]: W0114 06:37:39.972823 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:39.972975 kubelet[2966]: E0114 06:37:39.972837 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.003758 kubelet[2966]: E0114 06:37:40.003436 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.003758 kubelet[2966]: W0114 06:37:40.003466 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.003758 kubelet[2966]: E0114 06:37:40.003503 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.004466 kubelet[2966]: E0114 06:37:40.004409 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.004739 kubelet[2966]: W0114 06:37:40.004598 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.004739 kubelet[2966]: E0114 06:37:40.004655 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:40.005169 kubelet[2966]: E0114 06:37:40.004988 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.005169 kubelet[2966]: W0114 06:37:40.005014 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.005169 kubelet[2966]: E0114 06:37:40.005059 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.005429 kubelet[2966]: E0114 06:37:40.005393 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.005429 kubelet[2966]: W0114 06:37:40.005409 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.005817 kubelet[2966]: E0114 06:37:40.005440 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.005817 kubelet[2966]: E0114 06:37:40.005755 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.006047 kubelet[2966]: W0114 06:37:40.005812 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.006047 kubelet[2966]: E0114 06:37:40.005850 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.006663 kubelet[2966]: E0114 06:37:40.006203 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.006663 kubelet[2966]: W0114 06:37:40.006222 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.006663 kubelet[2966]: E0114 06:37:40.006263 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:40.006663 kubelet[2966]: E0114 06:37:40.006548 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.006663 kubelet[2966]: W0114 06:37:40.006562 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.007856 kubelet[2966]: E0114 06:37:40.007780 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.007856 kubelet[2966]: W0114 06:37:40.007801 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.007856 kubelet[2966]: E0114 06:37:40.007845 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.008306 kubelet[2966]: E0114 06:37:40.008040 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.009441 kubelet[2966]: E0114 06:37:40.009415 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.009523 kubelet[2966]: W0114 06:37:40.009437 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.009523 kubelet[2966]: E0114 06:37:40.009471 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:40.010068 kubelet[2966]: E0114 06:37:40.010020 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.010068 kubelet[2966]: W0114 06:37:40.010049 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.016899 kubelet[2966]: E0114 06:37:40.016853 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.016899 kubelet[2966]: W0114 06:37:40.016892 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.018382 kubelet[2966]: E0114 06:37:40.018360 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.018513 kubelet[2966]: W0114 06:37:40.018382 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.018513 kubelet[2966]: E0114 06:37:40.018409 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.018513 kubelet[2966]: E0114 06:37:40.018466 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.019084 kubelet[2966]: E0114 06:37:40.018763 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.019084 kubelet[2966]: W0114 06:37:40.018787 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.019084 kubelet[2966]: E0114 06:37:40.018804 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.019924 kubelet[2966]: E0114 06:37:40.019523 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.019924 kubelet[2966]: W0114 06:37:40.019605 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.019924 kubelet[2966]: E0114 06:37:40.019663 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:40.020947 kubelet[2966]: E0114 06:37:40.020734 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.020947 kubelet[2966]: W0114 06:37:40.020842 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.020947 kubelet[2966]: E0114 06:37:40.020863 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.020947 kubelet[2966]: E0114 06:37:40.020890 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.021310 kubelet[2966]: E0114 06:37:40.021221 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.021379 kubelet[2966]: W0114 06:37:40.021314 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.021446 kubelet[2966]: E0114 06:37:40.021391 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.022000 kubelet[2966]: E0114 06:37:40.021785 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.022000 kubelet[2966]: W0114 06:37:40.021805 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.022000 kubelet[2966]: E0114 06:37:40.021831 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.023290 kubelet[2966]: E0114 06:37:40.022533 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.023290 kubelet[2966]: W0114 06:37:40.022585 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.023290 kubelet[2966]: E0114 06:37:40.022604 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:40.664635 kubelet[2966]: E0114 06:37:40.664516 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8q82z" podUID="a91d79c6-e300-47ea-a44e-e654a57c8864" Jan 14 06:37:40.873138 kubelet[2966]: I0114 06:37:40.872415 2966 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 06:37:40.878783 kubelet[2966]: E0114 06:37:40.878303 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.878783 kubelet[2966]: W0114 06:37:40.878340 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.878783 kubelet[2966]: E0114 06:37:40.878365 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.879484 kubelet[2966]: E0114 06:37:40.879327 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.879484 kubelet[2966]: W0114 06:37:40.879354 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.879484 kubelet[2966]: E0114 06:37:40.879373 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.880168 kubelet[2966]: E0114 06:37:40.880137 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.880313 kubelet[2966]: W0114 06:37:40.880268 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.880444 kubelet[2966]: E0114 06:37:40.880421 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.881611 kubelet[2966]: E0114 06:37:40.881458 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.881611 kubelet[2966]: W0114 06:37:40.881477 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.881611 kubelet[2966]: E0114 06:37:40.881493 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:40.882434 kubelet[2966]: E0114 06:37:40.882406 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.882660 kubelet[2966]: W0114 06:37:40.882480 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.882660 kubelet[2966]: E0114 06:37:40.882500 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.883253 kubelet[2966]: E0114 06:37:40.883216 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.883453 kubelet[2966]: W0114 06:37:40.883234 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.883453 kubelet[2966]: E0114 06:37:40.883406 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.883886 kubelet[2966]: E0114 06:37:40.883848 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.884059 kubelet[2966]: W0114 06:37:40.883866 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.884059 kubelet[2966]: E0114 06:37:40.884001 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.884427 kubelet[2966]: E0114 06:37:40.884399 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.884736 kubelet[2966]: W0114 06:37:40.884565 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.884736 kubelet[2966]: E0114 06:37:40.884591 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.885753 kubelet[2966]: E0114 06:37:40.885729 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.886102 kubelet[2966]: W0114 06:37:40.885828 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.886102 kubelet[2966]: E0114 06:37:40.885851 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:40.886577 kubelet[2966]: E0114 06:37:40.886485 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.886577 kubelet[2966]: W0114 06:37:40.886505 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.886861 kubelet[2966]: E0114 06:37:40.886702 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.887293 kubelet[2966]: E0114 06:37:40.887175 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.887293 kubelet[2966]: W0114 06:37:40.887202 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.887293 kubelet[2966]: E0114 06:37:40.887219 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.887908 kubelet[2966]: E0114 06:37:40.887833 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.887908 kubelet[2966]: W0114 06:37:40.887857 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.887908 kubelet[2966]: E0114 06:37:40.887876 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.888681 kubelet[2966]: E0114 06:37:40.888576 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.888681 kubelet[2966]: W0114 06:37:40.888611 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.888681 kubelet[2966]: E0114 06:37:40.888628 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.889227 kubelet[2966]: E0114 06:37:40.889147 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.889227 kubelet[2966]: W0114 06:37:40.889166 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.889227 kubelet[2966]: E0114 06:37:40.889183 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:40.889922 kubelet[2966]: E0114 06:37:40.889859 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.889922 kubelet[2966]: W0114 06:37:40.889878 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.890137 kubelet[2966]: E0114 06:37:40.890064 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.891607 containerd[1642]: time="2026-01-14T06:37:40.891529359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:37:40.893450 containerd[1642]: time="2026-01-14T06:37:40.893406314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Jan 14 06:37:40.894193 containerd[1642]: time="2026-01-14T06:37:40.894124945Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:37:40.897860 containerd[1642]: time="2026-01-14T06:37:40.897802535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:37:40.900094 containerd[1642]: time="2026-01-14T06:37:40.900036076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.877309193s" Jan 14 06:37:40.900094 containerd[1642]: time="2026-01-14T06:37:40.900085381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 06:37:40.903988 containerd[1642]: time="2026-01-14T06:37:40.903932232Z" level=info msg="CreateContainer within sandbox \"73cfa2f5bab4878c0b4b77e6c334edb06ad6364fe9208c7a361144f00b8d3d6d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 06:37:40.917713 kubelet[2966]: E0114 06:37:40.917541 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.919014 kubelet[2966]: W0114 06:37:40.918423 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.919782 kubelet[2966]: E0114 06:37:40.918460 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:40.920126 kubelet[2966]: E0114 06:37:40.920107 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.920346 kubelet[2966]: W0114 06:37:40.920228 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.920865 kubelet[2966]: E0114 06:37:40.920843 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.921203 kubelet[2966]: E0114 06:37:40.921004 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.921385 kubelet[2966]: W0114 06:37:40.921363 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.921612 kubelet[2966]: E0114 06:37:40.921593 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.922427 kubelet[2966]: E0114 06:37:40.922336 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.922427 kubelet[2966]: W0114 06:37:40.922355 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.922710 kubelet[2966]: E0114 06:37:40.922378 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.923424 kubelet[2966]: E0114 06:37:40.923364 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.923573 kubelet[2966]: W0114 06:37:40.923540 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.923957 kubelet[2966]: E0114 06:37:40.923823 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.924387 kubelet[2966]: E0114 06:37:40.924369 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.924651 kubelet[2966]: W0114 06:37:40.924507 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.924976 kubelet[2966]: E0114 06:37:40.924939 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:40.925625 kubelet[2966]: E0114 06:37:40.925548 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.925806 kubelet[2966]: W0114 06:37:40.925696 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.926115 kubelet[2966]: E0114 06:37:40.926092 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.928009 kubelet[2966]: E0114 06:37:40.927982 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.928309 kubelet[2966]: W0114 06:37:40.928128 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.928978 kubelet[2966]: E0114 06:37:40.928772 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.929322 kubelet[2966]: W0114 06:37:40.929181 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.929663 kubelet[2966]: E0114 06:37:40.929061 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.929663 kubelet[2966]: E0114 06:37:40.929393 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.930105 kubelet[2966]: E0114 06:37:40.930070 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.930347 kubelet[2966]: W0114 06:37:40.930222 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.930347 kubelet[2966]: E0114 06:37:40.930253 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.933873 kubelet[2966]: E0114 06:37:40.933799 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.934140 kubelet[2966]: W0114 06:37:40.934006 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.934140 kubelet[2966]: E0114 06:37:40.934036 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:40.934813 kubelet[2966]: E0114 06:37:40.934753 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.935118 kubelet[2966]: W0114 06:37:40.934774 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.935336 kubelet[2966]: E0114 06:37:40.935243 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.936083 kubelet[2966]: E0114 06:37:40.936064 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.936380 kubelet[2966]: W0114 06:37:40.936179 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.936581 kubelet[2966]: E0114 06:37:40.936521 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.937904 kubelet[2966]: E0114 06:37:40.937663 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.937904 kubelet[2966]: W0114 06:37:40.937694 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.938481 kubelet[2966]: E0114 06:37:40.938413 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.939986 kubelet[2966]: E0114 06:37:40.939346 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.939986 kubelet[2966]: W0114 06:37:40.939664 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.939986 kubelet[2966]: E0114 06:37:40.939778 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.942106 kubelet[2966]: E0114 06:37:40.942086 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.942309 kubelet[2966]: W0114 06:37:40.942253 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.942765 kubelet[2966]: E0114 06:37:40.942704 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:37:40.945977 kubelet[2966]: E0114 06:37:40.945621 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.945977 kubelet[2966]: W0114 06:37:40.945674 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.945977 kubelet[2966]: E0114 06:37:40.945700 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.946489 kubelet[2966]: E0114 06:37:40.946376 2966 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:37:40.946489 kubelet[2966]: W0114 06:37:40.946396 2966 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:37:40.946489 kubelet[2966]: E0114 06:37:40.946445 2966 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:37:40.957406 containerd[1642]: time="2026-01-14T06:37:40.957350127Z" level=info msg="Container 7487b6b01c5fef5a561b1aab65c996258a502049be57af3b8b48b8c847e46d71: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:37:40.975011 containerd[1642]: time="2026-01-14T06:37:40.974959994Z" level=info msg="CreateContainer within sandbox \"73cfa2f5bab4878c0b4b77e6c334edb06ad6364fe9208c7a361144f00b8d3d6d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7487b6b01c5fef5a561b1aab65c996258a502049be57af3b8b48b8c847e46d71\"" Jan 14 06:37:40.975918 containerd[1642]: time="2026-01-14T06:37:40.975751814Z" level=info msg="StartContainer for \"7487b6b01c5fef5a561b1aab65c996258a502049be57af3b8b48b8c847e46d71\"" Jan 14 06:37:40.979720 containerd[1642]: time="2026-01-14T06:37:40.979684248Z" level=info msg="connecting to shim 7487b6b01c5fef5a561b1aab65c996258a502049be57af3b8b48b8c847e46d71" address="unix:///run/containerd/s/3cf2694c95b84c7fafa246060c9313c44a32403f10245d2b8d9cc520439367c3" protocol=ttrpc version=3 Jan 14 06:37:41.047590 systemd[1]: Started cri-containerd-7487b6b01c5fef5a561b1aab65c996258a502049be57af3b8b48b8c847e46d71.scope - libcontainer container 7487b6b01c5fef5a561b1aab65c996258a502049be57af3b8b48b8c847e46d71. 
Jan 14 06:37:41.133000 audit: BPF prog-id=170 op=LOAD Jan 14 06:37:41.133000 audit[3661]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=3478 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:41.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734383762366230316335666566356135363162316161623635633939 Jan 14 06:37:41.133000 audit: BPF prog-id=171 op=LOAD Jan 14 06:37:41.133000 audit[3661]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=3478 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:41.133000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734383762366230316335666566356135363162316161623635633939 Jan 14 06:37:41.134000 audit: BPF prog-id=171 op=UNLOAD Jan 14 06:37:41.134000 audit[3661]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3478 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:41.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734383762366230316335666566356135363162316161623635633939 Jan 14 06:37:41.134000 audit: BPF prog-id=170 op=UNLOAD Jan 14 06:37:41.134000 audit[3661]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3478 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:41.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734383762366230316335666566356135363162316161623635633939 Jan 14 06:37:41.134000 audit: BPF prog-id=172 op=LOAD Jan 14 06:37:41.134000 audit[3661]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=3478 pid=3661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:41.134000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734383762366230316335666566356135363162316161623635633939 Jan 14 06:37:41.182128 containerd[1642]: time="2026-01-14T06:37:41.181997427Z" level=info msg="StartContainer for 
\"7487b6b01c5fef5a561b1aab65c996258a502049be57af3b8b48b8c847e46d71\" returns successfully" Jan 14 06:37:41.202327 systemd[1]: cri-containerd-7487b6b01c5fef5a561b1aab65c996258a502049be57af3b8b48b8c847e46d71.scope: Deactivated successfully. Jan 14 06:37:41.207000 audit: BPF prog-id=172 op=UNLOAD Jan 14 06:37:41.239993 containerd[1642]: time="2026-01-14T06:37:41.239843363Z" level=info msg="received container exit event container_id:\"7487b6b01c5fef5a561b1aab65c996258a502049be57af3b8b48b8c847e46d71\" id:\"7487b6b01c5fef5a561b1aab65c996258a502049be57af3b8b48b8c847e46d71\" pid:3674 exited_at:{seconds:1768372661 nanos:209104742}" Jan 14 06:37:41.288661 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7487b6b01c5fef5a561b1aab65c996258a502049be57af3b8b48b8c847e46d71-rootfs.mount: Deactivated successfully. Jan 14 06:37:41.880298 containerd[1642]: time="2026-01-14T06:37:41.880197545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 06:37:42.664472 kubelet[2966]: E0114 06:37:42.664251 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8q82z" podUID="a91d79c6-e300-47ea-a44e-e654a57c8864" Jan 14 06:37:44.664333 kubelet[2966]: E0114 06:37:44.664091 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8q82z" podUID="a91d79c6-e300-47ea-a44e-e654a57c8864" Jan 14 06:37:46.664390 kubelet[2966]: E0114 06:37:46.664293 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8q82z" podUID="a91d79c6-e300-47ea-a44e-e654a57c8864" Jan 14 06:37:48.673310 kubelet[2966]: E0114 06:37:48.672930 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8q82z" podUID="a91d79c6-e300-47ea-a44e-e654a57c8864" Jan 14 06:37:48.824880 containerd[1642]: time="2026-01-14T06:37:48.824606975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:37:48.826003 containerd[1642]: time="2026-01-14T06:37:48.825736618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Jan 14 06:37:48.826789 containerd[1642]: time="2026-01-14T06:37:48.826753477Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:37:48.829971 containerd[1642]: time="2026-01-14T06:37:48.829896975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:37:48.831322 containerd[1642]: time="2026-01-14T06:37:48.830955752Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.950538041s" Jan 14 06:37:48.831322 containerd[1642]: time="2026-01-14T06:37:48.831003598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 06:37:48.842293 containerd[1642]: time="2026-01-14T06:37:48.841994430Z" level=info msg="CreateContainer within sandbox \"73cfa2f5bab4878c0b4b77e6c334edb06ad6364fe9208c7a361144f00b8d3d6d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 06:37:48.858406 containerd[1642]: time="2026-01-14T06:37:48.856466021Z" level=info msg="Container a358765c85d8da4ab8d877b555c260d58602f34ba2f08d1a0c16f37b6bef3d57: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:37:48.870920 containerd[1642]: time="2026-01-14T06:37:48.870870660Z" level=info msg="CreateContainer within sandbox \"73cfa2f5bab4878c0b4b77e6c334edb06ad6364fe9208c7a361144f00b8d3d6d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a358765c85d8da4ab8d877b555c260d58602f34ba2f08d1a0c16f37b6bef3d57\"" Jan 14 06:37:48.872414 containerd[1642]: time="2026-01-14T06:37:48.872329547Z" level=info msg="StartContainer for \"a358765c85d8da4ab8d877b555c260d58602f34ba2f08d1a0c16f37b6bef3d57\"" Jan 14 06:37:48.874558 containerd[1642]: time="2026-01-14T06:37:48.874419893Z" level=info msg="connecting to shim a358765c85d8da4ab8d877b555c260d58602f34ba2f08d1a0c16f37b6bef3d57" address="unix:///run/containerd/s/3cf2694c95b84c7fafa246060c9313c44a32403f10245d2b8d9cc520439367c3" protocol=ttrpc version=3 Jan 14 06:37:48.938561 systemd[1]: Started cri-containerd-a358765c85d8da4ab8d877b555c260d58602f34ba2f08d1a0c16f37b6bef3d57.scope - libcontainer container a358765c85d8da4ab8d877b555c260d58602f34ba2f08d1a0c16f37b6bef3d57. 
Jan 14 06:37:49.038953 kernel: kauditd_printk_skb: 28 callbacks suppressed Jan 14 06:37:49.039316 kernel: audit: type=1334 audit(1768372669.028:574): prog-id=173 op=LOAD Jan 14 06:37:49.028000 audit: BPF prog-id=173 op=LOAD Jan 14 06:37:49.028000 audit[3723]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3478 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:49.045326 kernel: audit: type=1300 audit(1768372669.028:574): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3478 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:49.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353837363563383564386461346162386438373762353535633236 Jan 14 06:37:49.051300 kernel: audit: type=1327 audit(1768372669.028:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353837363563383564386461346162386438373762353535633236 Jan 14 06:37:49.028000 audit: BPF prog-id=174 op=LOAD Jan 14 06:37:49.053318 kernel: audit: type=1334 audit(1768372669.028:575): prog-id=174 op=LOAD Jan 14 06:37:49.028000 audit[3723]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3478 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:49.059316 kernel: audit: type=1300 audit(1768372669.028:575): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3478 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:49.059416 kernel: audit: type=1327 audit(1768372669.028:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353837363563383564386461346162386438373762353535633236 Jan 14 06:37:49.028000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353837363563383564386461346162386438373762353535633236 Jan 14 06:37:49.036000 audit: BPF prog-id=174 op=UNLOAD Jan 14 06:37:49.065992 kernel: audit: type=1334 audit(1768372669.036:576): prog-id=174 op=UNLOAD Jan 14 06:37:49.036000 audit[3723]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3478 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:49.072304 kernel: audit: type=1300 
audit(1768372669.036:576): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3478 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:49.072702 kernel: audit: type=1327 audit(1768372669.036:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353837363563383564386461346162386438373762353535633236 Jan 14 06:37:49.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353837363563383564386461346162386438373762353535633236 Jan 14 06:37:49.036000 audit: BPF prog-id=173 op=UNLOAD Jan 14 06:37:49.078651 kernel: audit: type=1334 audit(1768372669.036:577): prog-id=173 op=UNLOAD Jan 14 06:37:49.036000 audit[3723]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3478 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:49.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353837363563383564386461346162386438373762353535633236 Jan 14 06:37:49.036000 audit: BPF prog-id=175 op=LOAD Jan 14 06:37:49.036000 audit[3723]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3478 pid=3723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:37:49.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133353837363563383564386461346162386438373762353535633236 Jan 14 06:37:49.109301 containerd[1642]: time="2026-01-14T06:37:49.109228841Z" level=info msg="StartContainer for \"a358765c85d8da4ab8d877b555c260d58602f34ba2f08d1a0c16f37b6bef3d57\" returns successfully" Jan 14 06:37:50.036167 systemd[1]: cri-containerd-a358765c85d8da4ab8d877b555c260d58602f34ba2f08d1a0c16f37b6bef3d57.scope: Deactivated successfully. Jan 14 06:37:50.036729 systemd[1]: cri-containerd-a358765c85d8da4ab8d877b555c260d58602f34ba2f08d1a0c16f37b6bef3d57.scope: Consumed 803ms CPU time, 163.1M memory peak, 3.9M read from disk, 171.3M written to disk. 
Jan 14 06:37:50.040000 audit: BPF prog-id=175 op=UNLOAD Jan 14 06:37:50.049349 containerd[1642]: time="2026-01-14T06:37:50.049293937Z" level=info msg="received container exit event container_id:\"a358765c85d8da4ab8d877b555c260d58602f34ba2f08d1a0c16f37b6bef3d57\" id:\"a358765c85d8da4ab8d877b555c260d58602f34ba2f08d1a0c16f37b6bef3d57\" pid:3735 exited_at:{seconds:1768372670 nanos:48189072}" Jan 14 06:37:50.106216 kubelet[2966]: I0114 06:37:50.103751 2966 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 06:37:50.138687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a358765c85d8da4ab8d877b555c260d58602f34ba2f08d1a0c16f37b6bef3d57-rootfs.mount: Deactivated successfully. Jan 14 06:37:50.211547 systemd[1]: Created slice kubepods-burstable-podf55e8add_9675_49f7_8240_772692184a74.slice - libcontainer container kubepods-burstable-podf55e8add_9675_49f7_8240_772692184a74.slice. Jan 14 06:37:50.256998 systemd[1]: Created slice kubepods-burstable-pod856b7d39_73f5_4a90_838f_5cffdb6afeaf.slice - libcontainer container kubepods-burstable-pod856b7d39_73f5_4a90_838f_5cffdb6afeaf.slice. Jan 14 06:37:50.274498 systemd[1]: Created slice kubepods-besteffort-podd6749f8c_3427_433c_a8c4_8f87f70b4d79.slice - libcontainer container kubepods-besteffort-podd6749f8c_3427_433c_a8c4_8f87f70b4d79.slice. Jan 14 06:37:50.300616 systemd[1]: Created slice kubepods-besteffort-podfcd4f250_b1e6_467c_90cd_24e53dcbe8e8.slice - libcontainer container kubepods-besteffort-podfcd4f250_b1e6_467c_90cd_24e53dcbe8e8.slice. Jan 14 06:37:50.308396 kubelet[2966]: I0114 06:37:50.308326 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h74t2\" (UniqueName: \"kubernetes.io/projected/a8d8745b-48bb-4a89-9b0b-07086983dbe4-kube-api-access-h74t2\") pod \"calico-apiserver-7bddfbd4b9-rswq7\" (UID: \"a8d8745b-48bb-4a89-9b0b-07086983dbe4\") " pod="calico-apiserver/calico-apiserver-7bddfbd4b9-rswq7" Jan 14 06:37:50.312906 kubelet[2966]: I0114 06:37:50.309505 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/217872a4-2508-46c6-a68b-d9c0e654e8b7-config\") pod \"goldmane-666569f655-kv7ql\" (UID: \"217872a4-2508-46c6-a68b-d9c0e654e8b7\") " pod="calico-system/goldmane-666569f655-kv7ql" Jan 14 06:37:50.312906 kubelet[2966]: I0114 06:37:50.312422 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/217872a4-2508-46c6-a68b-d9c0e654e8b7-goldmane-ca-bundle\") pod \"goldmane-666569f655-kv7ql\" (UID: \"217872a4-2508-46c6-a68b-d9c0e654e8b7\") " pod="calico-system/goldmane-666569f655-kv7ql" Jan 14 06:37:50.317489 kubelet[2966]: W0114 06:37:50.317346 2966 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-2u6n8.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-2u6n8.gb1.brightbox.com' and this object Jan 14 06:37:50.322153 kubelet[2966]: I0114 06:37:50.320334 2966 status_manager.go:890] "Failed to get status for pod" podUID="1d15520d-da28-4e66-86ea-0828797c7224" pod="calico-system/whisker-75ff6db5b4-gqqh4" err="pods \"whisker-75ff6db5b4-gqqh4\" is forbidden: User \"system:node:srv-2u6n8.gb1.brightbox.com\" cannot get resource \"pods\" in API 
group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-2u6n8.gb1.brightbox.com' and this object" Jan 14 06:37:50.324105 kubelet[2966]: I0114 06:37:50.321618 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79tgx\" (UniqueName: \"kubernetes.io/projected/fcd4f250-b1e6-467c-90cd-24e53dcbe8e8-kube-api-access-79tgx\") pod \"calico-kube-controllers-745df5bdfc-85fpc\" (UID: \"fcd4f250-b1e6-467c-90cd-24e53dcbe8e8\") " pod="calico-system/calico-kube-controllers-745df5bdfc-85fpc" Jan 14 06:37:50.324428 kubelet[2966]: I0114 06:37:50.324196 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djwc9\" (UniqueName: \"kubernetes.io/projected/1d15520d-da28-4e66-86ea-0828797c7224-kube-api-access-djwc9\") pod \"whisker-75ff6db5b4-gqqh4\" (UID: \"1d15520d-da28-4e66-86ea-0828797c7224\") " pod="calico-system/whisker-75ff6db5b4-gqqh4" Jan 14 06:37:50.324428 kubelet[2966]: I0114 06:37:50.324338 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d15520d-da28-4e66-86ea-0828797c7224-whisker-backend-key-pair\") pod \"whisker-75ff6db5b4-gqqh4\" (UID: \"1d15520d-da28-4e66-86ea-0828797c7224\") " pod="calico-system/whisker-75ff6db5b4-gqqh4" Jan 14 06:37:50.324982 kubelet[2966]: I0114 06:37:50.324492 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlfxg\" (UniqueName: \"kubernetes.io/projected/f55e8add-9675-49f7-8240-772692184a74-kube-api-access-wlfxg\") pod \"coredns-668d6bf9bc-l5sdz\" (UID: \"f55e8add-9675-49f7-8240-772692184a74\") " pod="kube-system/coredns-668d6bf9bc-l5sdz" Jan 14 06:37:50.324982 kubelet[2966]: I0114 06:37:50.324665 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd75s\" (UniqueName: \"kubernetes.io/projected/d6749f8c-3427-433c-a8c4-8f87f70b4d79-kube-api-access-jd75s\") pod \"calico-apiserver-7bddfbd4b9-ps4c2\" (UID: \"d6749f8c-3427-433c-a8c4-8f87f70b4d79\") " pod="calico-apiserver/calico-apiserver-7bddfbd4b9-ps4c2" Jan 14 06:37:50.324982 kubelet[2966]: I0114 06:37:50.324873 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/856b7d39-73f5-4a90-838f-5cffdb6afeaf-config-volume\") pod \"coredns-668d6bf9bc-qw9xh\" (UID: \"856b7d39-73f5-4a90-838f-5cffdb6afeaf\") " pod="kube-system/coredns-668d6bf9bc-qw9xh" Jan 14 06:37:50.325583 kubelet[2966]: I0114 06:37:50.325159 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d15520d-da28-4e66-86ea-0828797c7224-whisker-ca-bundle\") pod \"whisker-75ff6db5b4-gqqh4\" (UID: \"1d15520d-da28-4e66-86ea-0828797c7224\") " pod="calico-system/whisker-75ff6db5b4-gqqh4" Jan 14 06:37:50.325583 kubelet[2966]: I0114 06:37:50.325484 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f55e8add-9675-49f7-8240-772692184a74-config-volume\") pod \"coredns-668d6bf9bc-l5sdz\" (UID: \"f55e8add-9675-49f7-8240-772692184a74\") " pod="kube-system/coredns-668d6bf9bc-l5sdz" Jan 14 06:37:50.326590 kubelet[2966]: I0114 06:37:50.325527 2966 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a8d8745b-48bb-4a89-9b0b-07086983dbe4-calico-apiserver-certs\") pod \"calico-apiserver-7bddfbd4b9-rswq7\" (UID: \"a8d8745b-48bb-4a89-9b0b-07086983dbe4\") " pod="calico-apiserver/calico-apiserver-7bddfbd4b9-rswq7" Jan 14 06:37:50.326590 kubelet[2966]: I0114 06:37:50.325690 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d6749f8c-3427-433c-a8c4-8f87f70b4d79-calico-apiserver-certs\") pod \"calico-apiserver-7bddfbd4b9-ps4c2\" (UID: \"d6749f8c-3427-433c-a8c4-8f87f70b4d79\") " pod="calico-apiserver/calico-apiserver-7bddfbd4b9-ps4c2" Jan 14 06:37:50.326590 kubelet[2966]: I0114 06:37:50.326038 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tknhx\" (UniqueName: \"kubernetes.io/projected/856b7d39-73f5-4a90-838f-5cffdb6afeaf-kube-api-access-tknhx\") pod \"coredns-668d6bf9bc-qw9xh\" (UID: \"856b7d39-73f5-4a90-838f-5cffdb6afeaf\") " pod="kube-system/coredns-668d6bf9bc-qw9xh" Jan 14 06:37:50.326590 kubelet[2966]: I0114 06:37:50.326432 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/217872a4-2508-46c6-a68b-d9c0e654e8b7-goldmane-key-pair\") pod \"goldmane-666569f655-kv7ql\" (UID: \"217872a4-2508-46c6-a68b-d9c0e654e8b7\") " pod="calico-system/goldmane-666569f655-kv7ql" Jan 14 06:37:50.328875 kubelet[2966]: I0114 06:37:50.328771 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27lgr\" (UniqueName: \"kubernetes.io/projected/217872a4-2508-46c6-a68b-d9c0e654e8b7-kube-api-access-27lgr\") pod \"goldmane-666569f655-kv7ql\" (UID: \"217872a4-2508-46c6-a68b-d9c0e654e8b7\") " pod="calico-system/goldmane-666569f655-kv7ql" Jan 14 06:37:50.329637 kubelet[2966]: I0114 06:37:50.329033 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcd4f250-b1e6-467c-90cd-24e53dcbe8e8-tigera-ca-bundle\") pod \"calico-kube-controllers-745df5bdfc-85fpc\" (UID: \"fcd4f250-b1e6-467c-90cd-24e53dcbe8e8\") " pod="calico-system/calico-kube-controllers-745df5bdfc-85fpc" Jan 14 06:37:50.329195 systemd[1]: Created slice kubepods-besteffort-pod1d15520d_da28_4e66_86ea_0828797c7224.slice - libcontainer container kubepods-besteffort-pod1d15520d_da28_4e66_86ea_0828797c7224.slice. 
Jan 14 06:37:50.330336 kubelet[2966]: W0114 06:37:50.319052 2966 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:srv-2u6n8.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-2u6n8.gb1.brightbox.com' and this object Jan 14 06:37:50.330336 kubelet[2966]: E0114 06:37:50.330227 2966 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:srv-2u6n8.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-2u6n8.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 14 06:37:50.330640 kubelet[2966]: W0114 06:37:50.319117 2966 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:srv-2u6n8.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'srv-2u6n8.gb1.brightbox.com' and this object Jan 14 06:37:50.330640 kubelet[2966]: E0114 06:37:50.327236 2966 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-2u6n8.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'srv-2u6n8.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 14 06:37:50.330892 kubelet[2966]: W0114 06:37:50.319188 2966 reflector.go:569] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:srv-2u6n8.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-2u6n8.gb1.brightbox.com' and this object Jan 14 06:37:50.331310 kubelet[2966]: E0114 06:37:50.331041 2966 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:srv-2u6n8.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-2u6n8.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 14 06:37:50.331310 kubelet[2966]: W0114 06:37:50.319255 2966 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:srv-2u6n8.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-2u6n8.gb1.brightbox.com' and this object Jan 14 06:37:50.331662 kubelet[2966]: E0114 06:37:50.330558 2966 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:srv-2u6n8.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship 
found between node 'srv-2u6n8.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 14 06:37:50.331833 kubelet[2966]: E0114 06:37:50.331268 2966 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:srv-2u6n8.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'srv-2u6n8.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 14 06:37:50.331833 kubelet[2966]: W0114 06:37:50.319762 2966 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:srv-2u6n8.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'srv-2u6n8.gb1.brightbox.com' and this object Jan 14 06:37:50.333107 kubelet[2966]: E0114 06:37:50.331969 2966 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:srv-2u6n8.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-2u6n8.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 14 06:37:50.334062 kubelet[2966]: W0114 06:37:50.320825 2966 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:srv-2u6n8.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-2u6n8.gb1.brightbox.com' and this object Jan 14 06:37:50.334062 kubelet[2966]: E0114 06:37:50.334023 2966 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:srv-2u6n8.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-2u6n8.gb1.brightbox.com' and this object" logger="UnhandledError" Jan 14 06:37:50.358377 systemd[1]: Created slice kubepods-besteffort-poda8d8745b_48bb_4a89_9b0b_07086983dbe4.slice - libcontainer container kubepods-besteffort-poda8d8745b_48bb_4a89_9b0b_07086983dbe4.slice. Jan 14 06:37:50.384359 systemd[1]: Created slice kubepods-besteffort-pod217872a4_2508_46c6_a68b_d9c0e654e8b7.slice - libcontainer container kubepods-besteffort-pod217872a4_2508_46c6_a68b_d9c0e654e8b7.slice. 
Jan 14 06:37:50.546891 containerd[1642]: time="2026-01-14T06:37:50.546829673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l5sdz,Uid:f55e8add-9675-49f7-8240-772692184a74,Namespace:kube-system,Attempt:0,}" Jan 14 06:37:50.570495 containerd[1642]: time="2026-01-14T06:37:50.569145045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qw9xh,Uid:856b7d39-73f5-4a90-838f-5cffdb6afeaf,Namespace:kube-system,Attempt:0,}" Jan 14 06:37:50.660754 containerd[1642]: time="2026-01-14T06:37:50.660695294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745df5bdfc-85fpc,Uid:fcd4f250-b1e6-467c-90cd-24e53dcbe8e8,Namespace:calico-system,Attempt:0,}" Jan 14 06:37:50.678369 systemd[1]: Created slice kubepods-besteffort-poda91d79c6_e300_47ea_a44e_e654a57c8864.slice - libcontainer container kubepods-besteffort-poda91d79c6_e300_47ea_a44e_e654a57c8864.slice. Jan 14 06:37:50.693509 containerd[1642]: time="2026-01-14T06:37:50.693431370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8q82z,Uid:a91d79c6-e300-47ea-a44e-e654a57c8864,Namespace:calico-system,Attempt:0,}" Jan 14 06:37:50.883701 containerd[1642]: time="2026-01-14T06:37:50.883470075Z" level=error msg="Failed to destroy network for sandbox \"8de734a529fe20ffcb3a564615ea9e3c5ce2be79d0b0dac24804088c7cf77223\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:50.890711 containerd[1642]: time="2026-01-14T06:37:50.890632487Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qw9xh,Uid:856b7d39-73f5-4a90-838f-5cffdb6afeaf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8de734a529fe20ffcb3a564615ea9e3c5ce2be79d0b0dac24804088c7cf77223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:50.892882 kubelet[2966]: E0114 06:37:50.892800 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8de734a529fe20ffcb3a564615ea9e3c5ce2be79d0b0dac24804088c7cf77223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:50.893206 kubelet[2966]: E0114 06:37:50.893094 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8de734a529fe20ffcb3a564615ea9e3c5ce2be79d0b0dac24804088c7cf77223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qw9xh" Jan 14 06:37:50.894984 kubelet[2966]: E0114 06:37:50.894942 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8de734a529fe20ffcb3a564615ea9e3c5ce2be79d0b0dac24804088c7cf77223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-qw9xh" Jan 14 06:37:50.895223 kubelet[2966]: E0114 06:37:50.895171 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qw9xh_kube-system(856b7d39-73f5-4a90-838f-5cffdb6afeaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qw9xh_kube-system(856b7d39-73f5-4a90-838f-5cffdb6afeaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8de734a529fe20ffcb3a564615ea9e3c5ce2be79d0b0dac24804088c7cf77223\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qw9xh" podUID="856b7d39-73f5-4a90-838f-5cffdb6afeaf" Jan 14 06:37:50.906668 containerd[1642]: time="2026-01-14T06:37:50.906599658Z" level=error msg="Failed to destroy network for sandbox \"3364d70d95cba736cca8ae8c5410c6387b54cadbe5f108cc3963b41728da0f3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:50.909330 containerd[1642]: time="2026-01-14T06:37:50.909232947Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745df5bdfc-85fpc,Uid:fcd4f250-b1e6-467c-90cd-24e53dcbe8e8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3364d70d95cba736cca8ae8c5410c6387b54cadbe5f108cc3963b41728da0f3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:50.910329 kubelet[2966]: E0114 06:37:50.909918 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3364d70d95cba736cca8ae8c5410c6387b54cadbe5f108cc3963b41728da0f3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:50.910329 kubelet[2966]: E0114 06:37:50.910026 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3364d70d95cba736cca8ae8c5410c6387b54cadbe5f108cc3963b41728da0f3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-745df5bdfc-85fpc" Jan 14 06:37:50.910329 kubelet[2966]: E0114 06:37:50.910060 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3364d70d95cba736cca8ae8c5410c6387b54cadbe5f108cc3963b41728da0f3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-745df5bdfc-85fpc" Jan 14 06:37:50.910582 kubelet[2966]: E0114 06:37:50.910126 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-745df5bdfc-85fpc_calico-system(fcd4f250-b1e6-467c-90cd-24e53dcbe8e8)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-745df5bdfc-85fpc_calico-system(fcd4f250-b1e6-467c-90cd-24e53dcbe8e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3364d70d95cba736cca8ae8c5410c6387b54cadbe5f108cc3963b41728da0f3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-745df5bdfc-85fpc" podUID="fcd4f250-b1e6-467c-90cd-24e53dcbe8e8" Jan 14 06:37:50.914704 containerd[1642]: time="2026-01-14T06:37:50.914651346Z" level=error msg="Failed to destroy network for sandbox \"8bb4248b953f06f2791bab2c2ca7955df6949ba25c7e7bce445b370a475eefc5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:50.917102 containerd[1642]: time="2026-01-14T06:37:50.916928592Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l5sdz,Uid:f55e8add-9675-49f7-8240-772692184a74,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bb4248b953f06f2791bab2c2ca7955df6949ba25c7e7bce445b370a475eefc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:50.920480 kubelet[2966]: E0114 06:37:50.918621 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bb4248b953f06f2791bab2c2ca7955df6949ba25c7e7bce445b370a475eefc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:50.920480 kubelet[2966]: E0114 06:37:50.919369 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bb4248b953f06f2791bab2c2ca7955df6949ba25c7e7bce445b370a475eefc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l5sdz" Jan 14 06:37:50.920480 kubelet[2966]: E0114 06:37:50.919415 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bb4248b953f06f2791bab2c2ca7955df6949ba25c7e7bce445b370a475eefc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l5sdz" Jan 14 06:37:50.920708 kubelet[2966]: E0114 06:37:50.919482 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-l5sdz_kube-system(f55e8add-9675-49f7-8240-772692184a74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-l5sdz_kube-system(f55e8add-9675-49f7-8240-772692184a74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bb4248b953f06f2791bab2c2ca7955df6949ba25c7e7bce445b370a475eefc5\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-l5sdz" podUID="f55e8add-9675-49f7-8240-772692184a74" Jan 14 06:37:50.923855 containerd[1642]: time="2026-01-14T06:37:50.923810158Z" level=error msg="Failed to destroy network for sandbox \"a38f06fef4eca327ad671e6e2dae418f7238f1e1509e82df2aa806b0f6a8adb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:50.926900 containerd[1642]: time="2026-01-14T06:37:50.926599634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8q82z,Uid:a91d79c6-e300-47ea-a44e-e654a57c8864,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a38f06fef4eca327ad671e6e2dae418f7238f1e1509e82df2aa806b0f6a8adb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:50.927471 kubelet[2966]: E0114 06:37:50.927242 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a38f06fef4eca327ad671e6e2dae418f7238f1e1509e82df2aa806b0f6a8adb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:50.927471 kubelet[2966]: E0114 06:37:50.927360 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a38f06fef4eca327ad671e6e2dae418f7238f1e1509e82df2aa806b0f6a8adb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8q82z" Jan 14 06:37:50.927471 kubelet[2966]: E0114 06:37:50.927411 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a38f06fef4eca327ad671e6e2dae418f7238f1e1509e82df2aa806b0f6a8adb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8q82z" Jan 14 06:37:50.927645 kubelet[2966]: E0114 06:37:50.927470 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8q82z_calico-system(a91d79c6-e300-47ea-a44e-e654a57c8864)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8q82z_calico-system(a91d79c6-e300-47ea-a44e-e654a57c8864)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a38f06fef4eca327ad671e6e2dae418f7238f1e1509e82df2aa806b0f6a8adb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8q82z" podUID="a91d79c6-e300-47ea-a44e-e654a57c8864" Jan 14 06:37:50.966413 containerd[1642]: time="2026-01-14T06:37:50.966327878Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 06:37:51.434439 kubelet[2966]: E0114 06:37:51.432958 2966 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 14 06:37:51.434439 kubelet[2966]: E0114 06:37:51.433148 2966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d15520d-da28-4e66-86ea-0828797c7224-whisker-ca-bundle podName:1d15520d-da28-4e66-86ea-0828797c7224 nodeName:}" failed. No retries permitted until 2026-01-14 06:37:51.933098126 +0000 UTC m=+42.538137213 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/1d15520d-da28-4e66-86ea-0828797c7224-whisker-ca-bundle") pod "whisker-75ff6db5b4-gqqh4" (UID: "1d15520d-da28-4e66-86ea-0828797c7224") : failed to sync configmap cache: timed out waiting for the condition Jan 14 06:37:51.434439 kubelet[2966]: E0114 06:37:51.433580 2966 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Jan 14 06:37:51.434439 kubelet[2966]: E0114 06:37:51.434375 2966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d15520d-da28-4e66-86ea-0828797c7224-whisker-backend-key-pair podName:1d15520d-da28-4e66-86ea-0828797c7224 nodeName:}" failed. No retries permitted until 2026-01-14 06:37:51.934348909 +0000 UTC m=+42.539387990 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/1d15520d-da28-4e66-86ea-0828797c7224-whisker-backend-key-pair") pod "whisker-75ff6db5b4-gqqh4" (UID: "1d15520d-da28-4e66-86ea-0828797c7224") : failed to sync secret cache: timed out waiting for the condition Jan 14 06:37:51.435876 kubelet[2966]: E0114 06:37:51.435446 2966 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 14 06:37:51.435876 kubelet[2966]: E0114 06:37:51.435500 2966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6749f8c-3427-433c-a8c4-8f87f70b4d79-calico-apiserver-certs podName:d6749f8c-3427-433c-a8c4-8f87f70b4d79 nodeName:}" failed. No retries permitted until 2026-01-14 06:37:51.935486769 +0000 UTC m=+42.540525845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/d6749f8c-3427-433c-a8c4-8f87f70b4d79-calico-apiserver-certs") pod "calico-apiserver-7bddfbd4b9-ps4c2" (UID: "d6749f8c-3427-433c-a8c4-8f87f70b4d79") : failed to sync secret cache: timed out waiting for the condition Jan 14 06:37:51.435876 kubelet[2966]: E0114 06:37:51.435536 2966 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 14 06:37:51.435876 kubelet[2966]: E0114 06:37:51.435622 2966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8d8745b-48bb-4a89-9b0b-07086983dbe4-calico-apiserver-certs podName:a8d8745b-48bb-4a89-9b0b-07086983dbe4 nodeName:}" failed. No retries permitted until 2026-01-14 06:37:51.935609204 +0000 UTC m=+42.540648279 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/a8d8745b-48bb-4a89-9b0b-07086983dbe4-calico-apiserver-certs") pod "calico-apiserver-7bddfbd4b9-rswq7" (UID: "a8d8745b-48bb-4a89-9b0b-07086983dbe4") : failed to sync secret cache: timed out waiting for the condition Jan 14 06:37:51.471901 kubelet[2966]: E0114 06:37:51.471708 2966 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 14 06:37:51.471901 kubelet[2966]: E0114 06:37:51.471780 2966 projected.go:194] Error preparing data for projected volume kube-api-access-jd75s for pod calico-apiserver/calico-apiserver-7bddfbd4b9-ps4c2: failed to sync configmap cache: timed out waiting for the condition Jan 14 06:37:51.472609 kubelet[2966]: E0114 06:37:51.472563 2966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d6749f8c-3427-433c-a8c4-8f87f70b4d79-kube-api-access-jd75s podName:d6749f8c-3427-433c-a8c4-8f87f70b4d79 nodeName:}" failed. No retries permitted until 2026-01-14 06:37:51.971910798 +0000 UTC m=+42.576949872 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jd75s" (UniqueName: "kubernetes.io/projected/d6749f8c-3427-433c-a8c4-8f87f70b4d79-kube-api-access-jd75s") pod "calico-apiserver-7bddfbd4b9-ps4c2" (UID: "d6749f8c-3427-433c-a8c4-8f87f70b4d79") : failed to sync configmap cache: timed out waiting for the condition Jan 14 06:37:51.485925 kubelet[2966]: E0114 06:37:51.485811 2966 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 14 06:37:51.486485 kubelet[2966]: E0114 06:37:51.486249 2966 projected.go:194] Error preparing data for projected volume kube-api-access-h74t2 for pod calico-apiserver/calico-apiserver-7bddfbd4b9-rswq7: failed to sync configmap cache: timed out waiting for the condition Jan 14 06:37:51.486485 kubelet[2966]: E0114 06:37:51.486421 2966 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a8d8745b-48bb-4a89-9b0b-07086983dbe4-kube-api-access-h74t2 podName:a8d8745b-48bb-4a89-9b0b-07086983dbe4 nodeName:}" failed. No retries permitted until 2026-01-14 06:37:51.986390302 +0000 UTC m=+42.591429396 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-h74t2" (UniqueName: "kubernetes.io/projected/a8d8745b-48bb-4a89-9b0b-07086983dbe4-kube-api-access-h74t2") pod "calico-apiserver-7bddfbd4b9-rswq7" (UID: "a8d8745b-48bb-4a89-9b0b-07086983dbe4") : failed to sync configmap cache: timed out waiting for the condition Jan 14 06:37:51.592798 containerd[1642]: time="2026-01-14T06:37:51.592674874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kv7ql,Uid:217872a4-2508-46c6-a68b-d9c0e654e8b7,Namespace:calico-system,Attempt:0,}" Jan 14 06:37:51.674719 containerd[1642]: time="2026-01-14T06:37:51.674620539Z" level=error msg="Failed to destroy network for sandbox \"10415b59a6328be1d6428594ece3ca62d875f12ab475db2c28f9342f83122122\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:51.678093 containerd[1642]: time="2026-01-14T06:37:51.677905908Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kv7ql,Uid:217872a4-2508-46c6-a68b-d9c0e654e8b7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"10415b59a6328be1d6428594ece3ca62d875f12ab475db2c28f9342f83122122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:51.679495 systemd[1]: run-netns-cni\x2d8696104e\x2d8683\x2d0a9a\x2d6426\x2dea3276237dad.mount: Deactivated successfully. Jan 14 06:37:51.681893 kubelet[2966]: E0114 06:37:51.681026 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10415b59a6328be1d6428594ece3ca62d875f12ab475db2c28f9342f83122122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:51.681893 kubelet[2966]: E0114 06:37:51.681429 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10415b59a6328be1d6428594ece3ca62d875f12ab475db2c28f9342f83122122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-kv7ql" Jan 14 06:37:51.681893 kubelet[2966]: E0114 06:37:51.681500 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10415b59a6328be1d6428594ece3ca62d875f12ab475db2c28f9342f83122122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-kv7ql" Jan 14 06:37:51.682402 kubelet[2966]: E0114 06:37:51.681609 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-kv7ql_calico-system(217872a4-2508-46c6-a68b-d9c0e654e8b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-kv7ql_calico-system(217872a4-2508-46c6-a68b-d9c0e654e8b7)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"10415b59a6328be1d6428594ece3ca62d875f12ab475db2c28f9342f83122122\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-kv7ql" podUID="217872a4-2508-46c6-a68b-d9c0e654e8b7" Jan 14 06:37:52.086491 containerd[1642]: time="2026-01-14T06:37:52.086419391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bddfbd4b9-ps4c2,Uid:d6749f8c-3427-433c-a8c4-8f87f70b4d79,Namespace:calico-apiserver,Attempt:0,}" Jan 14 06:37:52.143709 containerd[1642]: time="2026-01-14T06:37:52.143418509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75ff6db5b4-gqqh4,Uid:1d15520d-da28-4e66-86ea-0828797c7224,Namespace:calico-system,Attempt:0,}" Jan 14 06:37:52.173750 containerd[1642]: time="2026-01-14T06:37:52.173695198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bddfbd4b9-rswq7,Uid:a8d8745b-48bb-4a89-9b0b-07086983dbe4,Namespace:calico-apiserver,Attempt:0,}" Jan 14 06:37:52.205792 containerd[1642]: time="2026-01-14T06:37:52.205688992Z" level=error msg="Failed to destroy network for sandbox \"c4096a04a9f345f7a600e5f10e6a407db4ddb8020ce39f58222ee0188a6a6c9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:52.209712 systemd[1]: run-netns-cni\x2da7987d23\x2ddb5c\x2dbf06\x2d490a\x2db864f43b27eb.mount: Deactivated successfully. Jan 14 06:37:52.213367 containerd[1642]: time="2026-01-14T06:37:52.213319120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bddfbd4b9-ps4c2,Uid:d6749f8c-3427-433c-a8c4-8f87f70b4d79,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4096a04a9f345f7a600e5f10e6a407db4ddb8020ce39f58222ee0188a6a6c9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:52.214111 kubelet[2966]: E0114 06:37:52.213985 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4096a04a9f345f7a600e5f10e6a407db4ddb8020ce39f58222ee0188a6a6c9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:52.214111 kubelet[2966]: E0114 06:37:52.214081 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4096a04a9f345f7a600e5f10e6a407db4ddb8020ce39f58222ee0188a6a6c9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-ps4c2" Jan 14 06:37:52.214401 kubelet[2966]: E0114 06:37:52.214127 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4096a04a9f345f7a600e5f10e6a407db4ddb8020ce39f58222ee0188a6a6c9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-ps4c2" Jan 14 06:37:52.214401 kubelet[2966]: E0114 06:37:52.214200 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bddfbd4b9-ps4c2_calico-apiserver(d6749f8c-3427-433c-a8c4-8f87f70b4d79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bddfbd4b9-ps4c2_calico-apiserver(d6749f8c-3427-433c-a8c4-8f87f70b4d79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4096a04a9f345f7a600e5f10e6a407db4ddb8020ce39f58222ee0188a6a6c9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-ps4c2" podUID="d6749f8c-3427-433c-a8c4-8f87f70b4d79" Jan 14 06:37:52.267532 containerd[1642]: time="2026-01-14T06:37:52.267226308Z" level=error msg="Failed to destroy network for sandbox \"861e9fdd028d962cf9932896432b77dd48b2329c40490267120bb624521763f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:52.271988 systemd[1]: run-netns-cni\x2dabf77497\x2d9fb5\x2de64f\x2d300b\x2d29bf83db53f7.mount: Deactivated successfully. Jan 14 06:37:52.273305 containerd[1642]: time="2026-01-14T06:37:52.272364853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75ff6db5b4-gqqh4,Uid:1d15520d-da28-4e66-86ea-0828797c7224,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"861e9fdd028d962cf9932896432b77dd48b2329c40490267120bb624521763f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:52.273470 kubelet[2966]: E0114 06:37:52.272751 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"861e9fdd028d962cf9932896432b77dd48b2329c40490267120bb624521763f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:52.273470 kubelet[2966]: E0114 06:37:52.272855 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"861e9fdd028d962cf9932896432b77dd48b2329c40490267120bb624521763f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75ff6db5b4-gqqh4" Jan 14 06:37:52.273470 kubelet[2966]: E0114 06:37:52.272888 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"861e9fdd028d962cf9932896432b77dd48b2329c40490267120bb624521763f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75ff6db5b4-gqqh4" Jan 14 06:37:52.273619 kubelet[2966]: E0114 06:37:52.272985 
2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75ff6db5b4-gqqh4_calico-system(1d15520d-da28-4e66-86ea-0828797c7224)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75ff6db5b4-gqqh4_calico-system(1d15520d-da28-4e66-86ea-0828797c7224)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"861e9fdd028d962cf9932896432b77dd48b2329c40490267120bb624521763f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75ff6db5b4-gqqh4" podUID="1d15520d-da28-4e66-86ea-0828797c7224" Jan 14 06:37:52.294244 containerd[1642]: time="2026-01-14T06:37:52.294174991Z" level=error msg="Failed to destroy network for sandbox \"f2d01b82fc76fe69498f425adef05c7086ef4655fd5ad6ac36bae7b62a065c0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:52.296983 containerd[1642]: time="2026-01-14T06:37:52.296915255Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bddfbd4b9-rswq7,Uid:a8d8745b-48bb-4a89-9b0b-07086983dbe4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2d01b82fc76fe69498f425adef05c7086ef4655fd5ad6ac36bae7b62a065c0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:52.297416 kubelet[2966]: E0114 06:37:52.297356 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2d01b82fc76fe69498f425adef05c7086ef4655fd5ad6ac36bae7b62a065c0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:37:52.297570 kubelet[2966]: E0114 06:37:52.297458 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2d01b82fc76fe69498f425adef05c7086ef4655fd5ad6ac36bae7b62a065c0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-rswq7" Jan 14 06:37:52.297570 kubelet[2966]: E0114 06:37:52.297492 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2d01b82fc76fe69498f425adef05c7086ef4655fd5ad6ac36bae7b62a065c0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-rswq7" Jan 14 06:37:52.297789 kubelet[2966]: E0114 06:37:52.297575 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bddfbd4b9-rswq7_calico-apiserver(a8d8745b-48bb-4a89-9b0b-07086983dbe4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7bddfbd4b9-rswq7_calico-apiserver(a8d8745b-48bb-4a89-9b0b-07086983dbe4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2d01b82fc76fe69498f425adef05c7086ef4655fd5ad6ac36bae7b62a065c0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-rswq7" podUID="a8d8745b-48bb-4a89-9b0b-07086983dbe4" Jan 14 06:37:53.141985 systemd[1]: run-netns-cni\x2d1ad4902b\x2d6f8f\x2d8001\x2d12fd\x2d80afb48d111a.mount: Deactivated successfully. Jan 14 06:37:53.414931 systemd[1]: Started sshd@12-10.230.41.14:22-64.225.73.213:59746.service - OpenSSH per-connection server daemon (64.225.73.213:59746). Jan 14 06:37:53.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.41.14:22-64.225.73.213:59746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:37:53.579841 sshd[3980]: Invalid user postgres from 64.225.73.213 port 59746 Jan 14 06:37:53.677000 audit[3980]: USER_ERR pid=3980 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:37:53.679389 sshd[3980]: Connection closed by invalid user postgres 64.225.73.213 port 59746 [preauth] Jan 14 06:37:53.682931 systemd[1]: sshd@12-10.230.41.14:22-64.225.73.213:59746.service: Deactivated successfully. Jan 14 06:37:53.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.41.14:22-64.225.73.213:59746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:37:59.959636 kubelet[2966]: I0114 06:37:59.959484 2966 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 06:38:00.126000 audit[3990]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3990 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:00.138507 kernel: kauditd_printk_skb: 9 callbacks suppressed Jan 14 06:38:00.138768 kernel: audit: type=1325 audit(1768372680.126:583): table=filter:119 family=2 entries=21 op=nft_register_rule pid=3990 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:00.126000 audit[3990]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffde029260 a2=0 a3=7fffde02924c items=0 ppid=3073 pid=3990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:00.148306 kernel: audit: type=1300 audit(1768372680.126:583): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffde029260 a2=0 a3=7fffde02924c items=0 ppid=3073 pid=3990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:00.148420 kernel: audit: type=1327 audit(1768372680.126:583): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:00.126000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:00.143000 audit[3990]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3990 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:00.152316 kernel: audit: type=1325 audit(1768372680.143:584): table=nat:120 family=2 entries=19 op=nft_register_chain pid=3990 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:00.143000 audit[3990]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffde029260 a2=0 a3=7fffde02924c items=0 ppid=3073 pid=3990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:00.156065 kernel: audit: type=1300 audit(1768372680.143:584): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffde029260 a2=0 a3=7fffde02924c items=0 ppid=3073 pid=3990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:00.143000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:00.164299 kernel: audit: type=1327 audit(1768372680.143:584): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:01.667059 containerd[1642]: time="2026-01-14T06:38:01.666925269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745df5bdfc-85fpc,Uid:fcd4f250-b1e6-467c-90cd-24e53dcbe8e8,Namespace:calico-system,Attempt:0,}" Jan 14 06:38:01.864202 containerd[1642]: time="2026-01-14T06:38:01.863916717Z" level=error msg="Failed to destroy network for 
sandbox \"d3f3b3257966e4af35a5c3b5ab1f9c6be389fc64e01f69fd856eecfd790a33b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:38:01.868013 containerd[1642]: time="2026-01-14T06:38:01.867869465Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745df5bdfc-85fpc,Uid:fcd4f250-b1e6-467c-90cd-24e53dcbe8e8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f3b3257966e4af35a5c3b5ab1f9c6be389fc64e01f69fd856eecfd790a33b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:38:01.869524 systemd[1]: run-netns-cni\x2d30d6881e\x2dd628\x2d3bee\x2db5b6\x2dc8f06282ec38.mount: Deactivated successfully. Jan 14 06:38:01.874380 kubelet[2966]: E0114 06:38:01.869528 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f3b3257966e4af35a5c3b5ab1f9c6be389fc64e01f69fd856eecfd790a33b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:38:01.874380 kubelet[2966]: E0114 06:38:01.869654 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f3b3257966e4af35a5c3b5ab1f9c6be389fc64e01f69fd856eecfd790a33b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-745df5bdfc-85fpc" Jan 14 06:38:01.874380 kubelet[2966]: E0114 06:38:01.869702 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f3b3257966e4af35a5c3b5ab1f9c6be389fc64e01f69fd856eecfd790a33b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-745df5bdfc-85fpc" Jan 14 06:38:01.875043 kubelet[2966]: E0114 06:38:01.869795 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-745df5bdfc-85fpc_calico-system(fcd4f250-b1e6-467c-90cd-24e53dcbe8e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-745df5bdfc-85fpc_calico-system(fcd4f250-b1e6-467c-90cd-24e53dcbe8e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3f3b3257966e4af35a5c3b5ab1f9c6be389fc64e01f69fd856eecfd790a33b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-745df5bdfc-85fpc" podUID="fcd4f250-b1e6-467c-90cd-24e53dcbe8e8" Jan 14 06:38:02.665489 containerd[1642]: time="2026-01-14T06:38:02.665422344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l5sdz,Uid:f55e8add-9675-49f7-8240-772692184a74,Namespace:kube-system,Attempt:0,}" Jan 14 
06:38:02.794314 containerd[1642]: time="2026-01-14T06:38:02.793442897Z" level=error msg="Failed to destroy network for sandbox \"66de33af92b46d236076ed93ca524e1297052b40955dfa4482dba9a931755d93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:38:02.797762 systemd[1]: run-netns-cni\x2dc2126a4d\x2d04fd\x2da51c\x2d1c8c\x2dc7c38072f46b.mount: Deactivated successfully. Jan 14 06:38:02.800140 containerd[1642]: time="2026-01-14T06:38:02.800005229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l5sdz,Uid:f55e8add-9675-49f7-8240-772692184a74,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"66de33af92b46d236076ed93ca524e1297052b40955dfa4482dba9a931755d93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:38:02.801188 kubelet[2966]: E0114 06:38:02.801096 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66de33af92b46d236076ed93ca524e1297052b40955dfa4482dba9a931755d93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:38:02.801536 kubelet[2966]: E0114 06:38:02.801213 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66de33af92b46d236076ed93ca524e1297052b40955dfa4482dba9a931755d93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l5sdz" Jan 14 06:38:02.801536 kubelet[2966]: E0114 06:38:02.801248 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66de33af92b46d236076ed93ca524e1297052b40955dfa4482dba9a931755d93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l5sdz" Jan 14 06:38:02.801536 kubelet[2966]: E0114 06:38:02.801334 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-l5sdz_kube-system(f55e8add-9675-49f7-8240-772692184a74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-l5sdz_kube-system(f55e8add-9675-49f7-8240-772692184a74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66de33af92b46d236076ed93ca524e1297052b40955dfa4482dba9a931755d93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-l5sdz" podUID="f55e8add-9675-49f7-8240-772692184a74" Jan 14 06:38:03.665679 containerd[1642]: time="2026-01-14T06:38:03.665317536Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-75ff6db5b4-gqqh4,Uid:1d15520d-da28-4e66-86ea-0828797c7224,Namespace:calico-system,Attempt:0,}" Jan 14 06:38:03.683715 containerd[1642]: time="2026-01-14T06:38:03.681243034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bddfbd4b9-rswq7,Uid:a8d8745b-48bb-4a89-9b0b-07086983dbe4,Namespace:calico-apiserver,Attempt:0,}" Jan 14 06:38:03.879852 containerd[1642]: time="2026-01-14T06:38:03.879778026Z" level=error msg="Failed to destroy network for sandbox \"8eca275d372e4093958698ed31b12a3565cb5685d03d7dc624bb4777798a186b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:38:03.884636 systemd[1]: run-netns-cni\x2d1e87f862\x2db931\x2d6ff4\x2d529c\x2d940a6d3db02c.mount: Deactivated successfully. Jan 14 06:38:03.888205 containerd[1642]: time="2026-01-14T06:38:03.888133323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75ff6db5b4-gqqh4,Uid:1d15520d-da28-4e66-86ea-0828797c7224,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eca275d372e4093958698ed31b12a3565cb5685d03d7dc624bb4777798a186b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:38:03.889641 kubelet[2966]: E0114 06:38:03.888833 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eca275d372e4093958698ed31b12a3565cb5685d03d7dc624bb4777798a186b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:38:03.889641 kubelet[2966]: E0114 06:38:03.888925 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eca275d372e4093958698ed31b12a3565cb5685d03d7dc624bb4777798a186b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75ff6db5b4-gqqh4" Jan 14 06:38:03.889641 kubelet[2966]: E0114 06:38:03.888959 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eca275d372e4093958698ed31b12a3565cb5685d03d7dc624bb4777798a186b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75ff6db5b4-gqqh4" Jan 14 06:38:03.891245 kubelet[2966]: E0114 06:38:03.889033 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75ff6db5b4-gqqh4_calico-system(1d15520d-da28-4e66-86ea-0828797c7224)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75ff6db5b4-gqqh4_calico-system(1d15520d-da28-4e66-86ea-0828797c7224)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8eca275d372e4093958698ed31b12a3565cb5685d03d7dc624bb4777798a186b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75ff6db5b4-gqqh4" podUID="1d15520d-da28-4e66-86ea-0828797c7224" Jan 14 06:38:03.930225 containerd[1642]: time="2026-01-14T06:38:03.929683909Z" level=error msg="Failed to destroy network for sandbox \"db9e9524eedf9293c11e55e7f01ab4ee1204391123ceb2fc84676b53dcdeec5a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:38:03.937707 systemd[1]: run-netns-cni\x2df57b68a6\x2d4fa3\x2da7b0\x2d3d76\x2de33a35968b7e.mount: Deactivated successfully. Jan 14 06:38:03.939693 containerd[1642]: time="2026-01-14T06:38:03.939582892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bddfbd4b9-rswq7,Uid:a8d8745b-48bb-4a89-9b0b-07086983dbe4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"db9e9524eedf9293c11e55e7f01ab4ee1204391123ceb2fc84676b53dcdeec5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:38:03.941254 kubelet[2966]: E0114 06:38:03.941167 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db9e9524eedf9293c11e55e7f01ab4ee1204391123ceb2fc84676b53dcdeec5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:38:03.942182 kubelet[2966]: E0114 06:38:03.941543 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db9e9524eedf9293c11e55e7f01ab4ee1204391123ceb2fc84676b53dcdeec5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-rswq7" Jan 14 06:38:03.942182 kubelet[2966]: E0114 06:38:03.941586 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db9e9524eedf9293c11e55e7f01ab4ee1204391123ceb2fc84676b53dcdeec5a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-rswq7" Jan 14 06:38:03.942182 kubelet[2966]: E0114 06:38:03.941653 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bddfbd4b9-rswq7_calico-apiserver(a8d8745b-48bb-4a89-9b0b-07086983dbe4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bddfbd4b9-rswq7_calico-apiserver(a8d8745b-48bb-4a89-9b0b-07086983dbe4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db9e9524eedf9293c11e55e7f01ab4ee1204391123ceb2fc84676b53dcdeec5a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-rswq7" 
podUID="a8d8745b-48bb-4a89-9b0b-07086983dbe4" Jan 14 06:38:04.665682 containerd[1642]: time="2026-01-14T06:38:04.665477648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8q82z,Uid:a91d79c6-e300-47ea-a44e-e654a57c8864,Namespace:calico-system,Attempt:0,}" Jan 14 06:38:04.674660 containerd[1642]: time="2026-01-14T06:38:04.673477947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qw9xh,Uid:856b7d39-73f5-4a90-838f-5cffdb6afeaf,Namespace:kube-system,Attempt:0,}" Jan 14 06:38:04.715259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4090712187.mount: Deactivated successfully. Jan 14 06:38:04.789038 containerd[1642]: time="2026-01-14T06:38:04.788967878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:38:04.796117 containerd[1642]: time="2026-01-14T06:38:04.796025064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 14 06:38:04.822387 containerd[1642]: time="2026-01-14T06:38:04.821929866Z" level=error msg="Failed to destroy network for sandbox \"eb1a0fcd065c5d9c991defb4b95cad27f8e758bb9cc34d0b6d615ff08e22091f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:38:04.824907 containerd[1642]: time="2026-01-14T06:38:04.824762959Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:38:04.826404 containerd[1642]: time="2026-01-14T06:38:04.826362417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qw9xh,Uid:856b7d39-73f5-4a90-838f-5cffdb6afeaf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1a0fcd065c5d9c991defb4b95cad27f8e758bb9cc34d0b6d615ff08e22091f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:38:04.826963 kubelet[2966]: E0114 06:38:04.826837 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1a0fcd065c5d9c991defb4b95cad27f8e758bb9cc34d0b6d615ff08e22091f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:38:04.827055 kubelet[2966]: E0114 06:38:04.827005 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1a0fcd065c5d9c991defb4b95cad27f8e758bb9cc34d0b6d615ff08e22091f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qw9xh" Jan 14 06:38:04.827055 kubelet[2966]: E0114 06:38:04.827040 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb1a0fcd065c5d9c991defb4b95cad27f8e758bb9cc34d0b6d615ff08e22091f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qw9xh" Jan 14 06:38:04.827198 kubelet[2966]: E0114 06:38:04.827118 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qw9xh_kube-system(856b7d39-73f5-4a90-838f-5cffdb6afeaf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qw9xh_kube-system(856b7d39-73f5-4a90-838f-5cffdb6afeaf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb1a0fcd065c5d9c991defb4b95cad27f8e758bb9cc34d0b6d615ff08e22091f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qw9xh" podUID="856b7d39-73f5-4a90-838f-5cffdb6afeaf" Jan 14 06:38:04.829646 containerd[1642]: time="2026-01-14T06:38:04.829613353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:38:04.831563 containerd[1642]: time="2026-01-14T06:38:04.831026384Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 13.864614551s" Jan 14 06:38:04.831563 containerd[1642]: time="2026-01-14T06:38:04.831082927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 06:38:04.854136 containerd[1642]: time="2026-01-14T06:38:04.854068903Z" level=error msg="Failed to destroy network for sandbox \"4c4d770f6fc17ad7480ce53aaa1be1251bf88fb1b35a9d020df711ceca5e38a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:38:04.857632 containerd[1642]: time="2026-01-14T06:38:04.857520002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8q82z,Uid:a91d79c6-e300-47ea-a44e-e654a57c8864,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c4d770f6fc17ad7480ce53aaa1be1251bf88fb1b35a9d020df711ceca5e38a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:38:04.858131 kubelet[2966]: E0114 06:38:04.857986 2966 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c4d770f6fc17ad7480ce53aaa1be1251bf88fb1b35a9d020df711ceca5e38a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:38:04.858370 kubelet[2966]: E0114 06:38:04.858339 2966 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4c4d770f6fc17ad7480ce53aaa1be1251bf88fb1b35a9d020df711ceca5e38a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8q82z" Jan 14 06:38:04.858951 kubelet[2966]: E0114 06:38:04.858579 2966 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c4d770f6fc17ad7480ce53aaa1be1251bf88fb1b35a9d020df711ceca5e38a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8q82z" Jan 14 06:38:04.858951 kubelet[2966]: E0114 06:38:04.858692 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8q82z_calico-system(a91d79c6-e300-47ea-a44e-e654a57c8864)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8q82z_calico-system(a91d79c6-e300-47ea-a44e-e654a57c8864)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c4d770f6fc17ad7480ce53aaa1be1251bf88fb1b35a9d020df711ceca5e38a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8q82z" podUID="a91d79c6-e300-47ea-a44e-e654a57c8864" Jan 14 06:38:04.871598 containerd[1642]: time="2026-01-14T06:38:04.870837915Z" level=info msg="CreateContainer within sandbox \"73cfa2f5bab4878c0b4b77e6c334edb06ad6364fe9208c7a361144f00b8d3d6d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 06:38:04.884681 systemd[1]: run-netns-cni\x2d1ee8f3fc\x2dc374\x2d4e04\x2d1589\x2d9adf2863df47.mount: Deactivated successfully. Jan 14 06:38:04.885332 systemd[1]: run-netns-cni\x2d8312dd13\x2d660e\x2de03a\x2d31c6\x2d60ab2f800e75.mount: Deactivated successfully. Jan 14 06:38:04.934422 containerd[1642]: time="2026-01-14T06:38:04.929625727Z" level=info msg="Container 058f7b189314c86d59965f797ca9458f7870292e5dc75a83ce70368b16b4d65e: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:38:04.934217 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2885989857.mount: Deactivated successfully. Jan 14 06:38:04.952743 containerd[1642]: time="2026-01-14T06:38:04.952681231Z" level=info msg="CreateContainer within sandbox \"73cfa2f5bab4878c0b4b77e6c334edb06ad6364fe9208c7a361144f00b8d3d6d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"058f7b189314c86d59965f797ca9458f7870292e5dc75a83ce70368b16b4d65e\"" Jan 14 06:38:04.954909 containerd[1642]: time="2026-01-14T06:38:04.954047457Z" level=info msg="StartContainer for \"058f7b189314c86d59965f797ca9458f7870292e5dc75a83ce70368b16b4d65e\"" Jan 14 06:38:04.956758 containerd[1642]: time="2026-01-14T06:38:04.956726355Z" level=info msg="connecting to shim 058f7b189314c86d59965f797ca9458f7870292e5dc75a83ce70368b16b4d65e" address="unix:///run/containerd/s/3cf2694c95b84c7fafa246060c9313c44a32403f10245d2b8d9cc520439367c3" protocol=ttrpc version=3 Jan 14 06:38:05.138623 systemd[1]: Started cri-containerd-058f7b189314c86d59965f797ca9458f7870292e5dc75a83ce70368b16b4d65e.scope - libcontainer container 058f7b189314c86d59965f797ca9458f7870292e5dc75a83ce70368b16b4d65e. 
Jan 14 06:38:05.234000 audit: BPF prog-id=176 op=LOAD Jan 14 06:38:05.234000 audit[4153]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00020c488 a2=98 a3=0 items=0 ppid=3478 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:05.247407 kernel: audit: type=1334 audit(1768372685.234:585): prog-id=176 op=LOAD Jan 14 06:38:05.247543 kernel: audit: type=1300 audit(1768372685.234:585): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00020c488 a2=98 a3=0 items=0 ppid=3478 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:05.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035386637623138393331346338366435393936356637393763613934 Jan 14 06:38:05.252566 kernel: audit: type=1327 audit(1768372685.234:585): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035386637623138393331346338366435393936356637393763613934 Jan 14 06:38:05.234000 audit: BPF prog-id=177 op=LOAD Jan 14 06:38:05.256739 kernel: audit: type=1334 audit(1768372685.234:586): prog-id=177 op=LOAD Jan 14 06:38:05.234000 audit[4153]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00020c218 a2=98 a3=0 items=0 ppid=3478 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:05.259392 kernel: audit: type=1300 audit(1768372685.234:586): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00020c218 a2=98 a3=0 items=0 ppid=3478 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:05.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035386637623138393331346338366435393936356637393763613934 Jan 14 06:38:05.264640 kernel: audit: type=1327 audit(1768372685.234:586): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035386637623138393331346338366435393936356637393763613934 Jan 14 06:38:05.234000 audit: BPF prog-id=177 op=UNLOAD Jan 14 06:38:05.268730 kernel: audit: type=1334 audit(1768372685.234:587): prog-id=177 op=UNLOAD Jan 14 06:38:05.234000 audit[4153]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3478 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:05.271726 kernel: audit: type=1300 audit(1768372685.234:587): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 
items=0 ppid=3478 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:05.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035386637623138393331346338366435393936356637393763613934 Jan 14 06:38:05.282357 kernel: audit: type=1327 audit(1768372685.234:587): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035386637623138393331346338366435393936356637393763613934 Jan 14 06:38:05.235000 audit: BPF prog-id=176 op=UNLOAD Jan 14 06:38:05.235000 audit[4153]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3478 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:05.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035386637623138393331346338366435393936356637393763613934 Jan 14 06:38:05.235000 audit: BPF prog-id=178 op=LOAD Jan 14 06:38:05.235000 audit[4153]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00020c6e8 a2=98 a3=0 items=0 ppid=3478 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:05.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035386637623138393331346338366435393936356637393763613934 Jan 14 06:38:05.287306 kernel: audit: type=1334 audit(1768372685.235:588): prog-id=176 op=UNLOAD Jan 14 06:38:05.319170 containerd[1642]: time="2026-01-14T06:38:05.319101134Z" level=info msg="StartContainer for \"058f7b189314c86d59965f797ca9458f7870292e5dc75a83ce70368b16b4d65e\" returns successfully" Jan 14 06:38:05.733931 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 06:38:05.735193 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
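The PROCTITLE fields in the audit records above (and in the later runc records) are the hex-encoded, NUL-separated argv of the audited process, truncated by the kernel's audit length limit; decoded, the one above reads runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/058f7b189314c86d59965f797ca94…, i.e. runc acting on the calico-node container started above. A small Go sketch of the decoding (generic hex handling, nothing containerd-specific; pass the proctitle value as the first argument):

package main

import (
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: decode-proctitle <hex from an audit PROCTITLE record>")
		os.Exit(1)
	}
	raw, err := hex.DecodeString(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, "bad hex:", err)
		os.Exit(1)
	}
	// argv elements are NUL-separated inside the proctitle field; print them space-joined.
	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	fmt.Println(strings.Join(args, " "))
}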
Jan 14 06:38:06.267247 kubelet[2966]: I0114 06:38:06.262150 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-smm6n" podStartSLOduration=2.998557985 podStartE2EDuration="32.262025344s" podCreationTimestamp="2026-01-14 06:37:34 +0000 UTC" firstStartedPulling="2026-01-14 06:37:35.570184963 +0000 UTC m=+26.175224044" lastFinishedPulling="2026-01-14 06:38:04.833652323 +0000 UTC m=+55.438691403" observedRunningTime="2026-01-14 06:38:06.260649857 +0000 UTC m=+56.865688945" watchObservedRunningTime="2026-01-14 06:38:06.262025344 +0000 UTC m=+56.867064438" Jan 14 06:38:06.286301 kubelet[2966]: I0114 06:38:06.285625 2966 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d15520d-da28-4e66-86ea-0828797c7224-whisker-ca-bundle\") pod \"1d15520d-da28-4e66-86ea-0828797c7224\" (UID: \"1d15520d-da28-4e66-86ea-0828797c7224\") " Jan 14 06:38:06.286592 kubelet[2966]: I0114 06:38:06.286567 2966 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djwc9\" (UniqueName: \"kubernetes.io/projected/1d15520d-da28-4e66-86ea-0828797c7224-kube-api-access-djwc9\") pod \"1d15520d-da28-4e66-86ea-0828797c7224\" (UID: \"1d15520d-da28-4e66-86ea-0828797c7224\") " Jan 14 06:38:06.286717 kubelet[2966]: I0114 06:38:06.286695 2966 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d15520d-da28-4e66-86ea-0828797c7224-whisker-backend-key-pair\") pod \"1d15520d-da28-4e66-86ea-0828797c7224\" (UID: \"1d15520d-da28-4e66-86ea-0828797c7224\") " Jan 14 06:38:06.288576 kubelet[2966]: I0114 06:38:06.288521 2966 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d15520d-da28-4e66-86ea-0828797c7224-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1d15520d-da28-4e66-86ea-0828797c7224" (UID: "1d15520d-da28-4e66-86ea-0828797c7224"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 06:38:06.321378 systemd[1]: var-lib-kubelet-pods-1d15520d\x2dda28\x2d4e66\x2d86ea\x2d0828797c7224-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 06:38:06.328288 systemd[1]: var-lib-kubelet-pods-1d15520d\x2dda28\x2d4e66\x2d86ea\x2d0828797c7224-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddjwc9.mount: Deactivated successfully. Jan 14 06:38:06.331591 kubelet[2966]: I0114 06:38:06.331528 2966 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d15520d-da28-4e66-86ea-0828797c7224-kube-api-access-djwc9" (OuterVolumeSpecName: "kube-api-access-djwc9") pod "1d15520d-da28-4e66-86ea-0828797c7224" (UID: "1d15520d-da28-4e66-86ea-0828797c7224"). InnerVolumeSpecName "kube-api-access-djwc9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 06:38:06.337343 kubelet[2966]: I0114 06:38:06.337142 2966 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d15520d-da28-4e66-86ea-0828797c7224-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1d15520d-da28-4e66-86ea-0828797c7224" (UID: "1d15520d-da28-4e66-86ea-0828797c7224"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 06:38:06.387740 kubelet[2966]: I0114 06:38:06.387660 2966 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d15520d-da28-4e66-86ea-0828797c7224-whisker-ca-bundle\") on node \"srv-2u6n8.gb1.brightbox.com\" DevicePath \"\"" Jan 14 06:38:06.387740 kubelet[2966]: I0114 06:38:06.387729 2966 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1d15520d-da28-4e66-86ea-0828797c7224-whisker-backend-key-pair\") on node \"srv-2u6n8.gb1.brightbox.com\" DevicePath \"\"" Jan 14 06:38:06.387740 kubelet[2966]: I0114 06:38:06.387749 2966 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-djwc9\" (UniqueName: \"kubernetes.io/projected/1d15520d-da28-4e66-86ea-0828797c7224-kube-api-access-djwc9\") on node \"srv-2u6n8.gb1.brightbox.com\" DevicePath \"\"" Jan 14 06:38:06.668606 containerd[1642]: time="2026-01-14T06:38:06.667643965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kv7ql,Uid:217872a4-2508-46c6-a68b-d9c0e654e8b7,Namespace:calico-system,Attempt:0,}" Jan 14 06:38:07.060714 systemd[1]: Removed slice kubepods-besteffort-pod1d15520d_da28_4e66_86ea_0828797c7224.slice - libcontainer container kubepods-besteffort-pod1d15520d_da28_4e66_86ea_0828797c7224.slice. Jan 14 06:38:07.199334 systemd-networkd[1552]: cali5b87c4fb392: Link UP Jan 14 06:38:07.199760 systemd-networkd[1552]: cali5b87c4fb392: Gained carrier Jan 14 06:38:07.281975 containerd[1642]: 2026-01-14 06:38:06.769 [INFO][4233] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 06:38:07.281975 containerd[1642]: 2026-01-14 06:38:06.822 [INFO][4233] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--2u6n8.gb1.brightbox.com-k8s-goldmane--666569f655--kv7ql-eth0 goldmane-666569f655- calico-system 217872a4-2508-46c6-a68b-d9c0e654e8b7 860 0 2026-01-14 06:37:32 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-2u6n8.gb1.brightbox.com goldmane-666569f655-kv7ql eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5b87c4fb392 [] [] }} ContainerID="415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" Namespace="calico-system" Pod="goldmane-666569f655-kv7ql" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-goldmane--666569f655--kv7ql-" Jan 14 06:38:07.281975 containerd[1642]: 2026-01-14 06:38:06.822 [INFO][4233] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" Namespace="calico-system" Pod="goldmane-666569f655-kv7ql" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-goldmane--666569f655--kv7ql-eth0" Jan 14 06:38:07.281975 containerd[1642]: 2026-01-14 06:38:07.036 [INFO][4244] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" HandleID="k8s-pod-network.415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" Workload="srv--2u6n8.gb1.brightbox.com-k8s-goldmane--666569f655--kv7ql-eth0" Jan 14 06:38:07.282360 containerd[1642]: 2026-01-14 06:38:07.041 [INFO][4244] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" HandleID="k8s-pod-network.415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" Workload="srv--2u6n8.gb1.brightbox.com-k8s-goldmane--666569f655--kv7ql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003603a0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-2u6n8.gb1.brightbox.com", "pod":"goldmane-666569f655-kv7ql", "timestamp":"2026-01-14 06:38:07.036069446 +0000 UTC"}, Hostname:"srv-2u6n8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:38:07.282360 containerd[1642]: 2026-01-14 06:38:07.042 [INFO][4244] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:38:07.282360 containerd[1642]: 2026-01-14 06:38:07.045 [INFO][4244] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 06:38:07.282360 containerd[1642]: 2026-01-14 06:38:07.046 [INFO][4244] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-2u6n8.gb1.brightbox.com' Jan 14 06:38:07.282360 containerd[1642]: 2026-01-14 06:38:07.080 [INFO][4244] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:07.282360 containerd[1642]: 2026-01-14 06:38:07.104 [INFO][4244] ipam/ipam.go 394: Looking up existing affinities for host host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:07.282360 containerd[1642]: 2026-01-14 06:38:07.128 [INFO][4244] ipam/ipam.go 511: Trying affinity for 192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:07.282360 containerd[1642]: 2026-01-14 06:38:07.133 [INFO][4244] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:07.282360 containerd[1642]: 2026-01-14 06:38:07.138 [INFO][4244] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:07.282964 containerd[1642]: 2026-01-14 06:38:07.138 [INFO][4244] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.128/26 handle="k8s-pod-network.415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:07.282964 containerd[1642]: 2026-01-14 06:38:07.143 [INFO][4244] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173 Jan 14 06:38:07.282964 containerd[1642]: 2026-01-14 06:38:07.149 [INFO][4244] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.128/26 handle="k8s-pod-network.415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:07.282964 containerd[1642]: 2026-01-14 06:38:07.167 [INFO][4244] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.1.129/26] block=192.168.1.128/26 handle="k8s-pod-network.415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:07.282964 containerd[1642]: 2026-01-14 06:38:07.168 [INFO][4244] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.129/26] handle="k8s-pod-network.415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:07.282964 containerd[1642]: 2026-01-14 
06:38:07.168 [INFO][4244] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 06:38:07.282964 containerd[1642]: 2026-01-14 06:38:07.168 [INFO][4244] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.129/26] IPv6=[] ContainerID="415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" HandleID="k8s-pod-network.415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" Workload="srv--2u6n8.gb1.brightbox.com-k8s-goldmane--666569f655--kv7ql-eth0" Jan 14 06:38:07.284716 containerd[1642]: 2026-01-14 06:38:07.173 [INFO][4233] cni-plugin/k8s.go 418: Populated endpoint ContainerID="415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" Namespace="calico-system" Pod="goldmane-666569f655-kv7ql" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-goldmane--666569f655--kv7ql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2u6n8.gb1.brightbox.com-k8s-goldmane--666569f655--kv7ql-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"217872a4-2508-46c6-a68b-d9c0e654e8b7", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 37, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2u6n8.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-kv7ql", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.1.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5b87c4fb392", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:38:07.284833 containerd[1642]: 2026-01-14 06:38:07.174 [INFO][4233] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.129/32] ContainerID="415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" Namespace="calico-system" Pod="goldmane-666569f655-kv7ql" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-goldmane--666569f655--kv7ql-eth0" Jan 14 06:38:07.284833 containerd[1642]: 2026-01-14 06:38:07.174 [INFO][4233] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b87c4fb392 ContainerID="415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" Namespace="calico-system" Pod="goldmane-666569f655-kv7ql" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-goldmane--666569f655--kv7ql-eth0" Jan 14 06:38:07.284833 containerd[1642]: 2026-01-14 06:38:07.204 [INFO][4233] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" Namespace="calico-system" Pod="goldmane-666569f655-kv7ql" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-goldmane--666569f655--kv7ql-eth0" Jan 14 06:38:07.284994 containerd[1642]: 2026-01-14 06:38:07.206 [INFO][4233] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" Namespace="calico-system" Pod="goldmane-666569f655-kv7ql" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-goldmane--666569f655--kv7ql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2u6n8.gb1.brightbox.com-k8s-goldmane--666569f655--kv7ql-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"217872a4-2508-46c6-a68b-d9c0e654e8b7", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 37, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2u6n8.gb1.brightbox.com", ContainerID:"415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173", Pod:"goldmane-666569f655-kv7ql", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.1.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5b87c4fb392", MAC:"f2:4d:64:3e:ad:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:38:07.285092 containerd[1642]: 2026-01-14 06:38:07.275 [INFO][4233] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" Namespace="calico-system" Pod="goldmane-666569f655-kv7ql" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-goldmane--666569f655--kv7ql-eth0" Jan 14 06:38:07.290839 systemd[1]: Created slice kubepods-besteffort-podc4442e43_b3e6_4c81_9228_c5c0cde9a530.slice - libcontainer container kubepods-besteffort-podc4442e43_b3e6_4c81_9228_c5c0cde9a530.slice. 
Jan 14 06:38:07.400034 kubelet[2966]: I0114 06:38:07.398631 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4442e43-b3e6-4c81-9228-c5c0cde9a530-whisker-ca-bundle\") pod \"whisker-7959c45994-p8pd7\" (UID: \"c4442e43-b3e6-4c81-9228-c5c0cde9a530\") " pod="calico-system/whisker-7959c45994-p8pd7" Jan 14 06:38:07.400034 kubelet[2966]: I0114 06:38:07.398700 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8fl6\" (UniqueName: \"kubernetes.io/projected/c4442e43-b3e6-4c81-9228-c5c0cde9a530-kube-api-access-s8fl6\") pod \"whisker-7959c45994-p8pd7\" (UID: \"c4442e43-b3e6-4c81-9228-c5c0cde9a530\") " pod="calico-system/whisker-7959c45994-p8pd7" Jan 14 06:38:07.400034 kubelet[2966]: I0114 06:38:07.398756 2966 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c4442e43-b3e6-4c81-9228-c5c0cde9a530-whisker-backend-key-pair\") pod \"whisker-7959c45994-p8pd7\" (UID: \"c4442e43-b3e6-4c81-9228-c5c0cde9a530\") " pod="calico-system/whisker-7959c45994-p8pd7" Jan 14 06:38:07.551319 containerd[1642]: time="2026-01-14T06:38:07.550379565Z" level=info msg="connecting to shim 415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173" address="unix:///run/containerd/s/b5ecbf474adb0f53049c90faa77af068bffb0b17ccd8dec169fb4faf350bc3d7" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:38:07.590539 systemd[1]: Started cri-containerd-415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173.scope - libcontainer container 415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173. 
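The pod_startup_latency_tracker record for calico-node-smm6n further up reports both podStartE2EDuration=32.262025344s and podStartSLOduration=2.998557985s; the difference is exactly the image-pull window it also prints (firstStartedPulling to lastFinishedPulling). A short Go check of that arithmetic, using timestamps copied from the record (the relationship is inferred from the numbers themselves, not from kubelet source):

package main

import (
	"fmt"
	"time"
)

func mustParse(layout, value string) time.Time {
	t, err := time.Parse(layout, value)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	// Values copied from the "Observed pod startup duration" record above.
	firstStartedPulling := mustParse(layout, "2026-01-14 06:37:35.570184963 +0000 UTC")
	lastFinishedPulling := mustParse(layout, "2026-01-14 06:38:04.833652323 +0000 UTC")
	podStartE2EDuration := 32262025344 * time.Nanosecond // 32.262025344s

	pulling := lastFinishedPulling.Sub(firstStartedPulling)
	fmt.Println("time spent pulling images:", pulling)             // 29.26346736s
	fmt.Println("E2E minus pulling:", podStartE2EDuration-pulling) // ≈2.998557985s, the reported SLO duration
}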
Jan 14 06:38:07.599832 containerd[1642]: time="2026-01-14T06:38:07.599332604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7959c45994-p8pd7,Uid:c4442e43-b3e6-4c81-9228-c5c0cde9a530,Namespace:calico-system,Attempt:0,}" Jan 14 06:38:07.622000 audit: BPF prog-id=179 op=LOAD Jan 14 06:38:07.623000 audit: BPF prog-id=180 op=LOAD Jan 14 06:38:07.623000 audit[4316]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=4306 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:07.623000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431353831326361656337323764336161653961646238386534393861 Jan 14 06:38:07.624000 audit: BPF prog-id=180 op=UNLOAD Jan 14 06:38:07.624000 audit[4316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4306 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:07.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431353831326361656337323764336161653961646238386534393861 Jan 14 06:38:07.624000 audit: BPF prog-id=181 op=LOAD Jan 14 06:38:07.624000 audit[4316]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=4306 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:07.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431353831326361656337323764336161653961646238386534393861 Jan 14 06:38:07.624000 audit: BPF prog-id=182 op=LOAD Jan 14 06:38:07.624000 audit[4316]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=4306 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:07.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431353831326361656337323764336161653961646238386534393861 Jan 14 06:38:07.624000 audit: BPF prog-id=182 op=UNLOAD Jan 14 06:38:07.624000 audit[4316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4306 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:07.624000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431353831326361656337323764336161653961646238386534393861 Jan 14 06:38:07.624000 audit: BPF prog-id=181 op=UNLOAD Jan 14 06:38:07.624000 audit[4316]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4306 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:07.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431353831326361656337323764336161653961646238386534393861 Jan 14 06:38:07.624000 audit: BPF prog-id=183 op=LOAD Jan 14 06:38:07.624000 audit[4316]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=4306 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:07.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431353831326361656337323764336161653961646238386534393861 Jan 14 06:38:07.668786 containerd[1642]: time="2026-01-14T06:38:07.668589818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bddfbd4b9-ps4c2,Uid:d6749f8c-3427-433c-a8c4-8f87f70b4d79,Namespace:calico-apiserver,Attempt:0,}" Jan 14 06:38:07.677295 kubelet[2966]: I0114 06:38:07.677219 2966 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d15520d-da28-4e66-86ea-0828797c7224" path="/var/lib/kubelet/pods/1d15520d-da28-4e66-86ea-0828797c7224/volumes" Jan 14 06:38:07.757705 containerd[1642]: time="2026-01-14T06:38:07.756687371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kv7ql,Uid:217872a4-2508-46c6-a68b-d9c0e654e8b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"415812caec727d3aae9adb88e498a41274538fc7bf4de1212730ca30c6f07173\"" Jan 14 06:38:07.765385 containerd[1642]: time="2026-01-14T06:38:07.765332933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 06:38:07.892713 systemd-networkd[1552]: cali48fd1793434: Link UP Jan 14 06:38:07.893657 systemd-networkd[1552]: cali48fd1793434: Gained carrier Jan 14 06:38:07.922265 containerd[1642]: 2026-01-14 06:38:07.654 [INFO][4336] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 06:38:07.922265 containerd[1642]: 2026-01-14 06:38:07.685 [INFO][4336] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--2u6n8.gb1.brightbox.com-k8s-whisker--7959c45994--p8pd7-eth0 whisker-7959c45994- calico-system c4442e43-b3e6-4c81-9228-c5c0cde9a530 961 0 2026-01-14 06:38:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7959c45994 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-2u6n8.gb1.brightbox.com whisker-7959c45994-p8pd7 eth0 whisker [] [] 
[kns.calico-system ksa.calico-system.whisker] cali48fd1793434 [] [] }} ContainerID="ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" Namespace="calico-system" Pod="whisker-7959c45994-p8pd7" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-whisker--7959c45994--p8pd7-" Jan 14 06:38:07.922265 containerd[1642]: 2026-01-14 06:38:07.685 [INFO][4336] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" Namespace="calico-system" Pod="whisker-7959c45994-p8pd7" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-whisker--7959c45994--p8pd7-eth0" Jan 14 06:38:07.922265 containerd[1642]: 2026-01-14 06:38:07.825 [INFO][4358] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" HandleID="k8s-pod-network.ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" Workload="srv--2u6n8.gb1.brightbox.com-k8s-whisker--7959c45994--p8pd7-eth0" Jan 14 06:38:07.922883 containerd[1642]: 2026-01-14 06:38:07.826 [INFO][4358] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" HandleID="k8s-pod-network.ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" Workload="srv--2u6n8.gb1.brightbox.com-k8s-whisker--7959c45994--p8pd7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000295120), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-2u6n8.gb1.brightbox.com", "pod":"whisker-7959c45994-p8pd7", "timestamp":"2026-01-14 06:38:07.825915779 +0000 UTC"}, Hostname:"srv-2u6n8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:38:07.922883 containerd[1642]: 2026-01-14 06:38:07.826 [INFO][4358] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:38:07.922883 containerd[1642]: 2026-01-14 06:38:07.826 [INFO][4358] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 06:38:07.922883 containerd[1642]: 2026-01-14 06:38:07.826 [INFO][4358] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-2u6n8.gb1.brightbox.com' Jan 14 06:38:07.922883 containerd[1642]: 2026-01-14 06:38:07.837 [INFO][4358] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:07.922883 containerd[1642]: 2026-01-14 06:38:07.844 [INFO][4358] ipam/ipam.go 394: Looking up existing affinities for host host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:07.922883 containerd[1642]: 2026-01-14 06:38:07.851 [INFO][4358] ipam/ipam.go 511: Trying affinity for 192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:07.922883 containerd[1642]: 2026-01-14 06:38:07.855 [INFO][4358] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:07.922883 containerd[1642]: 2026-01-14 06:38:07.859 [INFO][4358] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:07.923695 containerd[1642]: 2026-01-14 06:38:07.859 [INFO][4358] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.128/26 handle="k8s-pod-network.ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:07.923695 containerd[1642]: 2026-01-14 06:38:07.862 [INFO][4358] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e Jan 14 06:38:07.923695 containerd[1642]: 2026-01-14 06:38:07.871 [INFO][4358] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.128/26 handle="k8s-pod-network.ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:07.923695 containerd[1642]: 2026-01-14 06:38:07.879 [INFO][4358] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.1.130/26] block=192.168.1.128/26 handle="k8s-pod-network.ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:07.923695 containerd[1642]: 2026-01-14 06:38:07.879 [INFO][4358] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.130/26] handle="k8s-pod-network.ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:07.923695 containerd[1642]: 2026-01-14 06:38:07.879 [INFO][4358] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 06:38:07.923695 containerd[1642]: 2026-01-14 06:38:07.879 [INFO][4358] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.130/26] IPv6=[] ContainerID="ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" HandleID="k8s-pod-network.ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" Workload="srv--2u6n8.gb1.brightbox.com-k8s-whisker--7959c45994--p8pd7-eth0" Jan 14 06:38:07.925301 containerd[1642]: 2026-01-14 06:38:07.886 [INFO][4336] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" Namespace="calico-system" Pod="whisker-7959c45994-p8pd7" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-whisker--7959c45994--p8pd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2u6n8.gb1.brightbox.com-k8s-whisker--7959c45994--p8pd7-eth0", GenerateName:"whisker-7959c45994-", Namespace:"calico-system", SelfLink:"", UID:"c4442e43-b3e6-4c81-9228-c5c0cde9a530", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 38, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7959c45994", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2u6n8.gb1.brightbox.com", ContainerID:"", Pod:"whisker-7959c45994-p8pd7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.1.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali48fd1793434", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:38:07.925301 containerd[1642]: 2026-01-14 06:38:07.886 [INFO][4336] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.130/32] ContainerID="ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" Namespace="calico-system" Pod="whisker-7959c45994-p8pd7" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-whisker--7959c45994--p8pd7-eth0" Jan 14 06:38:07.926123 containerd[1642]: 2026-01-14 06:38:07.886 [INFO][4336] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48fd1793434 ContainerID="ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" Namespace="calico-system" Pod="whisker-7959c45994-p8pd7" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-whisker--7959c45994--p8pd7-eth0" Jan 14 06:38:07.926123 containerd[1642]: 2026-01-14 06:38:07.893 [INFO][4336] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" Namespace="calico-system" Pod="whisker-7959c45994-p8pd7" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-whisker--7959c45994--p8pd7-eth0" Jan 14 06:38:07.926240 containerd[1642]: 2026-01-14 06:38:07.894 [INFO][4336] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" Namespace="calico-system" 
Pod="whisker-7959c45994-p8pd7" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-whisker--7959c45994--p8pd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2u6n8.gb1.brightbox.com-k8s-whisker--7959c45994--p8pd7-eth0", GenerateName:"whisker-7959c45994-", Namespace:"calico-system", SelfLink:"", UID:"c4442e43-b3e6-4c81-9228-c5c0cde9a530", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 38, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7959c45994", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2u6n8.gb1.brightbox.com", ContainerID:"ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e", Pod:"whisker-7959c45994-p8pd7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.1.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali48fd1793434", MAC:"a6:f0:61:c3:53:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:38:07.926447 containerd[1642]: 2026-01-14 06:38:07.916 [INFO][4336] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" Namespace="calico-system" Pod="whisker-7959c45994-p8pd7" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-whisker--7959c45994--p8pd7-eth0" Jan 14 06:38:07.976820 containerd[1642]: time="2026-01-14T06:38:07.976586835Z" level=info msg="connecting to shim ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e" address="unix:///run/containerd/s/6a4b622b086a035f2035c266134ce177b2f3cab7e5f9ee218554c133e9cd6197" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:38:08.018639 systemd-networkd[1552]: calie8221ca561f: Link UP Jan 14 06:38:08.019267 systemd-networkd[1552]: calie8221ca561f: Gained carrier Jan 14 06:38:08.066955 containerd[1642]: 2026-01-14 06:38:07.772 [INFO][4347] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 06:38:08.066955 containerd[1642]: 2026-01-14 06:38:07.796 [INFO][4347] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--ps4c2-eth0 calico-apiserver-7bddfbd4b9- calico-apiserver d6749f8c-3427-433c-a8c4-8f87f70b4d79 848 0 2026-01-14 06:37:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bddfbd4b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-2u6n8.gb1.brightbox.com calico-apiserver-7bddfbd4b9-ps4c2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie8221ca561f [] [] }} ContainerID="03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" Namespace="calico-apiserver" 
Pod="calico-apiserver-7bddfbd4b9-ps4c2" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--ps4c2-" Jan 14 06:38:08.066955 containerd[1642]: 2026-01-14 06:38:07.796 [INFO][4347] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" Namespace="calico-apiserver" Pod="calico-apiserver-7bddfbd4b9-ps4c2" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--ps4c2-eth0" Jan 14 06:38:08.066955 containerd[1642]: 2026-01-14 06:38:07.868 [INFO][4372] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" HandleID="k8s-pod-network.03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" Workload="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--ps4c2-eth0" Jan 14 06:38:08.067693 containerd[1642]: 2026-01-14 06:38:07.869 [INFO][4372] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" HandleID="k8s-pod-network.03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" Workload="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--ps4c2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003039a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-2u6n8.gb1.brightbox.com", "pod":"calico-apiserver-7bddfbd4b9-ps4c2", "timestamp":"2026-01-14 06:38:07.868828349 +0000 UTC"}, Hostname:"srv-2u6n8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:38:08.067693 containerd[1642]: 2026-01-14 06:38:07.869 [INFO][4372] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:38:08.067693 containerd[1642]: 2026-01-14 06:38:07.879 [INFO][4372] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 06:38:08.067693 containerd[1642]: 2026-01-14 06:38:07.879 [INFO][4372] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-2u6n8.gb1.brightbox.com' Jan 14 06:38:08.067693 containerd[1642]: 2026-01-14 06:38:07.939 [INFO][4372] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:08.067693 containerd[1642]: 2026-01-14 06:38:07.951 [INFO][4372] ipam/ipam.go 394: Looking up existing affinities for host host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:08.067693 containerd[1642]: 2026-01-14 06:38:07.963 [INFO][4372] ipam/ipam.go 511: Trying affinity for 192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:08.067693 containerd[1642]: 2026-01-14 06:38:07.967 [INFO][4372] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:08.067693 containerd[1642]: 2026-01-14 06:38:07.974 [INFO][4372] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:08.069143 containerd[1642]: 2026-01-14 06:38:07.974 [INFO][4372] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.128/26 handle="k8s-pod-network.03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:08.069143 containerd[1642]: 2026-01-14 06:38:07.978 [INFO][4372] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084 Jan 14 06:38:08.069143 containerd[1642]: 2026-01-14 06:38:07.987 [INFO][4372] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.128/26 handle="k8s-pod-network.03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:08.069143 containerd[1642]: 2026-01-14 06:38:08.000 [INFO][4372] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.1.131/26] block=192.168.1.128/26 handle="k8s-pod-network.03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:08.069143 containerd[1642]: 2026-01-14 06:38:08.002 [INFO][4372] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.131/26] handle="k8s-pod-network.03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:08.069143 containerd[1642]: 2026-01-14 06:38:08.002 [INFO][4372] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
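All three IPAM transactions above run against the same host-affine block, 192.168.1.128/26, and hand out consecutive addresses: 192.168.1.129 to goldmane-666569f655-kv7ql, .130 to whisker-7959c45994-p8pd7 and .131 to calico-apiserver-7bddfbd4b9-ps4c2. A tiny Go illustration of the block itself, i.e. why a /26 gives this node 64 addresses starting at .128 (plain address arithmetic, not Calico's allocator):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Host-affine block from the ipam records above.
	block := netip.MustParsePrefix("192.168.1.128/26")
	fmt.Println("addresses in block:", 1<<(32-block.Bits())) // 64

	// Walk the first few addresses: .128 opens the block, the pods above received .129-.131.
	a := block.Addr()
	for i := 0; i < 4; i++ {
		fmt.Println(a, block.Contains(a))
		a = a.Next()
	}
}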
Jan 14 06:38:08.069143 containerd[1642]: 2026-01-14 06:38:08.002 [INFO][4372] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.131/26] IPv6=[] ContainerID="03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" HandleID="k8s-pod-network.03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" Workload="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--ps4c2-eth0" Jan 14 06:38:08.070117 containerd[1642]: 2026-01-14 06:38:08.008 [INFO][4347] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" Namespace="calico-apiserver" Pod="calico-apiserver-7bddfbd4b9-ps4c2" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--ps4c2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--ps4c2-eth0", GenerateName:"calico-apiserver-7bddfbd4b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6749f8c-3427-433c-a8c4-8f87f70b4d79", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 37, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bddfbd4b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2u6n8.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7bddfbd4b9-ps4c2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie8221ca561f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:38:08.070343 containerd[1642]: 2026-01-14 06:38:08.010 [INFO][4347] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.131/32] ContainerID="03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" Namespace="calico-apiserver" Pod="calico-apiserver-7bddfbd4b9-ps4c2" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--ps4c2-eth0" Jan 14 06:38:08.070343 containerd[1642]: 2026-01-14 06:38:08.010 [INFO][4347] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8221ca561f ContainerID="03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" Namespace="calico-apiserver" Pod="calico-apiserver-7bddfbd4b9-ps4c2" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--ps4c2-eth0" Jan 14 06:38:08.070343 containerd[1642]: 2026-01-14 06:38:08.021 [INFO][4347] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" Namespace="calico-apiserver" Pod="calico-apiserver-7bddfbd4b9-ps4c2" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--ps4c2-eth0" Jan 14 06:38:08.070879 containerd[1642]: 2026-01-14 06:38:08.022 
[INFO][4347] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" Namespace="calico-apiserver" Pod="calico-apiserver-7bddfbd4b9-ps4c2" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--ps4c2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--ps4c2-eth0", GenerateName:"calico-apiserver-7bddfbd4b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6749f8c-3427-433c-a8c4-8f87f70b4d79", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 37, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bddfbd4b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2u6n8.gb1.brightbox.com", ContainerID:"03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084", Pod:"calico-apiserver-7bddfbd4b9-ps4c2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie8221ca561f", MAC:"42:0a:f0:63:63:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:38:08.071131 containerd[1642]: 2026-01-14 06:38:08.046 [INFO][4347] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" Namespace="calico-apiserver" Pod="calico-apiserver-7bddfbd4b9-ps4c2" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--ps4c2-eth0" Jan 14 06:38:08.102378 containerd[1642]: time="2026-01-14T06:38:08.100794489Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:08.107599 systemd[1]: Started cri-containerd-ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e.scope - libcontainer container ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e. 
Jan 14 06:38:08.114179 containerd[1642]: time="2026-01-14T06:38:08.114100533Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 06:38:08.116327 containerd[1642]: time="2026-01-14T06:38:08.115922083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:08.116862 kubelet[2966]: E0114 06:38:08.116799 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:38:08.116972 kubelet[2966]: E0114 06:38:08.116890 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:38:08.143313 kubelet[2966]: E0114 06:38:08.142872 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27lgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kv7ql_calico-system(217872a4-2508-46c6-a68b-d9c0e654e8b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:08.144702 kubelet[2966]: E0114 06:38:08.144659 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kv7ql" podUID="217872a4-2508-46c6-a68b-d9c0e654e8b7" Jan 14 06:38:08.194000 audit: BPF prog-id=184 op=LOAD Jan 14 06:38:08.196000 audit: BPF prog-id=185 op=LOAD Jan 14 06:38:08.196000 audit[4442]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00016e238 a2=98 a3=0 items=0 ppid=4421 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:08.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563626262356533666438326364323735333164633039336164663634 Jan 14 06:38:08.196000 audit: BPF prog-id=185 op=UNLOAD Jan 14 06:38:08.196000 audit[4442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4421 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:08.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563626262356533666438326364323735333164633039336164663634 Jan 14 06:38:08.196000 audit: BPF prog-id=186 op=LOAD Jan 14 06:38:08.196000 audit[4442]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00016e488 a2=98 a3=0 items=0 ppid=4421 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:08.196000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563626262356533666438326364323735333164633039336164663634 Jan 14 06:38:08.196000 audit: BPF prog-id=187 op=LOAD Jan 14 06:38:08.196000 audit[4442]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00016e218 a2=98 a3=0 items=0 ppid=4421 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:08.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563626262356533666438326364323735333164633039336164663634 Jan 14 06:38:08.197000 audit: BPF prog-id=187 op=UNLOAD Jan 14 06:38:08.197000 audit[4442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4421 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:08.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563626262356533666438326364323735333164633039336164663634 Jan 14 06:38:08.197000 audit: BPF prog-id=186 op=UNLOAD Jan 14 06:38:08.197000 audit[4442]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4421 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:08.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563626262356533666438326364323735333164633039336164663634 Jan 14 06:38:08.197000 audit: BPF prog-id=188 op=LOAD Jan 14 06:38:08.197000 audit[4442]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00016e6e8 a2=98 a3=0 items=0 ppid=4421 pid=4442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:08.197000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6563626262356533666438326364323735333164633039336164663634 Jan 14 06:38:08.233112 containerd[1642]: time="2026-01-14T06:38:08.233042093Z" level=info msg="connecting to shim 03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084" address="unix:///run/containerd/s/65140a467ce4b65ba10d30aae5867ad797fb456f887254754774a6155ee46438" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:38:08.349723 systemd[1]: Started cri-containerd-03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084.scope - libcontainer container 03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084. 
Jan 14 06:38:08.404198 containerd[1642]: time="2026-01-14T06:38:08.404132390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7959c45994-p8pd7,Uid:c4442e43-b3e6-4c81-9228-c5c0cde9a530,Namespace:calico-system,Attempt:0,} returns sandbox id \"ecbbb5e3fd82cd27531dc093adf64b37402a166523803f402b85e24e886a946e\"" Jan 14 06:38:08.420753 containerd[1642]: time="2026-01-14T06:38:08.420549921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 06:38:08.431485 systemd-networkd[1552]: cali5b87c4fb392: Gained IPv6LL Jan 14 06:38:08.437000 audit: BPF prog-id=189 op=LOAD Jan 14 06:38:08.439000 audit: BPF prog-id=190 op=LOAD Jan 14 06:38:08.439000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4535 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:08.439000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033656466386661353664643665393931316166356364363665393662 Jan 14 06:38:08.440000 audit: BPF prog-id=190 op=UNLOAD Jan 14 06:38:08.440000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4535 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:08.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033656466386661353664643665393931316166356364363665393662 Jan 14 06:38:08.440000 audit: BPF prog-id=191 op=LOAD Jan 14 06:38:08.440000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4535 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:08.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033656466386661353664643665393931316166356364363665393662 Jan 14 06:38:08.440000 audit: BPF prog-id=192 op=LOAD Jan 14 06:38:08.440000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4535 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:08.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033656466386661353664643665393931316166356364363665393662 Jan 14 06:38:08.440000 audit: BPF prog-id=192 op=UNLOAD Jan 14 06:38:08.440000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4535 pid=4559 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:08.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033656466386661353664643665393931316166356364363665393662 Jan 14 06:38:08.440000 audit: BPF prog-id=191 op=UNLOAD Jan 14 06:38:08.440000 audit[4559]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4535 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:08.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033656466386661353664643665393931316166356364363665393662 Jan 14 06:38:08.440000 audit: BPF prog-id=193 op=LOAD Jan 14 06:38:08.440000 audit[4559]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4535 pid=4559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:08.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033656466386661353664643665393931316166356364363665393662 Jan 14 06:38:08.566805 containerd[1642]: time="2026-01-14T06:38:08.566478494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bddfbd4b9-ps4c2,Uid:d6749f8c-3427-433c-a8c4-8f87f70b4d79,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"03edf8fa56dd6e9911af5cd66e96be3369dd4023ea2276c513a49f689a286084\"" Jan 14 06:38:08.750564 containerd[1642]: time="2026-01-14T06:38:08.749987026Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:08.753677 containerd[1642]: time="2026-01-14T06:38:08.753366836Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 06:38:08.753910 containerd[1642]: time="2026-01-14T06:38:08.753550947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:08.754338 kubelet[2966]: E0114 06:38:08.754186 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:38:08.754338 kubelet[2966]: E0114 06:38:08.754255 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 
06:38:08.756957 containerd[1642]: time="2026-01-14T06:38:08.755605946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:38:08.764169 kubelet[2966]: E0114 06:38:08.764075 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:57f62300c7e6467f8d6b6e8d7514b501,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s8fl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7959c45994-p8pd7_calico-system(c4442e43-b3e6-4c81-9228-c5c0cde9a530): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:09.054000 audit: BPF prog-id=194 op=LOAD Jan 14 06:38:09.054000 audit[4638]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff34840150 a2=98 a3=1fffffffffffffff items=0 ppid=4439 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.054000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 06:38:09.054000 audit: BPF prog-id=194 op=UNLOAD Jan 14 06:38:09.054000 audit[4638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff34840120 a3=0 items=0 ppid=4439 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.054000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 06:38:09.055000 audit: BPF prog-id=195 op=LOAD Jan 14 06:38:09.055000 audit[4638]: SYSCALL arch=c000003e 
syscall=321 success=yes exit=3 a0=5 a1=7fff34840030 a2=94 a3=3 items=0 ppid=4439 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.055000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 06:38:09.055000 audit: BPF prog-id=195 op=UNLOAD Jan 14 06:38:09.055000 audit[4638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff34840030 a2=94 a3=3 items=0 ppid=4439 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.055000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 06:38:09.055000 audit: BPF prog-id=196 op=LOAD Jan 14 06:38:09.055000 audit[4638]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff34840070 a2=94 a3=7fff34840250 items=0 ppid=4439 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.055000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 06:38:09.055000 audit: BPF prog-id=196 op=UNLOAD Jan 14 06:38:09.055000 audit[4638]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff34840070 a2=94 a3=7fff34840250 items=0 ppid=4439 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.055000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 06:38:09.060225 containerd[1642]: time="2026-01-14T06:38:09.059958830Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:09.060000 audit: BPF prog-id=197 op=LOAD Jan 14 06:38:09.060000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd14c60df0 a2=98 a3=3 items=0 ppid=4439 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.060000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:38:09.060000 audit: BPF prog-id=197 op=UNLOAD Jan 14 06:38:09.060000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffd14c60dc0 a3=0 items=0 ppid=4439 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.060000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:38:09.061000 audit: BPF prog-id=198 op=LOAD Jan 14 06:38:09.061000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd14c60be0 a2=94 a3=54428f items=0 ppid=4439 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.061000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:38:09.061000 audit: BPF prog-id=198 op=UNLOAD Jan 14 06:38:09.061000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd14c60be0 a2=94 a3=54428f items=0 ppid=4439 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.061000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:38:09.061000 audit: BPF prog-id=199 op=LOAD Jan 14 06:38:09.061000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd14c60c10 a2=94 a3=2 items=0 ppid=4439 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.061000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:38:09.061000 audit: BPF prog-id=199 op=UNLOAD Jan 14 06:38:09.061000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd14c60c10 a2=0 a3=2 items=0 ppid=4439 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.061000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:38:09.067739 kubelet[2966]: E0114 06:38:09.062773 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:38:09.067739 kubelet[2966]: E0114 06:38:09.062842 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:38:09.067739 kubelet[2966]: E0114 06:38:09.063248 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd75s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bddfbd4b9-ps4c2_calico-apiserver(d6749f8c-3427-433c-a8c4-8f87f70b4d79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:09.068094 containerd[1642]: time="2026-01-14T06:38:09.062189681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:09.068094 containerd[1642]: time="2026-01-14T06:38:09.062265114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:38:09.068094 containerd[1642]: time="2026-01-14T06:38:09.063910196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 06:38:09.076558 kubelet[2966]: E0114 06:38:09.076315 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-ps4c2" podUID="d6749f8c-3427-433c-a8c4-8f87f70b4d79" Jan 14 06:38:09.111488 kubelet[2966]: E0114 06:38:09.111313 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kv7ql" podUID="217872a4-2508-46c6-a68b-d9c0e654e8b7" Jan 14 06:38:09.127668 kubelet[2966]: E0114 06:38:09.102243 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-ps4c2" podUID="d6749f8c-3427-433c-a8c4-8f87f70b4d79" Jan 14 06:38:09.212000 audit[4643]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4643 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:09.212000 audit[4643]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffe07d899d0 a2=0 a3=7ffe07d899bc items=0 ppid=3073 pid=4643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.212000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:09.217000 audit[4643]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4643 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:09.217000 audit[4643]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffe07d899d0 a2=0 a3=0 items=0 ppid=3073 pid=4643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.217000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:09.234000 audit[4645]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4645 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:09.234000 audit[4645]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdc262f480 a2=0 a3=7ffdc262f46c items=0 ppid=3073 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.234000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:09.237000 audit[4645]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4645 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:09.237000 audit[4645]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdc262f480 a2=0 a3=0 items=0 ppid=3073 pid=4645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 06:38:09.237000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:09.262559 systemd-networkd[1552]: calie8221ca561f: Gained IPv6LL Jan 14 06:38:09.326529 systemd-networkd[1552]: cali48fd1793434: Gained IPv6LL Jan 14 06:38:09.374732 containerd[1642]: time="2026-01-14T06:38:09.374081488Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:09.376347 containerd[1642]: time="2026-01-14T06:38:09.375985174Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 06:38:09.376347 containerd[1642]: time="2026-01-14T06:38:09.376080434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:09.378046 kubelet[2966]: E0114 06:38:09.377460 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:38:09.378046 kubelet[2966]: E0114 06:38:09.377543 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:38:09.378046 kubelet[2966]: E0114 06:38:09.377709 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8fl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7959c45994-p8pd7_calico-system(c4442e43-b3e6-4c81-9228-c5c0cde9a530): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:09.383610 kubelet[2966]: E0114 06:38:09.383543 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7959c45994-p8pd7" podUID="c4442e43-b3e6-4c81-9228-c5c0cde9a530" Jan 14 06:38:09.424000 audit: BPF prog-id=200 op=LOAD Jan 14 06:38:09.424000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd14c60ad0 a2=94 a3=1 items=0 ppid=4439 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.424000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:38:09.424000 audit: BPF prog-id=200 op=UNLOAD Jan 14 06:38:09.424000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffd14c60ad0 a2=94 a3=1 
items=0 ppid=4439 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.424000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:38:09.443000 audit: BPF prog-id=201 op=LOAD Jan 14 06:38:09.443000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd14c60ac0 a2=94 a3=4 items=0 ppid=4439 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.443000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:38:09.444000 audit: BPF prog-id=201 op=UNLOAD Jan 14 06:38:09.444000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd14c60ac0 a2=0 a3=4 items=0 ppid=4439 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.444000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:38:09.444000 audit: BPF prog-id=202 op=LOAD Jan 14 06:38:09.444000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd14c60920 a2=94 a3=5 items=0 ppid=4439 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.444000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:38:09.444000 audit: BPF prog-id=202 op=UNLOAD Jan 14 06:38:09.444000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffd14c60920 a2=0 a3=5 items=0 ppid=4439 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.444000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:38:09.445000 audit: BPF prog-id=203 op=LOAD Jan 14 06:38:09.445000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd14c60b40 a2=94 a3=6 items=0 ppid=4439 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.445000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:38:09.445000 audit: BPF prog-id=203 op=UNLOAD Jan 14 06:38:09.445000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffd14c60b40 a2=0 a3=6 items=0 ppid=4439 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.445000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:38:09.445000 audit: BPF prog-id=204 op=LOAD Jan 14 06:38:09.445000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffd14c602f0 a2=94 a3=88 items=0 ppid=4439 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.445000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:38:09.445000 audit: BPF prog-id=205 op=LOAD Jan 14 06:38:09.445000 audit[4640]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffd14c60170 a2=94 a3=2 items=0 ppid=4439 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.445000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:38:09.445000 audit: BPF prog-id=205 op=UNLOAD Jan 14 06:38:09.445000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffd14c601a0 a2=0 a3=7ffd14c602a0 items=0 ppid=4439 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.445000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:38:09.446000 audit: BPF prog-id=204 op=UNLOAD Jan 14 06:38:09.446000 audit[4640]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=27312d10 a2=0 a3=44ceee7673d0a4d5 items=0 ppid=4439 pid=4640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.446000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:38:09.496000 audit: BPF prog-id=206 op=LOAD Jan 14 06:38:09.496000 audit[4648]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4957fbe0 a2=98 a3=1999999999999999 items=0 ppid=4439 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.496000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 06:38:09.496000 audit: BPF prog-id=206 op=UNLOAD Jan 14 06:38:09.496000 audit[4648]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc4957fbb0 a3=0 items=0 ppid=4439 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.496000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 06:38:09.496000 audit: BPF prog-id=207 op=LOAD Jan 14 06:38:09.496000 audit[4648]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4957fac0 a2=94 a3=ffff items=0 ppid=4439 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.496000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 06:38:09.497000 audit: BPF prog-id=207 op=UNLOAD Jan 14 06:38:09.497000 audit[4648]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc4957fac0 a2=94 a3=ffff items=0 ppid=4439 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.497000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 06:38:09.497000 audit: BPF prog-id=208 op=LOAD Jan 14 06:38:09.497000 audit[4648]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc4957fb00 a2=94 a3=7ffc4957fce0 items=0 ppid=4439 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.497000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 06:38:09.497000 audit: BPF prog-id=208 op=UNLOAD Jan 14 06:38:09.497000 audit[4648]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc4957fb00 a2=94 a3=7ffc4957fce0 items=0 ppid=4439 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.497000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 06:38:09.679663 systemd-networkd[1552]: vxlan.calico: Link UP Jan 14 06:38:09.679676 systemd-networkd[1552]: vxlan.calico: Gained carrier Jan 14 06:38:09.759000 audit: BPF prog-id=209 op=LOAD Jan 14 06:38:09.759000 audit[4675]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdad9f8b50 a2=98 a3=20 items=0 ppid=4439 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.759000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:38:09.764000 audit: BPF prog-id=209 op=UNLOAD Jan 14 06:38:09.764000 audit[4675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffdad9f8b20 a3=0 items=0 ppid=4439 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
06:38:09.764000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:38:09.772000 audit: BPF prog-id=210 op=LOAD Jan 14 06:38:09.772000 audit[4675]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdad9f8960 a2=94 a3=54428f items=0 ppid=4439 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:38:09.772000 audit: BPF prog-id=210 op=UNLOAD Jan 14 06:38:09.772000 audit[4675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdad9f8960 a2=94 a3=54428f items=0 ppid=4439 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:38:09.772000 audit: BPF prog-id=211 op=LOAD Jan 14 06:38:09.772000 audit[4675]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdad9f8990 a2=94 a3=2 items=0 ppid=4439 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:38:09.772000 audit: BPF prog-id=211 op=UNLOAD Jan 14 06:38:09.772000 audit[4675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffdad9f8990 a2=0 a3=2 items=0 ppid=4439 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:38:09.773000 audit: BPF prog-id=212 op=LOAD Jan 14 06:38:09.773000 audit[4675]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdad9f8740 a2=94 a3=4 items=0 ppid=4439 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.773000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:38:09.773000 audit: BPF prog-id=212 op=UNLOAD Jan 14 06:38:09.773000 
audit[4675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdad9f8740 a2=94 a3=4 items=0 ppid=4439 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.773000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:38:09.773000 audit: BPF prog-id=213 op=LOAD Jan 14 06:38:09.773000 audit[4675]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdad9f8840 a2=94 a3=7ffdad9f89c0 items=0 ppid=4439 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.773000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:38:09.773000 audit: BPF prog-id=213 op=UNLOAD Jan 14 06:38:09.773000 audit[4675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdad9f8840 a2=0 a3=7ffdad9f89c0 items=0 ppid=4439 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.773000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:38:09.774000 audit: BPF prog-id=214 op=LOAD Jan 14 06:38:09.774000 audit[4675]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdad9f7f70 a2=94 a3=2 items=0 ppid=4439 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.774000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:38:09.774000 audit: BPF prog-id=214 op=UNLOAD Jan 14 06:38:09.774000 audit[4675]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffdad9f7f70 a2=0 a3=2 items=0 ppid=4439 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.774000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:38:09.775000 audit: BPF prog-id=215 op=LOAD Jan 14 06:38:09.775000 audit[4675]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffdad9f8070 a2=94 a3=30 items=0 ppid=4439 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
06:38:09.775000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:38:09.799000 audit: BPF prog-id=216 op=LOAD Jan 14 06:38:09.799000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffebe54d0d0 a2=98 a3=0 items=0 ppid=4439 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.799000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:38:09.799000 audit: BPF prog-id=216 op=UNLOAD Jan 14 06:38:09.799000 audit[4688]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffebe54d0a0 a3=0 items=0 ppid=4439 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.799000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:38:09.799000 audit: BPF prog-id=217 op=LOAD Jan 14 06:38:09.799000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffebe54cec0 a2=94 a3=54428f items=0 ppid=4439 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.799000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:38:09.799000 audit: BPF prog-id=217 op=UNLOAD Jan 14 06:38:09.799000 audit[4688]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffebe54cec0 a2=94 a3=54428f items=0 ppid=4439 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.799000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:38:09.800000 audit: BPF prog-id=218 op=LOAD Jan 14 06:38:09.800000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffebe54cef0 a2=94 a3=2 items=0 ppid=4439 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.800000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:38:09.800000 audit: BPF prog-id=218 op=UNLOAD Jan 14 06:38:09.800000 audit[4688]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffebe54cef0 a2=0 a3=2 items=0 ppid=4439 pid=4688 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:09.800000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:38:10.107760 kubelet[2966]: E0114 06:38:10.107214 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-ps4c2" podUID="d6749f8c-3427-433c-a8c4-8f87f70b4d79" Jan 14 06:38:10.112089 kubelet[2966]: E0114 06:38:10.111759 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7959c45994-p8pd7" podUID="c4442e43-b3e6-4c81-9228-c5c0cde9a530" Jan 14 06:38:10.163000 audit: BPF prog-id=219 op=LOAD Jan 14 06:38:10.163000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffebe54cdb0 a2=94 a3=1 items=0 ppid=4439 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.163000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:38:10.164000 audit: BPF prog-id=219 op=UNLOAD Jan 14 06:38:10.164000 audit[4688]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffebe54cdb0 a2=94 a3=1 items=0 ppid=4439 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.164000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:38:10.181000 audit: BPF prog-id=220 op=LOAD Jan 14 06:38:10.181000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffebe54cda0 a2=94 a3=4 items=0 ppid=4439 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.181000 audit: 
PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:38:10.181000 audit: BPF prog-id=220 op=UNLOAD Jan 14 06:38:10.181000 audit[4688]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffebe54cda0 a2=0 a3=4 items=0 ppid=4439 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.181000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:38:10.182000 audit: BPF prog-id=221 op=LOAD Jan 14 06:38:10.182000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffebe54cc00 a2=94 a3=5 items=0 ppid=4439 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.182000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:38:10.182000 audit: BPF prog-id=221 op=UNLOAD Jan 14 06:38:10.182000 audit[4688]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffebe54cc00 a2=0 a3=5 items=0 ppid=4439 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.182000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:38:10.182000 audit: BPF prog-id=222 op=LOAD Jan 14 06:38:10.182000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffebe54ce20 a2=94 a3=6 items=0 ppid=4439 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.182000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:38:10.182000 audit: BPF prog-id=222 op=UNLOAD Jan 14 06:38:10.182000 audit[4688]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffebe54ce20 a2=0 a3=6 items=0 ppid=4439 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.182000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:38:10.182000 audit: BPF prog-id=223 op=LOAD Jan 14 06:38:10.182000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffebe54c5d0 a2=94 a3=88 items=0 ppid=4439 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.182000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:38:10.183000 audit: BPF prog-id=224 op=LOAD Jan 14 06:38:10.183000 audit[4688]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffebe54c450 a2=94 a3=2 items=0 ppid=4439 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.183000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:38:10.183000 audit: BPF prog-id=224 op=UNLOAD Jan 14 06:38:10.183000 audit[4688]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffebe54c480 a2=0 a3=7ffebe54c580 items=0 ppid=4439 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.183000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:38:10.183000 audit: BPF prog-id=223 op=UNLOAD Jan 14 06:38:10.183000 audit[4688]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3812bd10 a2=0 a3=debe1f790e948730 items=0 ppid=4439 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.183000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:38:10.194000 audit: BPF prog-id=215 op=UNLOAD Jan 14 06:38:10.194000 audit[4439]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0006f6040 a2=0 a3=0 items=0 ppid=4390 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.194000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 06:38:10.282000 audit[4709]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:10.297585 kernel: kauditd_printk_skb: 269 callbacks suppressed Jan 14 06:38:10.297676 kernel: audit: type=1325 audit(1768372690.282:680): table=filter:125 family=2 entries=20 op=nft_register_rule pid=4709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:10.282000 audit[4709]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffce3326e20 a2=0 a3=7ffce3326e0c items=0 ppid=3073 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.306732 kernel: audit: type=1300 
audit(1768372690.282:680): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffce3326e20 a2=0 a3=7ffce3326e0c items=0 ppid=3073 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.282000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:10.312000 audit[4709]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:10.316306 kernel: audit: type=1327 audit(1768372690.282:680): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:10.316365 kernel: audit: type=1325 audit(1768372690.312:681): table=nat:126 family=2 entries=14 op=nft_register_rule pid=4709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:10.312000 audit[4709]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffce3326e20 a2=0 a3=0 items=0 ppid=3073 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.320112 kernel: audit: type=1300 audit(1768372690.312:681): arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffce3326e20 a2=0 a3=0 items=0 ppid=3073 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.312000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:10.325162 kernel: audit: type=1327 audit(1768372690.312:681): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:10.400000 audit[4718]: NETFILTER_CFG table=raw:127 family=2 entries=21 op=nft_register_chain pid=4718 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:38:10.408816 kernel: audit: type=1325 audit(1768372690.400:682): table=raw:127 family=2 entries=21 op=nft_register_chain pid=4718 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:38:10.408909 kernel: audit: type=1300 audit(1768372690.400:682): arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc3e2ad260 a2=0 a3=7ffc3e2ad24c items=0 ppid=4439 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.400000 audit[4718]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffc3e2ad260 a2=0 a3=7ffc3e2ad24c items=0 ppid=4439 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.400000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:38:10.419298 kernel: audit: type=1327 audit(1768372690.400:682): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:38:10.413000 audit[4722]: NETFILTER_CFG table=nat:128 family=2 entries=15 op=nft_register_chain pid=4722 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:38:10.424296 kernel: audit: type=1325 audit(1768372690.413:683): table=nat:128 family=2 entries=15 op=nft_register_chain pid=4722 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:38:10.413000 audit[4722]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff0ba965e0 a2=0 a3=7fff0ba965cc items=0 ppid=4439 pid=4722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.413000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:38:10.425000 audit[4720]: NETFILTER_CFG table=mangle:129 family=2 entries=16 op=nft_register_chain pid=4720 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:38:10.425000 audit[4720]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe1e8634d0 a2=0 a3=7ffe1e8634bc items=0 ppid=4439 pid=4720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.425000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:38:10.428000 audit[4719]: NETFILTER_CFG table=filter:130 family=2 entries=172 op=nft_register_chain pid=4719 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:38:10.428000 audit[4719]: SYSCALL arch=c000003e syscall=46 success=yes exit=100704 a0=3 a1=7ffeca38e480 a2=0 a3=55d99e4c7000 items=0 ppid=4439 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:10.428000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:38:11.502583 systemd-networkd[1552]: vxlan.calico: Gained IPv6LL Jan 14 06:38:15.667052 containerd[1642]: time="2026-01-14T06:38:15.666698013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745df5bdfc-85fpc,Uid:fcd4f250-b1e6-467c-90cd-24e53dcbe8e8,Namespace:calico-system,Attempt:0,}" Jan 14 06:38:15.668246 containerd[1642]: time="2026-01-14T06:38:15.668161595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l5sdz,Uid:f55e8add-9675-49f7-8240-772692184a74,Namespace:kube-system,Attempt:0,}" Jan 14 06:38:15.913548 systemd-networkd[1552]: cali23d1669f6c2: Link UP Jan 14 06:38:15.916351 systemd-networkd[1552]: cali23d1669f6c2: Gained carrier Jan 14 06:38:15.952606 containerd[1642]: 2026-01-14 06:38:15.756 [INFO][4745] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--2u6n8.gb1.brightbox.com-k8s-calico--kube--controllers--745df5bdfc--85fpc-eth0 
calico-kube-controllers-745df5bdfc- calico-system fcd4f250-b1e6-467c-90cd-24e53dcbe8e8 852 0 2026-01-14 06:37:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:745df5bdfc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-2u6n8.gb1.brightbox.com calico-kube-controllers-745df5bdfc-85fpc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali23d1669f6c2 [] [] }} ContainerID="c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" Namespace="calico-system" Pod="calico-kube-controllers-745df5bdfc-85fpc" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--kube--controllers--745df5bdfc--85fpc-" Jan 14 06:38:15.952606 containerd[1642]: 2026-01-14 06:38:15.758 [INFO][4745] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" Namespace="calico-system" Pod="calico-kube-controllers-745df5bdfc-85fpc" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--kube--controllers--745df5bdfc--85fpc-eth0" Jan 14 06:38:15.952606 containerd[1642]: 2026-01-14 06:38:15.833 [INFO][4769] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" HandleID="k8s-pod-network.c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" Workload="srv--2u6n8.gb1.brightbox.com-k8s-calico--kube--controllers--745df5bdfc--85fpc-eth0" Jan 14 06:38:15.953006 containerd[1642]: 2026-01-14 06:38:15.834 [INFO][4769] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" HandleID="k8s-pod-network.c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" Workload="srv--2u6n8.gb1.brightbox.com-k8s-calico--kube--controllers--745df5bdfc--85fpc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032b9e0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-2u6n8.gb1.brightbox.com", "pod":"calico-kube-controllers-745df5bdfc-85fpc", "timestamp":"2026-01-14 06:38:15.833045032 +0000 UTC"}, Hostname:"srv-2u6n8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:38:15.953006 containerd[1642]: 2026-01-14 06:38:15.834 [INFO][4769] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:38:15.953006 containerd[1642]: 2026-01-14 06:38:15.834 [INFO][4769] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
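[editor's note] The PROCTITLE values in the audit records earlier in this stretch are the invoked command lines, hex-encoded with NUL bytes separating the argv elements (the bpftool and iptables-restore invocations driven by calico-node). A minimal sketch for decoding them; decode_proctitle is a hypothetical helper, not part of any audit tooling:

```python
def decode_proctitle(hex_str: str) -> list[str]:
    # PROCTITLE is the raw /proc/<pid>/cmdline buffer: hex-encoded, NUL-separated argv.
    raw = bytes.fromhex(hex_str)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

# A prefix of one of the bpftool records above decodes back to its argv:
print(decode_proctitle(
    "627066746F6F6C002D2D6A736F6E002D2D707265747479"
    "0070726F670073686F770070696E6E6564"
))
# ['bpftool', '--json', '--pretty', 'prog', 'show', 'pinned']
```

The same decoder applies to the iptables-restore and runc PROCTITLE records that follow.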
Jan 14 06:38:15.953006 containerd[1642]: 2026-01-14 06:38:15.834 [INFO][4769] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-2u6n8.gb1.brightbox.com' Jan 14 06:38:15.953006 containerd[1642]: 2026-01-14 06:38:15.854 [INFO][4769] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:15.953006 containerd[1642]: 2026-01-14 06:38:15.862 [INFO][4769] ipam/ipam.go 394: Looking up existing affinities for host host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:15.953006 containerd[1642]: 2026-01-14 06:38:15.871 [INFO][4769] ipam/ipam.go 511: Trying affinity for 192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:15.953006 containerd[1642]: 2026-01-14 06:38:15.874 [INFO][4769] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:15.953006 containerd[1642]: 2026-01-14 06:38:15.879 [INFO][4769] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:15.953988 containerd[1642]: 2026-01-14 06:38:15.879 [INFO][4769] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.128/26 handle="k8s-pod-network.c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:15.953988 containerd[1642]: 2026-01-14 06:38:15.882 [INFO][4769] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b Jan 14 06:38:15.953988 containerd[1642]: 2026-01-14 06:38:15.890 [INFO][4769] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.128/26 handle="k8s-pod-network.c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:15.953988 containerd[1642]: 2026-01-14 06:38:15.899 [INFO][4769] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.1.132/26] block=192.168.1.128/26 handle="k8s-pod-network.c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:15.953988 containerd[1642]: 2026-01-14 06:38:15.900 [INFO][4769] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.132/26] handle="k8s-pod-network.c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:15.953988 containerd[1642]: 2026-01-14 06:38:15.901 [INFO][4769] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
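[editor's note] The IPAM entries just above walk through Calico's block-affinity assignment: the host holds an affine 192.168.1.128/26 block, the block is loaded, and the next free address (192.168.1.132) is claimed for the calico-kube-controllers pod. A toy model of that step, illustrative only and not Calico's actual allocator, assuming .129-.131 were consumed by endpoints created earlier in this log:

```python
import ipaddress

class ToyBlock:
    """Toy affine block: hand out the next unassigned host address in the CIDR."""
    def __init__(self, cidr, already_assigned):
        self.network = ipaddress.ip_network(cidr)
        self.assigned = {ipaddress.ip_address(a) for a in already_assigned}

    def assign(self):
        for addr in self.network.hosts():
            if addr not in self.assigned:
                self.assigned.add(addr)
                return addr
        raise RuntimeError("block exhausted; real IPAM would claim another block")

block = ToyBlock("192.168.1.128/26", ["192.168.1.129", "192.168.1.130", "192.168.1.131"])
print(block.assign())  # 192.168.1.132, the address claimed for the pod above
```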
Jan 14 06:38:15.953988 containerd[1642]: 2026-01-14 06:38:15.901 [INFO][4769] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.132/26] IPv6=[] ContainerID="c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" HandleID="k8s-pod-network.c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" Workload="srv--2u6n8.gb1.brightbox.com-k8s-calico--kube--controllers--745df5bdfc--85fpc-eth0" Jan 14 06:38:15.954728 containerd[1642]: 2026-01-14 06:38:15.905 [INFO][4745] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" Namespace="calico-system" Pod="calico-kube-controllers-745df5bdfc-85fpc" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--kube--controllers--745df5bdfc--85fpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2u6n8.gb1.brightbox.com-k8s-calico--kube--controllers--745df5bdfc--85fpc-eth0", GenerateName:"calico-kube-controllers-745df5bdfc-", Namespace:"calico-system", SelfLink:"", UID:"fcd4f250-b1e6-467c-90cd-24e53dcbe8e8", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 37, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"745df5bdfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2u6n8.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-745df5bdfc-85fpc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali23d1669f6c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:38:15.954872 containerd[1642]: 2026-01-14 06:38:15.905 [INFO][4745] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.132/32] ContainerID="c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" Namespace="calico-system" Pod="calico-kube-controllers-745df5bdfc-85fpc" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--kube--controllers--745df5bdfc--85fpc-eth0" Jan 14 06:38:15.954872 containerd[1642]: 2026-01-14 06:38:15.905 [INFO][4745] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali23d1669f6c2 ContainerID="c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" Namespace="calico-system" Pod="calico-kube-controllers-745df5bdfc-85fpc" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--kube--controllers--745df5bdfc--85fpc-eth0" Jan 14 06:38:15.954872 containerd[1642]: 2026-01-14 06:38:15.918 [INFO][4745] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" Namespace="calico-system" Pod="calico-kube-controllers-745df5bdfc-85fpc" 
WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--kube--controllers--745df5bdfc--85fpc-eth0" Jan 14 06:38:15.955046 containerd[1642]: 2026-01-14 06:38:15.922 [INFO][4745] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" Namespace="calico-system" Pod="calico-kube-controllers-745df5bdfc-85fpc" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--kube--controllers--745df5bdfc--85fpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2u6n8.gb1.brightbox.com-k8s-calico--kube--controllers--745df5bdfc--85fpc-eth0", GenerateName:"calico-kube-controllers-745df5bdfc-", Namespace:"calico-system", SelfLink:"", UID:"fcd4f250-b1e6-467c-90cd-24e53dcbe8e8", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 37, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"745df5bdfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2u6n8.gb1.brightbox.com", ContainerID:"c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b", Pod:"calico-kube-controllers-745df5bdfc-85fpc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.1.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali23d1669f6c2", MAC:"06:ae:6c:5a:81:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:38:15.955161 containerd[1642]: 2026-01-14 06:38:15.940 [INFO][4745] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" Namespace="calico-system" Pod="calico-kube-controllers-745df5bdfc-85fpc" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--kube--controllers--745df5bdfc--85fpc-eth0" Jan 14 06:38:16.013000 audit[4794]: NETFILTER_CFG table=filter:131 family=2 entries=44 op=nft_register_chain pid=4794 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:38:16.023986 kernel: kauditd_printk_skb: 8 callbacks suppressed Jan 14 06:38:16.024070 kernel: audit: type=1325 audit(1768372696.013:686): table=filter:131 family=2 entries=44 op=nft_register_chain pid=4794 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:38:16.024115 kernel: audit: type=1300 audit(1768372696.013:686): arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7ffd996527f0 a2=0 a3=7ffd996527dc items=0 ppid=4439 pid=4794 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.013000 audit[4794]: SYSCALL arch=c000003e syscall=46 success=yes exit=21952 a0=3 a1=7ffd996527f0 a2=0 a3=7ffd996527dc items=0 ppid=4439 pid=4794 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.013000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:38:16.036113 kernel: audit: type=1327 audit(1768372696.013:686): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:38:16.045180 containerd[1642]: time="2026-01-14T06:38:16.044778252Z" level=info msg="connecting to shim c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b" address="unix:///run/containerd/s/b2fb170798733b01ec1271c5217cf3b8f2c5c405a01bcdfcc1767c89c0a5bedc" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:38:16.067311 systemd-networkd[1552]: cali18473f469e1: Link UP Jan 14 06:38:16.072181 systemd-networkd[1552]: cali18473f469e1: Gained carrier Jan 14 06:38:16.116897 systemd[1]: Started cri-containerd-c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b.scope - libcontainer container c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b. Jan 14 06:38:16.122039 containerd[1642]: 2026-01-14 06:38:15.773 [INFO][4747] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--l5sdz-eth0 coredns-668d6bf9bc- kube-system f55e8add-9675-49f7-8240-772692184a74 845 0 2026-01-14 06:37:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-2u6n8.gb1.brightbox.com coredns-668d6bf9bc-l5sdz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali18473f469e1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-l5sdz" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--l5sdz-" Jan 14 06:38:16.122039 containerd[1642]: 2026-01-14 06:38:15.773 [INFO][4747] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-l5sdz" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--l5sdz-eth0" Jan 14 06:38:16.122039 containerd[1642]: 2026-01-14 06:38:15.845 [INFO][4774] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" HandleID="k8s-pod-network.e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" Workload="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--l5sdz-eth0" Jan 14 06:38:16.122362 containerd[1642]: 2026-01-14 06:38:15.845 [INFO][4774] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" HandleID="k8s-pod-network.e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" Workload="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--l5sdz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd5a0), Attrs:map[string]string{"namespace":"kube-system", 
"node":"srv-2u6n8.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-l5sdz", "timestamp":"2026-01-14 06:38:15.84504488 +0000 UTC"}, Hostname:"srv-2u6n8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:38:16.122362 containerd[1642]: 2026-01-14 06:38:15.845 [INFO][4774] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:38:16.122362 containerd[1642]: 2026-01-14 06:38:15.901 [INFO][4774] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 06:38:16.122362 containerd[1642]: 2026-01-14 06:38:15.901 [INFO][4774] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-2u6n8.gb1.brightbox.com' Jan 14 06:38:16.122362 containerd[1642]: 2026-01-14 06:38:15.954 [INFO][4774] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:16.122362 containerd[1642]: 2026-01-14 06:38:15.966 [INFO][4774] ipam/ipam.go 394: Looking up existing affinities for host host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:16.122362 containerd[1642]: 2026-01-14 06:38:15.981 [INFO][4774] ipam/ipam.go 511: Trying affinity for 192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:16.122362 containerd[1642]: 2026-01-14 06:38:15.990 [INFO][4774] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:16.122362 containerd[1642]: 2026-01-14 06:38:16.000 [INFO][4774] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:16.125245 containerd[1642]: 2026-01-14 06:38:16.000 [INFO][4774] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.128/26 handle="k8s-pod-network.e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:16.125245 containerd[1642]: 2026-01-14 06:38:16.005 [INFO][4774] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2 Jan 14 06:38:16.125245 containerd[1642]: 2026-01-14 06:38:16.022 [INFO][4774] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.128/26 handle="k8s-pod-network.e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:16.125245 containerd[1642]: 2026-01-14 06:38:16.045 [INFO][4774] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.1.133/26] block=192.168.1.128/26 handle="k8s-pod-network.e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:16.125245 containerd[1642]: 2026-01-14 06:38:16.047 [INFO][4774] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.133/26] handle="k8s-pod-network.e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:16.125245 containerd[1642]: 2026-01-14 06:38:16.047 [INFO][4774] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 06:38:16.125245 containerd[1642]: 2026-01-14 06:38:16.047 [INFO][4774] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.133/26] IPv6=[] ContainerID="e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" HandleID="k8s-pod-network.e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" Workload="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--l5sdz-eth0" Jan 14 06:38:16.128013 containerd[1642]: 2026-01-14 06:38:16.055 [INFO][4747] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-l5sdz" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--l5sdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--l5sdz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f55e8add-9675-49f7-8240-772692184a74", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 37, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2u6n8.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-l5sdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18473f469e1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:38:16.128013 containerd[1642]: 2026-01-14 06:38:16.056 [INFO][4747] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.133/32] ContainerID="e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-l5sdz" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--l5sdz-eth0" Jan 14 06:38:16.128013 containerd[1642]: 2026-01-14 06:38:16.056 [INFO][4747] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18473f469e1 ContainerID="e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-l5sdz" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--l5sdz-eth0" Jan 14 06:38:16.128013 containerd[1642]: 2026-01-14 06:38:16.075 [INFO][4747] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-l5sdz" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--l5sdz-eth0" Jan 14 06:38:16.128013 containerd[1642]: 2026-01-14 06:38:16.076 [INFO][4747] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-l5sdz" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--l5sdz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--l5sdz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f55e8add-9675-49f7-8240-772692184a74", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 37, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2u6n8.gb1.brightbox.com", ContainerID:"e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2", Pod:"coredns-668d6bf9bc-l5sdz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18473f469e1", MAC:"52:4b:a1:9a:97:73", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:38:16.128013 containerd[1642]: 2026-01-14 06:38:16.113 [INFO][4747] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-l5sdz" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--l5sdz-eth0" Jan 14 06:38:16.182913 kernel: audit: type=1334 audit(1768372696.178:687): prog-id=225 op=LOAD Jan 14 06:38:16.183064 kernel: audit: type=1334 audit(1768372696.179:688): prog-id=226 op=LOAD Jan 14 06:38:16.178000 audit: BPF prog-id=225 op=LOAD Jan 14 06:38:16.179000 audit: BPF prog-id=226 op=LOAD Jan 14 06:38:16.179000 audit[4816]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4805 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.190307 kernel: audit: type=1300 audit(1768372696.179:688): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=4805 pid=4816 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.191561 containerd[1642]: time="2026-01-14T06:38:16.190680155Z" level=info msg="connecting to shim e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2" address="unix:///run/containerd/s/7dd77d75a3a400473ca36000cf8cfb3174167a06e600676fa00be405cf8bfe8a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:38:16.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339623035626332613930653461303632353236393664383865333236 Jan 14 06:38:16.203349 kernel: audit: type=1327 audit(1768372696.179:688): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339623035626332613930653461303632353236393664383865333236 Jan 14 06:38:16.179000 audit: BPF prog-id=226 op=UNLOAD Jan 14 06:38:16.205303 kernel: audit: type=1334 audit(1768372696.179:689): prog-id=226 op=UNLOAD Jan 14 06:38:16.179000 audit[4816]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4805 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.213405 kernel: audit: type=1300 audit(1768372696.179:689): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4805 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339623035626332613930653461303632353236393664383865333236 Jan 14 06:38:16.221319 kernel: audit: type=1327 audit(1768372696.179:689): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339623035626332613930653461303632353236393664383865333236 Jan 14 06:38:16.180000 audit: BPF prog-id=227 op=LOAD Jan 14 06:38:16.180000 audit[4816]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=4805 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339623035626332613930653461303632353236393664383865333236 Jan 14 06:38:16.180000 audit: BPF prog-id=228 op=LOAD Jan 14 06:38:16.180000 audit[4816]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=4805 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339623035626332613930653461303632353236393664383865333236 Jan 14 06:38:16.180000 audit: BPF prog-id=228 op=UNLOAD Jan 14 06:38:16.180000 audit[4816]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4805 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339623035626332613930653461303632353236393664383865333236 Jan 14 06:38:16.180000 audit: BPF prog-id=227 op=UNLOAD Jan 14 06:38:16.180000 audit[4816]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4805 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339623035626332613930653461303632353236393664383865333236 Jan 14 06:38:16.181000 audit: BPF prog-id=229 op=LOAD Jan 14 06:38:16.181000 audit[4816]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=4805 pid=4816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339623035626332613930653461303632353236393664383865333236 Jan 14 06:38:16.209000 audit[4865]: NETFILTER_CFG table=filter:132 family=2 entries=54 op=nft_register_chain pid=4865 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:38:16.209000 audit[4865]: SYSCALL arch=c000003e syscall=46 success=yes exit=26116 a0=3 a1=7ffc936fbd50 a2=0 a3=7ffc936fbd3c items=0 ppid=4439 pid=4865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.209000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:38:16.256576 systemd[1]: Started cri-containerd-e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2.scope - libcontainer container e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2. 
Jan 14 06:38:16.285000 audit: BPF prog-id=230 op=LOAD Jan 14 06:38:16.288000 audit: BPF prog-id=231 op=LOAD Jan 14 06:38:16.288000 audit[4868]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4855 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535323463303865663865633765616363373664616532616637633438 Jan 14 06:38:16.288000 audit: BPF prog-id=231 op=UNLOAD Jan 14 06:38:16.288000 audit[4868]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4855 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.288000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535323463303865663865633765616363373664616532616637633438 Jan 14 06:38:16.289000 audit: BPF prog-id=232 op=LOAD Jan 14 06:38:16.289000 audit[4868]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4855 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.289000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535323463303865663865633765616363373664616532616637633438 Jan 14 06:38:16.290000 audit: BPF prog-id=233 op=LOAD Jan 14 06:38:16.290000 audit[4868]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4855 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.290000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535323463303865663865633765616363373664616532616637633438 Jan 14 06:38:16.290000 audit: BPF prog-id=233 op=UNLOAD Jan 14 06:38:16.290000 audit[4868]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4855 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.290000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535323463303865663865633765616363373664616532616637633438 Jan 14 06:38:16.290000 audit: BPF prog-id=232 op=UNLOAD Jan 14 06:38:16.290000 audit[4868]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4855 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.290000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535323463303865663865633765616363373664616532616637633438 Jan 14 06:38:16.290000 audit: BPF prog-id=234 op=LOAD Jan 14 06:38:16.290000 audit[4868]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4855 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.290000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6535323463303865663865633765616363373664616532616637633438 Jan 14 06:38:16.297130 containerd[1642]: time="2026-01-14T06:38:16.297059795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-745df5bdfc-85fpc,Uid:fcd4f250-b1e6-467c-90cd-24e53dcbe8e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"c9b05bc2a90e4a06252696d88e3261f018296af25d6a52ff271425d83bb3d95b\"" Jan 14 06:38:16.302652 containerd[1642]: time="2026-01-14T06:38:16.302613947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 06:38:16.356060 containerd[1642]: time="2026-01-14T06:38:16.355904014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l5sdz,Uid:f55e8add-9675-49f7-8240-772692184a74,Namespace:kube-system,Attempt:0,} returns sandbox id \"e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2\"" Jan 14 06:38:16.362308 containerd[1642]: time="2026-01-14T06:38:16.362185359Z" level=info msg="CreateContainer within sandbox \"e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 06:38:16.376147 containerd[1642]: time="2026-01-14T06:38:16.376099459Z" level=info msg="Container bb9fd43df934ab2e4b0ced51ddefe3115067e31ce3c2a004fba7c1b681f430f7: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:38:16.383539 containerd[1642]: time="2026-01-14T06:38:16.383498101Z" level=info msg="CreateContainer within sandbox \"e524c08ef8ec7eacc76dae2af7c48569da057821776805b30dd13c24f40a68c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb9fd43df934ab2e4b0ced51ddefe3115067e31ce3c2a004fba7c1b681f430f7\"" Jan 14 06:38:16.388940 containerd[1642]: time="2026-01-14T06:38:16.388802352Z" level=info msg="StartContainer for \"bb9fd43df934ab2e4b0ced51ddefe3115067e31ce3c2a004fba7c1b681f430f7\"" Jan 14 06:38:16.392209 containerd[1642]: time="2026-01-14T06:38:16.392108758Z" level=info msg="connecting to shim bb9fd43df934ab2e4b0ced51ddefe3115067e31ce3c2a004fba7c1b681f430f7" address="unix:///run/containerd/s/7dd77d75a3a400473ca36000cf8cfb3174167a06e600676fa00be405cf8bfe8a" protocol=ttrpc version=3 Jan 14 06:38:16.424665 systemd[1]: Started cri-containerd-bb9fd43df934ab2e4b0ced51ddefe3115067e31ce3c2a004fba7c1b681f430f7.scope - libcontainer container 
bb9fd43df934ab2e4b0ced51ddefe3115067e31ce3c2a004fba7c1b681f430f7. Jan 14 06:38:16.450000 audit: BPF prog-id=235 op=LOAD Jan 14 06:38:16.450000 audit: BPF prog-id=236 op=LOAD Jan 14 06:38:16.450000 audit[4902]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4855 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262396664343364663933346162326534623063656435316464656665 Jan 14 06:38:16.450000 audit: BPF prog-id=236 op=UNLOAD Jan 14 06:38:16.450000 audit[4902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4855 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262396664343364663933346162326534623063656435316464656665 Jan 14 06:38:16.452000 audit: BPF prog-id=237 op=LOAD Jan 14 06:38:16.452000 audit[4902]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4855 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262396664343364663933346162326534623063656435316464656665 Jan 14 06:38:16.452000 audit: BPF prog-id=238 op=LOAD Jan 14 06:38:16.452000 audit[4902]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4855 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262396664343364663933346162326534623063656435316464656665 Jan 14 06:38:16.452000 audit: BPF prog-id=238 op=UNLOAD Jan 14 06:38:16.452000 audit[4902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4855 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262396664343364663933346162326534623063656435316464656665 Jan 14 06:38:16.452000 audit: BPF 
prog-id=237 op=UNLOAD Jan 14 06:38:16.452000 audit[4902]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4855 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262396664343364663933346162326534623063656435316464656665 Jan 14 06:38:16.452000 audit: BPF prog-id=239 op=LOAD Jan 14 06:38:16.452000 audit[4902]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4855 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:16.452000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262396664343364663933346162326534623063656435316464656665 Jan 14 06:38:16.493071 containerd[1642]: time="2026-01-14T06:38:16.492959758Z" level=info msg="StartContainer for \"bb9fd43df934ab2e4b0ced51ddefe3115067e31ce3c2a004fba7c1b681f430f7\" returns successfully" Jan 14 06:38:16.626319 containerd[1642]: time="2026-01-14T06:38:16.626218093Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:16.628701 containerd[1642]: time="2026-01-14T06:38:16.628529992Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 06:38:16.629045 containerd[1642]: time="2026-01-14T06:38:16.628571253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:16.629681 kubelet[2966]: E0114 06:38:16.629452 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:38:16.629681 kubelet[2966]: E0114 06:38:16.629532 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:38:16.631845 kubelet[2966]: E0114 06:38:16.631526 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79tgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-745df5bdfc-85fpc_calico-system(fcd4f250-b1e6-467c-90cd-24e53dcbe8e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:16.633151 kubelet[2966]: E0114 06:38:16.633108 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-745df5bdfc-85fpc" podUID="fcd4f250-b1e6-467c-90cd-24e53dcbe8e8" Jan 14 06:38:17.142171 kubelet[2966]: E0114 06:38:17.141991 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-745df5bdfc-85fpc" podUID="fcd4f250-b1e6-467c-90cd-24e53dcbe8e8" Jan 14 06:38:17.196882 kubelet[2966]: I0114 06:38:17.196782 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-l5sdz" podStartSLOduration=63.193904963 podStartE2EDuration="1m3.193904963s" podCreationTimestamp="2026-01-14 06:37:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 06:38:17.193095858 +0000 UTC m=+67.798134981" watchObservedRunningTime="2026-01-14 06:38:17.193904963 +0000 UTC m=+67.798944059" Jan 14 06:38:17.229000 audit[4936]: NETFILTER_CFG table=filter:133 family=2 entries=17 op=nft_register_rule pid=4936 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:17.229000 audit[4936]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff9073bf80 a2=0 a3=7fff9073bf6c items=0 ppid=3073 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:17.229000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:17.239000 audit[4936]: NETFILTER_CFG table=nat:134 family=2 entries=35 op=nft_register_chain pid=4936 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:17.239000 audit[4936]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff9073bf80 a2=0 a3=7fff9073bf6c items=0 ppid=3073 pid=4936 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:17.239000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:17.454500 systemd-networkd[1552]: cali23d1669f6c2: Gained IPv6LL Jan 14 06:38:17.670935 containerd[1642]: time="2026-01-14T06:38:17.670581921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qw9xh,Uid:856b7d39-73f5-4a90-838f-5cffdb6afeaf,Namespace:kube-system,Attempt:0,}" Jan 14 06:38:17.865217 systemd-networkd[1552]: cali4becc73eada: Link UP Jan 14 06:38:17.866201 systemd-networkd[1552]: cali4becc73eada: Gained carrier Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.747 [INFO][4938] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qw9xh-eth0 coredns-668d6bf9bc- kube-system 856b7d39-73f5-4a90-838f-5cffdb6afeaf 855 0 2026-01-14 06:37:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-2u6n8.gb1.brightbox.com coredns-668d6bf9bc-qw9xh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4becc73eada [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" Namespace="kube-system" Pod="coredns-668d6bf9bc-qw9xh" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qw9xh-" Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.747 [INFO][4938] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" Namespace="kube-system" Pod="coredns-668d6bf9bc-qw9xh" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qw9xh-eth0" Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.795 [INFO][4949] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" HandleID="k8s-pod-network.d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" Workload="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qw9xh-eth0" Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.795 [INFO][4949] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" HandleID="k8s-pod-network.d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" Workload="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qw9xh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-2u6n8.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-qw9xh", "timestamp":"2026-01-14 06:38:17.795015069 +0000 UTC"}, Hostname:"srv-2u6n8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.795 [INFO][4949] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.795 [INFO][4949] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.795 [INFO][4949] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-2u6n8.gb1.brightbox.com' Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.808 [INFO][4949] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.816 [INFO][4949] ipam/ipam.go 394: Looking up existing affinities for host host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.824 [INFO][4949] ipam/ipam.go 511: Trying affinity for 192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.827 [INFO][4949] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.830 [INFO][4949] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.830 [INFO][4949] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.128/26 handle="k8s-pod-network.d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.833 [INFO][4949] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5 Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.843 [INFO][4949] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.128/26 handle="k8s-pod-network.d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.854 [INFO][4949] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.1.134/26] block=192.168.1.128/26 handle="k8s-pod-network.d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.854 [INFO][4949] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.134/26] handle="k8s-pod-network.d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.854 [INFO][4949] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 06:38:17.892327 containerd[1642]: 2026-01-14 06:38:17.854 [INFO][4949] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.134/26] IPv6=[] ContainerID="d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" HandleID="k8s-pod-network.d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" Workload="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qw9xh-eth0" Jan 14 06:38:17.894031 containerd[1642]: 2026-01-14 06:38:17.859 [INFO][4938] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" Namespace="kube-system" Pod="coredns-668d6bf9bc-qw9xh" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qw9xh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qw9xh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"856b7d39-73f5-4a90-838f-5cffdb6afeaf", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 37, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2u6n8.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-qw9xh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4becc73eada", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:38:17.894031 containerd[1642]: 2026-01-14 06:38:17.859 [INFO][4938] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.134/32] ContainerID="d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" Namespace="kube-system" Pod="coredns-668d6bf9bc-qw9xh" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qw9xh-eth0" Jan 14 06:38:17.894031 containerd[1642]: 2026-01-14 06:38:17.860 [INFO][4938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4becc73eada ContainerID="d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" Namespace="kube-system" Pod="coredns-668d6bf9bc-qw9xh" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qw9xh-eth0" Jan 14 06:38:17.894031 containerd[1642]: 2026-01-14 06:38:17.866 [INFO][4938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-qw9xh" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qw9xh-eth0" Jan 14 06:38:17.894031 containerd[1642]: 2026-01-14 06:38:17.867 [INFO][4938] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" Namespace="kube-system" Pod="coredns-668d6bf9bc-qw9xh" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qw9xh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qw9xh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"856b7d39-73f5-4a90-838f-5cffdb6afeaf", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 37, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2u6n8.gb1.brightbox.com", ContainerID:"d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5", Pod:"coredns-668d6bf9bc-qw9xh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.1.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4becc73eada", MAC:"ce:1d:2e:3c:14:1b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:38:17.894031 containerd[1642]: 2026-01-14 06:38:17.885 [INFO][4938] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" Namespace="kube-system" Pod="coredns-668d6bf9bc-qw9xh" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qw9xh-eth0" Jan 14 06:38:17.903169 systemd-networkd[1552]: cali18473f469e1: Gained IPv6LL Jan 14 06:38:17.930994 containerd[1642]: time="2026-01-14T06:38:17.930930488Z" level=info msg="connecting to shim d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5" address="unix:///run/containerd/s/fe0926810a7f28a8702e145d18c4c92140160dd0a0d9bfabca1c2eb3be49e814" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:38:17.956000 audit[4983]: NETFILTER_CFG table=filter:135 family=2 entries=54 op=nft_register_chain pid=4983 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:38:17.956000 audit[4983]: SYSCALL arch=c000003e syscall=46 success=yes exit=25572 a0=3 a1=7ffe357121c0 a2=0 a3=7ffe357121ac items=0 ppid=4439 pid=4983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:17.956000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:38:17.981540 systemd[1]: Started cri-containerd-d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5.scope - libcontainer container d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5. Jan 14 06:38:18.000000 audit: BPF prog-id=240 op=LOAD Jan 14 06:38:18.001000 audit: BPF prog-id=241 op=LOAD Jan 14 06:38:18.001000 audit[4986]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=4973 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:18.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430323566663236346562366564323833613935646135633539346333 Jan 14 06:38:18.001000 audit: BPF prog-id=241 op=UNLOAD Jan 14 06:38:18.001000 audit[4986]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4973 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:18.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430323566663236346562366564323833613935646135633539346333 Jan 14 06:38:18.001000 audit: BPF prog-id=242 op=LOAD Jan 14 06:38:18.001000 audit[4986]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=4973 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:18.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430323566663236346562366564323833613935646135633539346333 Jan 14 06:38:18.001000 audit: BPF prog-id=243 op=LOAD Jan 14 06:38:18.001000 audit[4986]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=4973 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:18.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430323566663236346562366564323833613935646135633539346333 Jan 14 06:38:18.001000 audit: BPF prog-id=243 op=UNLOAD Jan 14 06:38:18.001000 audit[4986]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4973 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:18.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430323566663236346562366564323833613935646135633539346333 Jan 14 06:38:18.001000 audit: BPF prog-id=242 op=UNLOAD Jan 14 06:38:18.001000 audit[4986]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4973 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:18.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430323566663236346562366564323833613935646135633539346333 Jan 14 06:38:18.002000 audit: BPF prog-id=244 op=LOAD Jan 14 06:38:18.002000 audit[4986]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=4973 pid=4986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:18.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430323566663236346562366564323833613935646135633539346333 Jan 14 06:38:18.064120 containerd[1642]: time="2026-01-14T06:38:18.064033827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qw9xh,Uid:856b7d39-73f5-4a90-838f-5cffdb6afeaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5\"" Jan 14 06:38:18.073224 containerd[1642]: time="2026-01-14T06:38:18.072319696Z" level=info msg="CreateContainer within sandbox \"d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 06:38:18.088464 containerd[1642]: time="2026-01-14T06:38:18.088411009Z" level=info msg="Container 06582929111cfc284fe8abbb302e555e25bac6f8c1492865e54c9ca30faf1ba2: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:38:18.096582 containerd[1642]: time="2026-01-14T06:38:18.096502428Z" level=info msg="CreateContainer within sandbox \"d025ff264eb6ed283a95da5c594c380705d1356078799c4620fece6486a4ccc5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"06582929111cfc284fe8abbb302e555e25bac6f8c1492865e54c9ca30faf1ba2\"" Jan 14 06:38:18.097904 containerd[1642]: time="2026-01-14T06:38:18.097555389Z" level=info msg="StartContainer for \"06582929111cfc284fe8abbb302e555e25bac6f8c1492865e54c9ca30faf1ba2\"" Jan 14 06:38:18.099215 containerd[1642]: time="2026-01-14T06:38:18.099070387Z" level=info msg="connecting to shim 06582929111cfc284fe8abbb302e555e25bac6f8c1492865e54c9ca30faf1ba2" address="unix:///run/containerd/s/fe0926810a7f28a8702e145d18c4c92140160dd0a0d9bfabca1c2eb3be49e814" protocol=ttrpc version=3 Jan 14 06:38:18.136531 systemd[1]: Started cri-containerd-06582929111cfc284fe8abbb302e555e25bac6f8c1492865e54c9ca30faf1ba2.scope - libcontainer 
container 06582929111cfc284fe8abbb302e555e25bac6f8c1492865e54c9ca30faf1ba2. Jan 14 06:38:18.156535 kubelet[2966]: E0114 06:38:18.156446 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-745df5bdfc-85fpc" podUID="fcd4f250-b1e6-467c-90cd-24e53dcbe8e8" Jan 14 06:38:18.174000 audit: BPF prog-id=245 op=LOAD Jan 14 06:38:18.177000 audit: BPF prog-id=246 op=LOAD Jan 14 06:38:18.177000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=4973 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:18.177000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036353832393239313131636663323834666538616262623330326535 Jan 14 06:38:18.178000 audit: BPF prog-id=246 op=UNLOAD Jan 14 06:38:18.178000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4973 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:18.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036353832393239313131636663323834666538616262623330326535 Jan 14 06:38:18.178000 audit: BPF prog-id=247 op=LOAD Jan 14 06:38:18.178000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=4973 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:18.178000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036353832393239313131636663323834666538616262623330326535 Jan 14 06:38:18.180000 audit: BPF prog-id=248 op=LOAD Jan 14 06:38:18.180000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=4973 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:18.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036353832393239313131636663323834666538616262623330326535 Jan 14 06:38:18.182000 audit: BPF prog-id=248 op=UNLOAD Jan 14 06:38:18.182000 
audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4973 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:18.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036353832393239313131636663323834666538616262623330326535 Jan 14 06:38:18.182000 audit: BPF prog-id=247 op=UNLOAD Jan 14 06:38:18.182000 audit[5012]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4973 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:18.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036353832393239313131636663323834666538616262623330326535 Jan 14 06:38:18.182000 audit: BPF prog-id=249 op=LOAD Jan 14 06:38:18.182000 audit[5012]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=4973 pid=5012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:18.182000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036353832393239313131636663323834666538616262623330326535 Jan 14 06:38:18.231052 containerd[1642]: time="2026-01-14T06:38:18.230997954Z" level=info msg="StartContainer for \"06582929111cfc284fe8abbb302e555e25bac6f8c1492865e54c9ca30faf1ba2\" returns successfully" Jan 14 06:38:18.665193 containerd[1642]: time="2026-01-14T06:38:18.665097084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bddfbd4b9-rswq7,Uid:a8d8745b-48bb-4a89-9b0b-07086983dbe4,Namespace:calico-apiserver,Attempt:0,}" Jan 14 06:38:18.666180 containerd[1642]: time="2026-01-14T06:38:18.666106873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8q82z,Uid:a91d79c6-e300-47ea-a44e-e654a57c8864,Namespace:calico-system,Attempt:0,}" Jan 14 06:38:18.990522 systemd-networkd[1552]: cali4becc73eada: Gained IPv6LL Jan 14 06:38:19.038214 systemd-networkd[1552]: cali63301bdaf41: Link UP Jan 14 06:38:19.038546 systemd-networkd[1552]: cali63301bdaf41: Gained carrier Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:18.849 [INFO][5044] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--rswq7-eth0 calico-apiserver-7bddfbd4b9- calico-apiserver a8d8745b-48bb-4a89-9b0b-07086983dbe4 857 0 2026-01-14 06:37:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bddfbd4b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-2u6n8.gb1.brightbox.com 
calico-apiserver-7bddfbd4b9-rswq7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali63301bdaf41 [] [] }} ContainerID="0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" Namespace="calico-apiserver" Pod="calico-apiserver-7bddfbd4b9-rswq7" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--rswq7-" Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:18.850 [INFO][5044] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" Namespace="calico-apiserver" Pod="calico-apiserver-7bddfbd4b9-rswq7" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--rswq7-eth0" Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:18.952 [INFO][5070] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" HandleID="k8s-pod-network.0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" Workload="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--rswq7-eth0" Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:18.952 [INFO][5070] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" HandleID="k8s-pod-network.0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" Workload="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--rswq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032cd60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-2u6n8.gb1.brightbox.com", "pod":"calico-apiserver-7bddfbd4b9-rswq7", "timestamp":"2026-01-14 06:38:18.9521666 +0000 UTC"}, Hostname:"srv-2u6n8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:18.952 [INFO][5070] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:18.952 [INFO][5070] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:18.952 [INFO][5070] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-2u6n8.gb1.brightbox.com' Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:18.965 [INFO][5070] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:18.973 [INFO][5070] ipam/ipam.go 394: Looking up existing affinities for host host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:18.985 [INFO][5070] ipam/ipam.go 511: Trying affinity for 192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:18.990 [INFO][5070] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:18.998 [INFO][5070] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:18.998 [INFO][5070] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.128/26 handle="k8s-pod-network.0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:19.002 [INFO][5070] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:19.013 [INFO][5070] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.128/26 handle="k8s-pod-network.0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:19.027 [INFO][5070] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.1.135/26] block=192.168.1.128/26 handle="k8s-pod-network.0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:19.027 [INFO][5070] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.135/26] handle="k8s-pod-network.0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:19.027 [INFO][5070] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 06:38:19.065740 containerd[1642]: 2026-01-14 06:38:19.027 [INFO][5070] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.135/26] IPv6=[] ContainerID="0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" HandleID="k8s-pod-network.0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" Workload="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--rswq7-eth0" Jan 14 06:38:19.073061 containerd[1642]: 2026-01-14 06:38:19.032 [INFO][5044] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" Namespace="calico-apiserver" Pod="calico-apiserver-7bddfbd4b9-rswq7" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--rswq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--rswq7-eth0", GenerateName:"calico-apiserver-7bddfbd4b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"a8d8745b-48bb-4a89-9b0b-07086983dbe4", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 37, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bddfbd4b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2u6n8.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-7bddfbd4b9-rswq7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63301bdaf41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:38:19.073061 containerd[1642]: 2026-01-14 06:38:19.032 [INFO][5044] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.135/32] ContainerID="0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" Namespace="calico-apiserver" Pod="calico-apiserver-7bddfbd4b9-rswq7" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--rswq7-eth0" Jan 14 06:38:19.073061 containerd[1642]: 2026-01-14 06:38:19.032 [INFO][5044] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali63301bdaf41 ContainerID="0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" Namespace="calico-apiserver" Pod="calico-apiserver-7bddfbd4b9-rswq7" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--rswq7-eth0" Jan 14 06:38:19.073061 containerd[1642]: 2026-01-14 06:38:19.037 [INFO][5044] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" Namespace="calico-apiserver" Pod="calico-apiserver-7bddfbd4b9-rswq7" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--rswq7-eth0" Jan 14 06:38:19.073061 containerd[1642]: 2026-01-14 06:38:19.040 
[INFO][5044] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" Namespace="calico-apiserver" Pod="calico-apiserver-7bddfbd4b9-rswq7" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--rswq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--rswq7-eth0", GenerateName:"calico-apiserver-7bddfbd4b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"a8d8745b-48bb-4a89-9b0b-07086983dbe4", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 37, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bddfbd4b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2u6n8.gb1.brightbox.com", ContainerID:"0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a", Pod:"calico-apiserver-7bddfbd4b9-rswq7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.1.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali63301bdaf41", MAC:"de:46:c4:f8:3d:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:38:19.073061 containerd[1642]: 2026-01-14 06:38:19.058 [INFO][5044] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" Namespace="calico-apiserver" Pod="calico-apiserver-7bddfbd4b9-rswq7" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-calico--apiserver--7bddfbd4b9--rswq7-eth0" Jan 14 06:38:19.126000 audit[5093]: NETFILTER_CFG table=filter:136 family=2 entries=53 op=nft_register_chain pid=5093 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:38:19.126000 audit[5093]: SYSCALL arch=c000003e syscall=46 success=yes exit=26624 a0=3 a1=7fff7e52c0e0 a2=0 a3=7fff7e52c0cc items=0 ppid=4439 pid=5093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.126000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:38:19.141175 containerd[1642]: time="2026-01-14T06:38:19.140930112Z" level=info msg="connecting to shim 0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a" address="unix:///run/containerd/s/4362e6b02a6a4046dd90630058ad639ad53e0c6332ba074be08ee7973479fa0d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:38:19.192026 systemd-networkd[1552]: cali7e04411a3fe: Link UP Jan 14 06:38:19.193516 systemd-networkd[1552]: cali7e04411a3fe: Gained carrier Jan 14 
06:38:19.224040 kubelet[2966]: I0114 06:38:19.223628 2966 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qw9xh" podStartSLOduration=65.223580384 podStartE2EDuration="1m5.223580384s" podCreationTimestamp="2026-01-14 06:37:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 06:38:19.219010577 +0000 UTC m=+69.824049700" watchObservedRunningTime="2026-01-14 06:38:19.223580384 +0000 UTC m=+69.828619472" Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:18.881 [INFO][5048] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--2u6n8.gb1.brightbox.com-k8s-csi--node--driver--8q82z-eth0 csi-node-driver- calico-system a91d79c6-e300-47ea-a44e-e654a57c8864 742 0 2026-01-14 06:37:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-2u6n8.gb1.brightbox.com csi-node-driver-8q82z eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7e04411a3fe [] [] }} ContainerID="f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" Namespace="calico-system" Pod="csi-node-driver-8q82z" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-csi--node--driver--8q82z-" Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:18.882 [INFO][5048] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" Namespace="calico-system" Pod="csi-node-driver-8q82z" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-csi--node--driver--8q82z-eth0" Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:18.965 [INFO][5075] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" HandleID="k8s-pod-network.f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" Workload="srv--2u6n8.gb1.brightbox.com-k8s-csi--node--driver--8q82z-eth0" Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:18.966 [INFO][5075] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" HandleID="k8s-pod-network.f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" Workload="srv--2u6n8.gb1.brightbox.com-k8s-csi--node--driver--8q82z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57c0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-2u6n8.gb1.brightbox.com", "pod":"csi-node-driver-8q82z", "timestamp":"2026-01-14 06:38:18.965925427 +0000 UTC"}, Hostname:"srv-2u6n8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:18.966 [INFO][5075] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:19.027 [INFO][5075] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:19.028 [INFO][5075] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-2u6n8.gb1.brightbox.com' Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:19.071 [INFO][5075] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:19.084 [INFO][5075] ipam/ipam.go 394: Looking up existing affinities for host host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:19.094 [INFO][5075] ipam/ipam.go 511: Trying affinity for 192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:19.103 [INFO][5075] ipam/ipam.go 158: Attempting to load block cidr=192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:19.111 [INFO][5075] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.1.128/26 host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:19.111 [INFO][5075] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.1.128/26 handle="k8s-pod-network.f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:19.115 [INFO][5075] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4 Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:19.132 [INFO][5075] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.1.128/26 handle="k8s-pod-network.f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:19.150 [INFO][5075] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.1.136/26] block=192.168.1.128/26 handle="k8s-pod-network.f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:19.150 [INFO][5075] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.1.136/26] handle="k8s-pod-network.f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" host="srv-2u6n8.gb1.brightbox.com" Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:19.150 [INFO][5075] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 06:38:19.233300 containerd[1642]: 2026-01-14 06:38:19.150 [INFO][5075] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.1.136/26] IPv6=[] ContainerID="f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" HandleID="k8s-pod-network.f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" Workload="srv--2u6n8.gb1.brightbox.com-k8s-csi--node--driver--8q82z-eth0" Jan 14 06:38:19.234261 containerd[1642]: 2026-01-14 06:38:19.176 [INFO][5048] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" Namespace="calico-system" Pod="csi-node-driver-8q82z" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-csi--node--driver--8q82z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2u6n8.gb1.brightbox.com-k8s-csi--node--driver--8q82z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a91d79c6-e300-47ea-a44e-e654a57c8864", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 37, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2u6n8.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-8q82z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7e04411a3fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:38:19.234261 containerd[1642]: 2026-01-14 06:38:19.177 [INFO][5048] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.1.136/32] ContainerID="f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" Namespace="calico-system" Pod="csi-node-driver-8q82z" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-csi--node--driver--8q82z-eth0" Jan 14 06:38:19.234261 containerd[1642]: 2026-01-14 06:38:19.178 [INFO][5048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e04411a3fe ContainerID="f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" Namespace="calico-system" Pod="csi-node-driver-8q82z" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-csi--node--driver--8q82z-eth0" Jan 14 06:38:19.234261 containerd[1642]: 2026-01-14 06:38:19.190 [INFO][5048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" Namespace="calico-system" Pod="csi-node-driver-8q82z" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-csi--node--driver--8q82z-eth0" Jan 14 06:38:19.234261 containerd[1642]: 2026-01-14 06:38:19.195 [INFO][5048] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" Namespace="calico-system" Pod="csi-node-driver-8q82z" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-csi--node--driver--8q82z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--2u6n8.gb1.brightbox.com-k8s-csi--node--driver--8q82z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a91d79c6-e300-47ea-a44e-e654a57c8864", ResourceVersion:"742", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 37, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-2u6n8.gb1.brightbox.com", ContainerID:"f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4", Pod:"csi-node-driver-8q82z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.1.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7e04411a3fe", MAC:"f6:91:cc:37:b0:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:38:19.234261 containerd[1642]: 2026-01-14 06:38:19.222 [INFO][5048] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" Namespace="calico-system" Pod="csi-node-driver-8q82z" WorkloadEndpoint="srv--2u6n8.gb1.brightbox.com-k8s-csi--node--driver--8q82z-eth0" Jan 14 06:38:19.241552 systemd[1]: Started cri-containerd-0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a.scope - libcontainer container 0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a. 
Jan 14 06:38:19.315000 audit[5135]: NETFILTER_CFG table=filter:137 family=2 entries=14 op=nft_register_rule pid=5135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:19.315000 audit[5135]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe200075d0 a2=0 a3=7ffe200075bc items=0 ppid=3073 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.315000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:19.320836 containerd[1642]: time="2026-01-14T06:38:19.320783070Z" level=info msg="connecting to shim f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4" address="unix:///run/containerd/s/7957eebb4cab24b83ecd4bc08881cb7d9ed60ec55b15c3cc941df5aaf43fd588" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:38:19.324000 audit[5135]: NETFILTER_CFG table=nat:138 family=2 entries=44 op=nft_register_rule pid=5135 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:19.324000 audit[5135]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe200075d0 a2=0 a3=7ffe200075bc items=0 ppid=3073 pid=5135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.324000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:19.359000 audit: BPF prog-id=250 op=LOAD Jan 14 06:38:19.361000 audit: BPF prog-id=251 op=LOAD Jan 14 06:38:19.361000 audit[5114]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5103 pid=5114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066613231646463363765666338613864656432383337646266373662 Jan 14 06:38:19.361000 audit: BPF prog-id=251 op=UNLOAD Jan 14 06:38:19.361000 audit[5114]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5103 pid=5114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066613231646463363765666338613864656432383337646266373662 Jan 14 06:38:19.361000 audit: BPF prog-id=252 op=LOAD Jan 14 06:38:19.361000 audit[5114]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5103 pid=5114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.361000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066613231646463363765666338613864656432383337646266373662 Jan 14 06:38:19.361000 audit: BPF prog-id=253 op=LOAD Jan 14 06:38:19.361000 audit[5114]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5103 pid=5114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066613231646463363765666338613864656432383337646266373662 Jan 14 06:38:19.361000 audit: BPF prog-id=253 op=UNLOAD Jan 14 06:38:19.361000 audit[5114]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5103 pid=5114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066613231646463363765666338613864656432383337646266373662 Jan 14 06:38:19.361000 audit: BPF prog-id=252 op=UNLOAD Jan 14 06:38:19.361000 audit[5114]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5103 pid=5114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066613231646463363765666338613864656432383337646266373662 Jan 14 06:38:19.361000 audit: BPF prog-id=254 op=LOAD Jan 14 06:38:19.361000 audit[5114]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5103 pid=5114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3066613231646463363765666338613864656432383337646266373662 Jan 14 06:38:19.380513 systemd[1]: Started cri-containerd-f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4.scope - libcontainer container f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4. 
Jan 14 06:38:19.397000 audit[5179]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=5179 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:19.397000 audit[5179]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd824f2fa0 a2=0 a3=7ffd824f2f8c items=0 ppid=3073 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.397000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:19.409000 audit[5184]: NETFILTER_CFG table=filter:140 family=2 entries=56 op=nft_register_chain pid=5184 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:38:19.409000 audit[5184]: SYSCALL arch=c000003e syscall=46 success=yes exit=25500 a0=3 a1=7fff22f2f910 a2=0 a3=7fff22f2f8fc items=0 ppid=4439 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.409000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:38:19.416000 audit[5179]: NETFILTER_CFG table=nat:141 family=2 entries=56 op=nft_register_chain pid=5179 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:19.416000 audit[5179]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffd824f2fa0 a2=0 a3=7ffd824f2f8c items=0 ppid=3073 pid=5179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.416000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:19.428000 audit: BPF prog-id=255 op=LOAD Jan 14 06:38:19.435000 audit: BPF prog-id=256 op=LOAD Jan 14 06:38:19.435000 audit[5162]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=5149 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.435000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630393430386465316165633334383135363532346135633031336563 Jan 14 06:38:19.436000 audit: BPF prog-id=256 op=UNLOAD Jan 14 06:38:19.436000 audit[5162]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5149 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.436000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630393430386465316165633334383135363532346135633031336563 Jan 14 06:38:19.439000 
audit: BPF prog-id=257 op=LOAD Jan 14 06:38:19.439000 audit[5162]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=5149 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.439000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630393430386465316165633334383135363532346135633031336563 Jan 14 06:38:19.439000 audit: BPF prog-id=258 op=LOAD Jan 14 06:38:19.439000 audit[5162]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=5149 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.439000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630393430386465316165633334383135363532346135633031336563 Jan 14 06:38:19.440000 audit: BPF prog-id=258 op=UNLOAD Jan 14 06:38:19.440000 audit[5162]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5149 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630393430386465316165633334383135363532346135633031336563 Jan 14 06:38:19.440000 audit: BPF prog-id=257 op=UNLOAD Jan 14 06:38:19.440000 audit[5162]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5149 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630393430386465316165633334383135363532346135633031336563 Jan 14 06:38:19.440000 audit: BPF prog-id=259 op=LOAD Jan 14 06:38:19.440000 audit[5162]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=5149 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:19.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630393430386465316165633334383135363532346135633031336563 Jan 14 06:38:19.475231 containerd[1642]: time="2026-01-14T06:38:19.475149471Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7bddfbd4b9-rswq7,Uid:a8d8745b-48bb-4a89-9b0b-07086983dbe4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0fa21ddc67efc8a8ded2837dbf76b7170a6f52fb0d658b235d555d469325c12a\"" Jan 14 06:38:19.478356 containerd[1642]: time="2026-01-14T06:38:19.478324736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:38:19.486232 containerd[1642]: time="2026-01-14T06:38:19.486164120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8q82z,Uid:a91d79c6-e300-47ea-a44e-e654a57c8864,Namespace:calico-system,Attempt:0,} returns sandbox id \"f09408de1aec348156524a5c013ec293dee953d0f8393a86ec3f40fabb2fccd4\"" Jan 14 06:38:19.818186 containerd[1642]: time="2026-01-14T06:38:19.804176532Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:19.819253 containerd[1642]: time="2026-01-14T06:38:19.819175028Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:38:19.819460 containerd[1642]: time="2026-01-14T06:38:19.819404762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:19.820302 kubelet[2966]: E0114 06:38:19.819897 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:38:19.820302 kubelet[2966]: E0114 06:38:19.819992 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:38:19.820929 kubelet[2966]: E0114 06:38:19.820403 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h74t2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bddfbd4b9-rswq7_calico-apiserver(a8d8745b-48bb-4a89-9b0b-07086983dbe4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:19.821177 containerd[1642]: time="2026-01-14T06:38:19.820677773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 06:38:19.821841 kubelet[2966]: E0114 06:38:19.821788 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-rswq7" podUID="a8d8745b-48bb-4a89-9b0b-07086983dbe4" Jan 14 06:38:20.178601 containerd[1642]: time="2026-01-14T06:38:20.178330342Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:20.181047 containerd[1642]: time="2026-01-14T06:38:20.180782642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 06:38:20.181334 containerd[1642]: time="2026-01-14T06:38:20.180895017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:20.184007 kubelet[2966]: E0114 06:38:20.181620 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:38:20.184007 kubelet[2966]: E0114 06:38:20.181694 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:38:20.184007 kubelet[2966]: E0114 06:38:20.181928 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhn5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8q82z_calico-system(a91d79c6-e300-47ea-a44e-e654a57c8864): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:20.185180 containerd[1642]: time="2026-01-14T06:38:20.185144295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 06:38:20.194415 kubelet[2966]: E0114 06:38:20.193697 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-rswq7" podUID="a8d8745b-48bb-4a89-9b0b-07086983dbe4" Jan 14 06:38:20.445000 audit[5204]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=5204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:20.445000 audit[5204]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe7b75c6d0 a2=0 a3=7ffe7b75c6bc items=0 ppid=3073 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:20.445000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:20.452000 audit[5204]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule 
pid=5204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:38:20.452000 audit[5204]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe7b75c6d0 a2=0 a3=7ffe7b75c6bc items=0 ppid=3073 pid=5204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:20.452000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:38:20.501823 containerd[1642]: time="2026-01-14T06:38:20.501713588Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:20.505233 containerd[1642]: time="2026-01-14T06:38:20.505074705Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 06:38:20.505509 containerd[1642]: time="2026-01-14T06:38:20.505256642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:20.507869 kubelet[2966]: E0114 06:38:20.505551 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:38:20.507869 kubelet[2966]: E0114 06:38:20.505645 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:38:20.507869 kubelet[2966]: E0114 06:38:20.505828 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhn5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8q82z_calico-system(a91d79c6-e300-47ea-a44e-e654a57c8864): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:20.507869 kubelet[2966]: E0114 06:38:20.507364 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8q82z" podUID="a91d79c6-e300-47ea-a44e-e654a57c8864" Jan 14 06:38:20.975407 systemd-networkd[1552]: cali63301bdaf41: Gained IPv6LL Jan 14 06:38:21.197479 kubelet[2966]: E0114 06:38:21.197409 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-rswq7" podUID="a8d8745b-48bb-4a89-9b0b-07086983dbe4" Jan 14 06:38:21.199823 kubelet[2966]: E0114 06:38:21.199610 2966 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8q82z" podUID="a91d79c6-e300-47ea-a44e-e654a57c8864" Jan 14 06:38:21.231416 systemd-networkd[1552]: cali7e04411a3fe: Gained IPv6LL Jan 14 06:38:21.668210 containerd[1642]: time="2026-01-14T06:38:21.667974067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 06:38:21.977060 containerd[1642]: time="2026-01-14T06:38:21.976806344Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:21.978694 containerd[1642]: time="2026-01-14T06:38:21.978544183Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 06:38:21.978694 containerd[1642]: time="2026-01-14T06:38:21.978617566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:21.979122 kubelet[2966]: E0114 06:38:21.979049 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:38:21.979842 kubelet[2966]: E0114 06:38:21.979143 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:38:21.979842 kubelet[2966]: E0114 06:38:21.979412 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27lgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kv7ql_calico-system(217872a4-2508-46c6-a68b-d9c0e654e8b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:21.980792 kubelet[2966]: E0114 06:38:21.980552 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kv7ql" podUID="217872a4-2508-46c6-a68b-d9c0e654e8b7" Jan 14 06:38:22.666932 containerd[1642]: time="2026-01-14T06:38:22.666801129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 
14 06:38:22.985132 containerd[1642]: time="2026-01-14T06:38:22.984865842Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:22.986639 containerd[1642]: time="2026-01-14T06:38:22.986589369Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 06:38:22.986762 containerd[1642]: time="2026-01-14T06:38:22.986707414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:22.987003 kubelet[2966]: E0114 06:38:22.986937 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:38:22.987465 kubelet[2966]: E0114 06:38:22.987032 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:38:22.987465 kubelet[2966]: E0114 06:38:22.987225 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:57f62300c7e6467f8d6b6e8d7514b501,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s8fl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7959c45994-p8pd7_calico-system(c4442e43-b3e6-4c81-9228-c5c0cde9a530): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:22.991436 containerd[1642]: time="2026-01-14T06:38:22.991313674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 06:38:23.294627 containerd[1642]: time="2026-01-14T06:38:23.294162678Z" level=info msg="fetch failed 
after status: 404 Not Found" host=ghcr.io Jan 14 06:38:23.296187 containerd[1642]: time="2026-01-14T06:38:23.296023461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 06:38:23.296187 containerd[1642]: time="2026-01-14T06:38:23.296059159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:23.296616 kubelet[2966]: E0114 06:38:23.296554 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:38:23.296727 kubelet[2966]: E0114 06:38:23.296638 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:38:23.296896 kubelet[2966]: E0114 06:38:23.296818 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8fl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7959c45994-p8pd7_calico-system(c4442e43-b3e6-4c81-9228-c5c0cde9a530): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:23.299111 kubelet[2966]: E0114 06:38:23.298013 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7959c45994-p8pd7" podUID="c4442e43-b3e6-4c81-9228-c5c0cde9a530" Jan 14 06:38:23.667172 containerd[1642]: time="2026-01-14T06:38:23.666688303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:38:23.975255 containerd[1642]: time="2026-01-14T06:38:23.974972793Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:23.976250 containerd[1642]: time="2026-01-14T06:38:23.976105337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:38:23.976250 containerd[1642]: time="2026-01-14T06:38:23.976161794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:23.976623 kubelet[2966]: E0114 06:38:23.976565 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:38:23.976778 kubelet[2966]: E0114 06:38:23.976665 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:38:23.977700 kubelet[2966]: E0114 06:38:23.976959 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd75s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bddfbd4b9-ps4c2_calico-apiserver(d6749f8c-3427-433c-a8c4-8f87f70b4d79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:23.978205 kubelet[2966]: E0114 06:38:23.978158 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-ps4c2" podUID="d6749f8c-3427-433c-a8c4-8f87f70b4d79" Jan 14 06:38:26.313380 kernel: kauditd_printk_skb: 183 callbacks suppressed Jan 14 06:38:26.313586 kernel: audit: type=1130 audit(1768372706.308:755): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.41.14:22-64.225.73.213:44126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:26.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.41.14:22-64.225.73.213:44126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:26.308956 systemd[1]: Started sshd@13-10.230.41.14:22-64.225.73.213:44126.service - OpenSSH per-connection server daemon (64.225.73.213:44126). 
Jan 14 06:38:26.460258 sshd[5211]: Invalid user postgres from 64.225.73.213 port 44126 Jan 14 06:38:26.487000 audit[5211]: USER_ERR pid=5211 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:38:26.488597 sshd[5211]: Connection closed by invalid user postgres 64.225.73.213 port 44126 [preauth] Jan 14 06:38:26.492231 systemd[1]: sshd@13-10.230.41.14:22-64.225.73.213:44126.service: Deactivated successfully. Jan 14 06:38:26.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.41.14:22-64.225.73.213:44126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:26.494802 kernel: audit: type=1109 audit(1768372706.487:756): pid=5211 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:38:26.494864 kernel: audit: type=1131 audit(1768372706.492:757): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.41.14:22-64.225.73.213:44126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:32.322279 systemd[1]: Started sshd@14-10.230.41.14:22-20.161.92.111:52336.service - OpenSSH per-connection server daemon (20.161.92.111:52336). Jan 14 06:38:32.336083 kernel: audit: type=1130 audit(1768372712.321:758): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.41.14:22-20.161.92.111:52336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:32.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.41.14:22-20.161.92.111:52336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:38:32.672811 containerd[1642]: time="2026-01-14T06:38:32.672140099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:38:32.926000 audit[5230]: USER_ACCT pid=5230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:32.932263 sshd[5230]: Accepted publickey for core from 20.161.92.111 port 52336 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:38:32.935595 kernel: audit: type=1101 audit(1768372712.926:759): pid=5230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:32.942000 audit[5230]: CRED_ACQ pid=5230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:32.946866 sshd-session[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:38:32.949735 kernel: audit: type=1103 audit(1768372712.942:760): pid=5230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:32.949971 kernel: audit: type=1006 audit(1768372712.942:761): pid=5230 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 06:38:32.959898 kernel: audit: type=1300 audit(1768372712.942:761): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd88158760 a2=3 a3=0 items=0 ppid=1 pid=5230 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:32.960019 kernel: audit: type=1327 audit(1768372712.942:761): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:38:32.942000 audit[5230]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd88158760 a2=3 a3=0 items=0 ppid=1 pid=5230 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:32.942000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:38:32.972254 systemd-logind[1614]: New session 13 of user core. Jan 14 06:38:32.981383 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 14 06:38:32.989000 audit[5230]: USER_START pid=5230 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:32.997299 kernel: audit: type=1105 audit(1768372712.989:762): pid=5230 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:32.999410 containerd[1642]: time="2026-01-14T06:38:32.999307101Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:32.999000 audit[5234]: CRED_ACQ pid=5234 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:33.007303 kernel: audit: type=1103 audit(1768372712.999:763): pid=5234 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:33.009150 containerd[1642]: time="2026-01-14T06:38:33.007874090Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:38:33.013702 kubelet[2966]: E0114 06:38:33.009692 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:38:33.016066 kubelet[2966]: E0114 06:38:33.014881 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:38:33.016066 kubelet[2966]: E0114 06:38:33.015813 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h74t2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bddfbd4b9-rswq7_calico-apiserver(a8d8745b-48bb-4a89-9b0b-07086983dbe4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:33.017628 kubelet[2966]: E0114 06:38:33.017516 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-rswq7" podUID="a8d8745b-48bb-4a89-9b0b-07086983dbe4" Jan 14 06:38:33.030207 containerd[1642]: time="2026-01-14T06:38:33.008098207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:33.030677 containerd[1642]: time="2026-01-14T06:38:33.016502768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 06:38:33.348298 containerd[1642]: time="2026-01-14T06:38:33.348203568Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:33.350141 containerd[1642]: time="2026-01-14T06:38:33.349931604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 
06:38:33.350141 containerd[1642]: time="2026-01-14T06:38:33.350092328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:33.350772 kubelet[2966]: E0114 06:38:33.350699 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:38:33.351126 kubelet[2966]: E0114 06:38:33.350992 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:38:33.351705 kubelet[2966]: E0114 06:38:33.351583 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79tgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-kube-controllers-745df5bdfc-85fpc_calico-system(fcd4f250-b1e6-467c-90cd-24e53dcbe8e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:33.353188 kubelet[2966]: E0114 06:38:33.353015 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-745df5bdfc-85fpc" podUID="fcd4f250-b1e6-467c-90cd-24e53dcbe8e8" Jan 14 06:38:33.693047 containerd[1642]: time="2026-01-14T06:38:33.691510050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 06:38:33.719935 kubelet[2966]: E0114 06:38:33.718703 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kv7ql" podUID="217872a4-2508-46c6-a68b-d9c0e654e8b7" Jan 14 06:38:33.933098 sshd[5234]: Connection closed by 20.161.92.111 port 52336 Jan 14 06:38:33.934828 sshd-session[5230]: pam_unix(sshd:session): session closed for user core Jan 14 06:38:33.951000 audit[5230]: USER_END pid=5230 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:33.969948 kernel: audit: type=1106 audit(1768372713.951:764): pid=5230 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:33.971003 systemd[1]: sshd@14-10.230.41.14:22-20.161.92.111:52336.service: Deactivated successfully. Jan 14 06:38:33.960000 audit[5230]: CRED_DISP pid=5230 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:33.979535 kernel: audit: type=1104 audit(1768372713.960:765): pid=5230 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:33.979185 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 06:38:33.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.41.14:22-20.161.92.111:52336 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 06:38:33.982365 systemd-logind[1614]: Session 13 logged out. Waiting for processes to exit. Jan 14 06:38:33.987233 systemd-logind[1614]: Removed session 13. Jan 14 06:38:34.017433 containerd[1642]: time="2026-01-14T06:38:34.017221441Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:34.019368 containerd[1642]: time="2026-01-14T06:38:34.019222496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:34.019671 containerd[1642]: time="2026-01-14T06:38:34.019285797Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 06:38:34.021610 kubelet[2966]: E0114 06:38:34.021430 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:38:34.022306 kubelet[2966]: E0114 06:38:34.021728 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:38:34.023454 kubelet[2966]: E0114 06:38:34.022345 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhn5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-8q82z_calico-system(a91d79c6-e300-47ea-a44e-e654a57c8864): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:34.027588 containerd[1642]: time="2026-01-14T06:38:34.027499775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 06:38:34.337467 containerd[1642]: time="2026-01-14T06:38:34.337198602Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:34.338787 containerd[1642]: time="2026-01-14T06:38:34.338720610Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 06:38:34.338865 containerd[1642]: time="2026-01-14T06:38:34.338849462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:34.339241 kubelet[2966]: E0114 06:38:34.339172 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:38:34.339857 kubelet[2966]: E0114 06:38:34.339478 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:38:34.339857 kubelet[2966]: E0114 06:38:34.339767 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhn5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8q82z_calico-system(a91d79c6-e300-47ea-a44e-e654a57c8864): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:34.341531 kubelet[2966]: E0114 06:38:34.341422 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8q82z" podUID="a91d79c6-e300-47ea-a44e-e654a57c8864" Jan 14 06:38:36.672038 kubelet[2966]: E0114 06:38:36.670739 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7959c45994-p8pd7" podUID="c4442e43-b3e6-4c81-9228-c5c0cde9a530" Jan 14 06:38:38.667420 kubelet[2966]: E0114 06:38:38.667166 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-ps4c2" podUID="d6749f8c-3427-433c-a8c4-8f87f70b4d79" Jan 14 06:38:39.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.41.14:22-20.161.92.111:41500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:39.050933 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:38:39.051111 kernel: audit: type=1130 audit(1768372719.040:767): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.41.14:22-20.161.92.111:41500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:39.040343 systemd[1]: Started sshd@15-10.230.41.14:22-20.161.92.111:41500.service - OpenSSH per-connection server daemon (20.161.92.111:41500). Jan 14 06:38:39.618000 audit[5279]: USER_ACCT pid=5279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:39.633667 kernel: audit: type=1101 audit(1768372719.618:768): pid=5279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:39.633950 sshd[5279]: Accepted publickey for core from 20.161.92.111 port 41500 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:38:39.634044 sshd-session[5279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:38:39.631000 audit[5279]: CRED_ACQ pid=5279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:39.640554 kernel: audit: type=1103 audit(1768372719.631:769): pid=5279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:39.640657 kernel: audit: type=1006 audit(1768372719.631:770): pid=5279 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 14 06:38:39.631000 audit[5279]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc319a0870 a2=3 a3=0 items=0 ppid=1 pid=5279 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:39.654316 kernel: audit: type=1300 audit(1768372719.631:770): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc319a0870 a2=3 a3=0 items=0 ppid=1 pid=5279 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:39.654424 kernel: audit: type=1327 audit(1768372719.631:770): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:38:39.631000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:38:39.662526 systemd-logind[1614]: New session 14 of user core. Jan 14 06:38:39.668575 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 14 06:38:39.677000 audit[5279]: USER_START pid=5279 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:39.685375 kernel: audit: type=1105 audit(1768372719.677:771): pid=5279 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:39.688000 audit[5285]: CRED_ACQ pid=5285 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:39.694306 kernel: audit: type=1103 audit(1768372719.688:772): pid=5285 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:40.140339 sshd[5285]: Connection closed by 20.161.92.111 port 41500 Jan 14 06:38:40.141524 sshd-session[5279]: pam_unix(sshd:session): session closed for user core Jan 14 06:38:40.150000 audit[5279]: USER_END pid=5279 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:40.169586 kernel: audit: type=1106 audit(1768372720.150:773): pid=5279 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:40.169683 kernel: audit: type=1104 audit(1768372720.150:774): pid=5279 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:40.150000 
audit[5279]: CRED_DISP pid=5279 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:40.162831 systemd[1]: sshd@15-10.230.41.14:22-20.161.92.111:41500.service: Deactivated successfully. Jan 14 06:38:40.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.41.14:22-20.161.92.111:41500 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:40.168762 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 06:38:40.176333 systemd-logind[1614]: Session 14 logged out. Waiting for processes to exit. Jan 14 06:38:40.179518 systemd-logind[1614]: Removed session 14. Jan 14 06:38:44.669259 kubelet[2966]: E0114 06:38:44.668532 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8q82z" podUID="a91d79c6-e300-47ea-a44e-e654a57c8864" Jan 14 06:38:45.250999 systemd[1]: Started sshd@16-10.230.41.14:22-20.161.92.111:48700.service - OpenSSH per-connection server daemon (20.161.92.111:48700). Jan 14 06:38:45.261755 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:38:45.261991 kernel: audit: type=1130 audit(1768372725.250:776): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.41.14:22-20.161.92.111:48700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:45.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.41.14:22-20.161.92.111:48700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:38:45.670330 kubelet[2966]: E0114 06:38:45.668895 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-745df5bdfc-85fpc" podUID="fcd4f250-b1e6-467c-90cd-24e53dcbe8e8" Jan 14 06:38:45.766000 audit[5299]: USER_ACCT pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:45.769012 sshd[5299]: Accepted publickey for core from 20.161.92.111 port 48700 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:38:45.772993 sshd-session[5299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:38:45.774005 kernel: audit: type=1101 audit(1768372725.766:777): pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:45.774068 kernel: audit: type=1103 audit(1768372725.770:778): pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:45.770000 audit[5299]: CRED_ACQ pid=5299 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:45.785048 kernel: audit: type=1006 audit(1768372725.770:779): pid=5299 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 14 06:38:45.770000 audit[5299]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4034a950 a2=3 a3=0 items=0 ppid=1 pid=5299 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:45.794296 kernel: audit: type=1300 audit(1768372725.770:779): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe4034a950 a2=3 a3=0 items=0 ppid=1 pid=5299 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:45.770000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:38:45.797305 kernel: audit: type=1327 audit(1768372725.770:779): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:38:45.797219 systemd-logind[1614]: New session 15 of user core. Jan 14 06:38:45.803579 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 14 06:38:45.807000 audit[5299]: USER_START pid=5299 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:45.815434 kernel: audit: type=1105 audit(1768372725.807:780): pid=5299 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:45.815586 kernel: audit: type=1103 audit(1768372725.813:781): pid=5303 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:45.813000 audit[5303]: CRED_ACQ pid=5303 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:46.154641 sshd[5303]: Connection closed by 20.161.92.111 port 48700 Jan 14 06:38:46.155534 sshd-session[5299]: pam_unix(sshd:session): session closed for user core Jan 14 06:38:46.157000 audit[5299]: USER_END pid=5299 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:46.163895 systemd[1]: sshd@16-10.230.41.14:22-20.161.92.111:48700.service: Deactivated successfully. Jan 14 06:38:46.165303 kernel: audit: type=1106 audit(1768372726.157:782): pid=5299 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:46.157000 audit[5299]: CRED_DISP pid=5299 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:46.167980 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 06:38:46.170651 systemd-logind[1614]: Session 15 logged out. Waiting for processes to exit. Jan 14 06:38:46.171502 kernel: audit: type=1104 audit(1768372726.157:783): pid=5299 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:46.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.41.14:22-20.161.92.111:48700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:46.173258 systemd-logind[1614]: Removed session 15. 
Jan 14 06:38:46.263052 systemd[1]: Started sshd@17-10.230.41.14:22-20.161.92.111:48712.service - OpenSSH per-connection server daemon (20.161.92.111:48712). Jan 14 06:38:46.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.41.14:22-20.161.92.111:48712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:46.667320 containerd[1642]: time="2026-01-14T06:38:46.666710855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 06:38:46.778000 audit[5316]: USER_ACCT pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:46.781023 sshd[5316]: Accepted publickey for core from 20.161.92.111 port 48712 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:38:46.780000 audit[5316]: CRED_ACQ pid=5316 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:46.780000 audit[5316]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb91aea70 a2=3 a3=0 items=0 ppid=1 pid=5316 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:46.780000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:38:46.783750 sshd-session[5316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:38:46.793057 systemd-logind[1614]: New session 16 of user core. Jan 14 06:38:46.801573 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 14 06:38:46.805000 audit[5316]: USER_START pid=5316 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:46.811000 audit[5320]: CRED_ACQ pid=5320 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:47.013649 containerd[1642]: time="2026-01-14T06:38:47.013565008Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:47.016351 containerd[1642]: time="2026-01-14T06:38:47.014932285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 06:38:47.016588 containerd[1642]: time="2026-01-14T06:38:47.015050602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:47.016773 kubelet[2966]: E0114 06:38:47.016708 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:38:47.017311 kubelet[2966]: E0114 06:38:47.016802 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:38:47.017311 kubelet[2966]: E0114 06:38:47.017140 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27lgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kv7ql_calico-system(217872a4-2508-46c6-a68b-d9c0e654e8b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:47.018562 kubelet[2966]: E0114 06:38:47.018529 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kv7ql" podUID="217872a4-2508-46c6-a68b-d9c0e654e8b7" Jan 14 06:38:47.281234 sshd[5320]: Connection closed by 20.161.92.111 port 48712 Jan 14 06:38:47.282579 sshd-session[5316]: pam_unix(sshd:session): 
session closed for user core Jan 14 06:38:47.285000 audit[5316]: USER_END pid=5316 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:47.285000 audit[5316]: CRED_DISP pid=5316 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:47.290644 systemd-logind[1614]: Session 16 logged out. Waiting for processes to exit. Jan 14 06:38:47.292238 systemd[1]: sshd@17-10.230.41.14:22-20.161.92.111:48712.service: Deactivated successfully. Jan 14 06:38:47.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.41.14:22-20.161.92.111:48712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:47.298665 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 06:38:47.304190 systemd-logind[1614]: Removed session 16. Jan 14 06:38:47.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.41.14:22-20.161.92.111:48722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:47.383640 systemd[1]: Started sshd@18-10.230.41.14:22-20.161.92.111:48722.service - OpenSSH per-connection server daemon (20.161.92.111:48722). Jan 14 06:38:47.667000 kubelet[2966]: E0114 06:38:47.666837 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-rswq7" podUID="a8d8745b-48bb-4a89-9b0b-07086983dbe4" Jan 14 06:38:47.925000 audit[5330]: USER_ACCT pid=5330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:47.927067 sshd[5330]: Accepted publickey for core from 20.161.92.111 port 48722 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:38:47.927000 audit[5330]: CRED_ACQ pid=5330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:47.927000 audit[5330]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff642ca0b0 a2=3 a3=0 items=0 ppid=1 pid=5330 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:47.927000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:38:47.929842 sshd-session[5330]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:38:47.939032 systemd-logind[1614]: New session 17 of user core. Jan 14 06:38:47.946537 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 14 06:38:47.950000 audit[5330]: USER_START pid=5330 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:47.954000 audit[5334]: CRED_ACQ pid=5334 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:48.303469 sshd[5334]: Connection closed by 20.161.92.111 port 48722 Jan 14 06:38:48.304913 sshd-session[5330]: pam_unix(sshd:session): session closed for user core Jan 14 06:38:48.306000 audit[5330]: USER_END pid=5330 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:48.307000 audit[5330]: CRED_DISP pid=5330 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:48.313729 systemd[1]: sshd@18-10.230.41.14:22-20.161.92.111:48722.service: Deactivated successfully. Jan 14 06:38:48.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.41.14:22-20.161.92.111:48722 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:48.317910 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 06:38:48.319930 systemd-logind[1614]: Session 17 logged out. Waiting for processes to exit. Jan 14 06:38:48.322109 systemd-logind[1614]: Removed session 17. 
Jan 14 06:38:49.678907 containerd[1642]: time="2026-01-14T06:38:49.678851053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:38:49.994535 containerd[1642]: time="2026-01-14T06:38:49.994464788Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:49.995866 containerd[1642]: time="2026-01-14T06:38:49.995799390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:38:49.996012 containerd[1642]: time="2026-01-14T06:38:49.995958482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:49.996406 kubelet[2966]: E0114 06:38:49.996329 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:38:49.997217 kubelet[2966]: E0114 06:38:49.996428 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:38:49.997217 kubelet[2966]: E0114 06:38:49.996646 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd75s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bddfbd4b9-ps4c2_calico-apiserver(d6749f8c-3427-433c-a8c4-8f87f70b4d79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:49.998124 kubelet[2966]: E0114 06:38:49.997836 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-ps4c2" podUID="d6749f8c-3427-433c-a8c4-8f87f70b4d79" Jan 14 06:38:51.667829 containerd[1642]: time="2026-01-14T06:38:51.667684219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 06:38:52.018708 containerd[1642]: time="2026-01-14T06:38:52.018632800Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:52.020452 containerd[1642]: time="2026-01-14T06:38:52.020294425Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 06:38:52.020452 containerd[1642]: time="2026-01-14T06:38:52.020336989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:52.020687 kubelet[2966]: E0114 06:38:52.020618 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:38:52.022979 kubelet[2966]: E0114 06:38:52.020709 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:38:52.022979 kubelet[2966]: E0114 06:38:52.020865 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:57f62300c7e6467f8d6b6e8d7514b501,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s8fl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7959c45994-p8pd7_calico-system(c4442e43-b3e6-4c81-9228-c5c0cde9a530): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:52.023548 containerd[1642]: time="2026-01-14T06:38:52.023518280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 06:38:52.345838 containerd[1642]: time="2026-01-14T06:38:52.345022328Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:52.346614 containerd[1642]: time="2026-01-14T06:38:52.346266868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 06:38:52.346614 containerd[1642]: time="2026-01-14T06:38:52.346373161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:52.347121 kubelet[2966]: E0114 06:38:52.347053 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:38:52.347317 kubelet[2966]: E0114 06:38:52.347260 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:38:52.347735 kubelet[2966]: E0114 06:38:52.347634 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s8fl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7959c45994-p8pd7_calico-system(c4442e43-b3e6-4c81-9228-c5c0cde9a530): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:52.349296 kubelet[2966]: E0114 06:38:52.349129 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7959c45994-p8pd7" podUID="c4442e43-b3e6-4c81-9228-c5c0cde9a530" Jan 14 06:38:53.407927 systemd[1]: Started sshd@19-10.230.41.14:22-20.161.92.111:52648.service - OpenSSH per-connection server daemon (20.161.92.111:52648). Jan 14 06:38:53.416337 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 06:38:53.416587 kernel: audit: type=1130 audit(1768372733.407:803): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.41.14:22-20.161.92.111:52648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:38:53.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.41.14:22-20.161.92.111:52648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:53.913000 audit[5352]: USER_ACCT pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:53.918000 sshd-session[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:38:53.920802 kernel: audit: type=1101 audit(1768372733.913:804): pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:53.920877 kernel: audit: type=1103 audit(1768372733.915:805): pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:53.915000 audit[5352]: CRED_ACQ pid=5352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:53.921009 sshd[5352]: Accepted publickey for core from 20.161.92.111 port 52648 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:38:53.926640 kernel: audit: type=1006 audit(1768372733.915:806): pid=5352 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 14 06:38:53.915000 audit[5352]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc132a5b50 a2=3 a3=0 items=0 ppid=1 pid=5352 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:53.932295 kernel: audit: type=1300 audit(1768372733.915:806): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc132a5b50 a2=3 a3=0 items=0 ppid=1 pid=5352 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:53.929979 systemd-logind[1614]: New session 18 of user core. Jan 14 06:38:53.915000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:38:53.935125 kernel: audit: type=1327 audit(1768372733.915:806): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:38:53.938628 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 14 06:38:53.943000 audit[5352]: USER_START pid=5352 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:53.951317 kernel: audit: type=1105 audit(1768372733.943:807): pid=5352 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:53.946000 audit[5356]: CRED_ACQ pid=5356 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:53.956336 kernel: audit: type=1103 audit(1768372733.946:808): pid=5356 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:54.296310 sshd[5356]: Connection closed by 20.161.92.111 port 52648 Jan 14 06:38:54.296588 sshd-session[5352]: pam_unix(sshd:session): session closed for user core Jan 14 06:38:54.297000 audit[5352]: USER_END pid=5352 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:54.297000 audit[5352]: CRED_DISP pid=5352 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:54.307479 kernel: audit: type=1106 audit(1768372734.297:809): pid=5352 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:54.307583 kernel: audit: type=1104 audit(1768372734.297:810): pid=5352 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:54.308470 systemd-logind[1614]: Session 18 logged out. Waiting for processes to exit. Jan 14 06:38:54.309654 systemd[1]: sshd@19-10.230.41.14:22-20.161.92.111:52648.service: Deactivated successfully. Jan 14 06:38:54.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.41.14:22-20.161.92.111:52648 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:54.314942 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 06:38:54.320237 systemd-logind[1614]: Removed session 18. 
Jan 14 06:38:57.670980 containerd[1642]: time="2026-01-14T06:38:57.670544145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 06:38:57.982743 containerd[1642]: time="2026-01-14T06:38:57.982663786Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:57.984377 containerd[1642]: time="2026-01-14T06:38:57.984306970Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 06:38:57.984548 containerd[1642]: time="2026-01-14T06:38:57.984310480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:57.985006 kubelet[2966]: E0114 06:38:57.984798 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:38:57.985006 kubelet[2966]: E0114 06:38:57.984882 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:38:57.985590 kubelet[2966]: E0114 06:38:57.985090 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79tgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-745df5bdfc-85fpc_calico-system(fcd4f250-b1e6-467c-90cd-24e53dcbe8e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 06:38:57.986918 kubelet[2966]: E0114 06:38:57.986864 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-745df5bdfc-85fpc" podUID="fcd4f250-b1e6-467c-90cd-24e53dcbe8e8" Jan 14 06:38:58.667359 kubelet[2966]: E0114 06:38:58.667061 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kv7ql" podUID="217872a4-2508-46c6-a68b-d9c0e654e8b7" Jan 14 06:38:59.092813 systemd[1]: Started sshd@20-10.230.41.14:22-64.225.73.213:47938.service - OpenSSH per-connection server daemon (64.225.73.213:47938). Jan 14 06:38:59.104718 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:38:59.104930 kernel: audit: type=1130 audit(1768372739.092:812): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.41.14:22-64.225.73.213:47938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:59.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.41.14:22-64.225.73.213:47938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:59.399114 systemd[1]: Started sshd@21-10.230.41.14:22-20.161.92.111:52658.service - OpenSSH per-connection server daemon (20.161.92.111:52658). Jan 14 06:38:59.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.41.14:22-20.161.92.111:52658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:38:59.405344 kernel: audit: type=1130 audit(1768372739.398:813): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.41.14:22-20.161.92.111:52658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:59.466746 sshd[5371]: Invalid user postgres from 64.225.73.213 port 47938 Jan 14 06:38:59.523431 sshd[5371]: Connection closed by invalid user postgres 64.225.73.213 port 47938 [preauth] Jan 14 06:38:59.523000 audit[5371]: USER_ERR pid=5371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:38:59.528919 systemd[1]: sshd@20-10.230.41.14:22-64.225.73.213:47938.service: Deactivated successfully. Jan 14 06:38:59.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.41.14:22-64.225.73.213:47938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:59.531786 kernel: audit: type=1109 audit(1768372739.523:814): pid=5371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:38:59.531863 kernel: audit: type=1131 audit(1768372739.528:815): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.41.14:22-64.225.73.213:47938 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:38:59.670650 containerd[1642]: time="2026-01-14T06:38:59.670439478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 06:38:59.916000 audit[5378]: USER_ACCT pid=5378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:59.932881 kernel: audit: type=1101 audit(1768372739.916:816): pid=5378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:59.933061 kernel: audit: type=1103 audit(1768372739.926:817): pid=5378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:59.926000 audit[5378]: CRED_ACQ pid=5378 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:59.934540 sshd[5378]: Accepted publickey for core from 20.161.92.111 port 52658 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:38:59.929586 sshd-session[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:38:59.942122 kernel: audit: type=1006 audit(1768372739.926:818): pid=5378 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 14 06:38:59.942234 kernel: audit: type=1300 audit(1768372739.926:818): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe05ccff50 a2=3 a3=0 items=0 ppid=1 pid=5378 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:59.926000 audit[5378]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe05ccff50 a2=3 a3=0 items=0 ppid=1 pid=5378 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:38:59.926000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:38:59.950392 kernel: audit: type=1327 audit(1768372739.926:818): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:38:59.950838 systemd-logind[1614]: New session 19 of user core. Jan 14 06:38:59.959742 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 14 06:38:59.966000 audit[5378]: USER_START pid=5378 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:59.974346 kernel: audit: type=1105 audit(1768372739.966:819): pid=5378 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:59.970000 audit[5386]: CRED_ACQ pid=5386 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:38:59.993483 containerd[1642]: time="2026-01-14T06:38:59.993386028Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:38:59.996116 containerd[1642]: time="2026-01-14T06:38:59.995973247Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 06:38:59.996664 containerd[1642]: time="2026-01-14T06:38:59.996025493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 06:38:59.998668 kubelet[2966]: E0114 06:38:59.998532 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:39:00.000206 kubelet[2966]: E0114 06:38:59.998805 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:39:00.001940 kubelet[2966]: E0114 06:39:00.001798 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhn5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8q82z_calico-system(a91d79c6-e300-47ea-a44e-e654a57c8864): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 06:39:00.006917 containerd[1642]: time="2026-01-14T06:39:00.006881844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 06:39:00.325002 containerd[1642]: time="2026-01-14T06:39:00.323998216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:39:00.326183 containerd[1642]: time="2026-01-14T06:39:00.326016149Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 06:39:00.326183 containerd[1642]: time="2026-01-14T06:39:00.326129992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 06:39:00.326772 kubelet[2966]: E0114 06:39:00.326652 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:39:00.327240 kubelet[2966]: E0114 06:39:00.326885 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:39:00.327240 kubelet[2966]: E0114 06:39:00.327155 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhn5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8q82z_calico-system(a91d79c6-e300-47ea-a44e-e654a57c8864): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 06:39:00.328730 kubelet[2966]: E0114 06:39:00.328658 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8q82z" podUID="a91d79c6-e300-47ea-a44e-e654a57c8864" Jan 14 
06:39:00.361311 sshd[5386]: Connection closed by 20.161.92.111 port 52658 Jan 14 06:39:00.362388 sshd-session[5378]: pam_unix(sshd:session): session closed for user core Jan 14 06:39:00.365000 audit[5378]: USER_END pid=5378 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:00.366000 audit[5378]: CRED_DISP pid=5378 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:00.371637 systemd[1]: sshd@21-10.230.41.14:22-20.161.92.111:52658.service: Deactivated successfully. Jan 14 06:39:00.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.41.14:22-20.161.92.111:52658 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:00.376145 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 06:39:00.378988 systemd-logind[1614]: Session 19 logged out. Waiting for processes to exit. Jan 14 06:39:00.380623 systemd-logind[1614]: Removed session 19. Jan 14 06:39:00.668534 containerd[1642]: time="2026-01-14T06:39:00.668058707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:39:00.978960 containerd[1642]: time="2026-01-14T06:39:00.978659325Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:39:00.980217 containerd[1642]: time="2026-01-14T06:39:00.980014450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:39:00.980885 containerd[1642]: time="2026-01-14T06:39:00.980387661Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:39:00.980993 kubelet[2966]: E0114 06:39:00.980773 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:39:00.980993 kubelet[2966]: E0114 06:39:00.980841 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:39:00.981838 kubelet[2966]: E0114 06:39:00.981681 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h74t2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bddfbd4b9-rswq7_calico-apiserver(a8d8745b-48bb-4a89-9b0b-07086983dbe4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:39:00.983329 kubelet[2966]: E0114 06:39:00.983290 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-rswq7" podUID="a8d8745b-48bb-4a89-9b0b-07086983dbe4" Jan 14 06:39:05.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.41.14:22-20.161.92.111:57198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:05.482223 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 14 06:39:05.482392 kernel: audit: type=1130 audit(1768372745.470:824): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.41.14:22-20.161.92.111:57198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:05.470415 systemd[1]: Started sshd@22-10.230.41.14:22-20.161.92.111:57198.service - OpenSSH per-connection server daemon (20.161.92.111:57198). 
Jan 14 06:39:05.673834 kubelet[2966]: E0114 06:39:05.673556 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-ps4c2" podUID="d6749f8c-3427-433c-a8c4-8f87f70b4d79" Jan 14 06:39:05.677370 kubelet[2966]: E0114 06:39:05.677323 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7959c45994-p8pd7" podUID="c4442e43-b3e6-4c81-9228-c5c0cde9a530" Jan 14 06:39:06.023000 audit[5399]: USER_ACCT pid=5399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:06.024476 sshd[5399]: Accepted publickey for core from 20.161.92.111 port 57198 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:39:06.029349 kernel: audit: type=1101 audit(1768372746.023:825): pid=5399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:06.029000 audit[5399]: CRED_ACQ pid=5399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:06.033334 sshd-session[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:39:06.035621 kernel: audit: type=1103 audit(1768372746.029:826): pid=5399 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:06.035765 kernel: audit: type=1006 audit(1768372746.031:827): pid=5399 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 14 06:39:06.031000 audit[5399]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc81729190 a2=3 a3=0 items=0 ppid=1 pid=5399 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:06.040944 kernel: audit: type=1300 audit(1768372746.031:827): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc81729190 a2=3 a3=0 items=0 ppid=1 pid=5399 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:06.031000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:39:06.055837 kernel: audit: type=1327 audit(1768372746.031:827): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:39:06.062262 systemd-logind[1614]: New session 20 of user core. Jan 14 06:39:06.074580 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 14 06:39:06.080000 audit[5399]: USER_START pid=5399 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:06.088114 kernel: audit: type=1105 audit(1768372746.080:828): pid=5399 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:06.090000 audit[5403]: CRED_ACQ pid=5403 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:06.096402 kernel: audit: type=1103 audit(1768372746.090:829): pid=5403 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:06.439115 sshd[5403]: Connection closed by 20.161.92.111 port 57198 Jan 14 06:39:06.440649 sshd-session[5399]: pam_unix(sshd:session): session closed for user core Jan 14 06:39:06.443000 audit[5399]: USER_END pid=5399 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:06.451310 kernel: audit: type=1106 audit(1768372746.443:830): pid=5399 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:06.451430 kernel: audit: type=1104 audit(1768372746.443:831): pid=5399 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:06.443000 audit[5399]: CRED_DISP pid=5399 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:06.455429 systemd[1]: sshd@22-10.230.41.14:22-20.161.92.111:57198.service: Deactivated successfully. Jan 14 06:39:06.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.41.14:22-20.161.92.111:57198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:06.462266 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 06:39:06.467465 systemd-logind[1614]: Session 20 logged out. Waiting for processes to exit. Jan 14 06:39:06.470696 systemd-logind[1614]: Removed session 20. Jan 14 06:39:10.668216 kubelet[2966]: E0114 06:39:10.668050 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8q82z" podUID="a91d79c6-e300-47ea-a44e-e654a57c8864" Jan 14 06:39:11.544846 systemd[1]: Started sshd@23-10.230.41.14:22-20.161.92.111:57208.service - OpenSSH per-connection server daemon (20.161.92.111:57208). Jan 14 06:39:11.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.41.14:22-20.161.92.111:57208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:11.553440 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:39:11.553623 kernel: audit: type=1130 audit(1768372751.544:833): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.41.14:22-20.161.92.111:57208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:39:12.066000 audit[5442]: USER_ACCT pid=5442 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:12.072969 sshd[5442]: Accepted publickey for core from 20.161.92.111 port 57208 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:39:12.073435 kernel: audit: type=1101 audit(1768372752.066:834): pid=5442 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:12.072000 audit[5442]: CRED_ACQ pid=5442 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:12.087032 kernel: audit: type=1103 audit(1768372752.072:835): pid=5442 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:12.087110 kernel: audit: type=1006 audit(1768372752.072:836): pid=5442 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 14 06:39:12.074266 sshd-session[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:39:12.072000 audit[5442]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd24b01450 a2=3 a3=0 items=0 ppid=1 pid=5442 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:12.091307 kernel: audit: type=1300 audit(1768372752.072:836): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd24b01450 a2=3 a3=0 items=0 ppid=1 pid=5442 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:12.072000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:39:12.096014 kernel: audit: type=1327 audit(1768372752.072:836): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:39:12.101046 systemd-logind[1614]: New session 21 of user core. Jan 14 06:39:12.109542 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 14 06:39:12.116000 audit[5442]: USER_START pid=5442 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:12.128000 audit[5446]: CRED_ACQ pid=5446 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:12.131626 kernel: audit: type=1105 audit(1768372752.116:837): pid=5442 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:12.131775 kernel: audit: type=1103 audit(1768372752.128:838): pid=5446 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:12.467505 sshd[5446]: Connection closed by 20.161.92.111 port 57208 Jan 14 06:39:12.468234 sshd-session[5442]: pam_unix(sshd:session): session closed for user core Jan 14 06:39:12.483596 kernel: audit: type=1106 audit(1768372752.470:839): pid=5442 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:12.470000 audit[5442]: USER_END pid=5442 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:12.486432 systemd[1]: sshd@23-10.230.41.14:22-20.161.92.111:57208.service: Deactivated successfully. Jan 14 06:39:12.471000 audit[5442]: CRED_DISP pid=5442 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:12.494374 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 06:39:12.497388 kernel: audit: type=1104 audit(1768372752.471:840): pid=5442 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:12.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.41.14:22-20.161.92.111:57208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:12.498481 systemd-logind[1614]: Session 21 logged out. Waiting for processes to exit. Jan 14 06:39:12.506578 systemd-logind[1614]: Removed session 21. 
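The records above are one complete pass through the pattern that repeats for every SSH login in this journal: systemd starts a per-connection sshd@... unit (audit type=1130), sshd-session walks the PAM stack (USER_ACCT, CRED_ACQ, USER_START), systemd-logind registers the session scope, and on disconnect the same steps unwind (USER_END, CRED_DISP, SERVICE_STOP, audit type=1131). The accompanying SYSCALL records carry the process title hex-encoded in a PROCTITLE field; the value 737368642D73657373696F6E3A20636F7265205B707269765D seen above decodes to "sshd-session: core [priv]". A minimal Python sketch for decoding those fields, assuming only the record layout visible in the lines above:

    import re

    def decode_proctitle(hex_value: str) -> str:
        """Audit PROCTITLE records hex-encode the argv vector, NUL-separated."""
        raw = bytes.fromhex(hex_value)
        return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

    # Value copied from the PROCTITLE record above.
    line = "audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D"
    match = re.search(r"proctitle=([0-9A-Fa-f]+)", line)
    if match:
        print(decode_proctitle(match.group(1)))   # -> sshd-session: core [priv]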
Jan 14 06:39:12.576767 systemd[1]: Started sshd@24-10.230.41.14:22-20.161.92.111:58612.service - OpenSSH per-connection server daemon (20.161.92.111:58612). Jan 14 06:39:12.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.41.14:22-20.161.92.111:58612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:13.084000 audit[5457]: USER_ACCT pid=5457 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:13.086415 sshd[5457]: Accepted publickey for core from 20.161.92.111 port 58612 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:39:13.086000 audit[5457]: CRED_ACQ pid=5457 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:13.087000 audit[5457]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc64f9eec0 a2=3 a3=0 items=0 ppid=1 pid=5457 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:13.087000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:39:13.089914 sshd-session[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:39:13.099388 systemd-logind[1614]: New session 22 of user core. Jan 14 06:39:13.107527 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 14 06:39:13.112000 audit[5457]: USER_START pid=5457 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:13.115000 audit[5461]: CRED_ACQ pid=5461 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:13.677465 kubelet[2966]: E0114 06:39:13.674261 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-745df5bdfc-85fpc" podUID="fcd4f250-b1e6-467c-90cd-24e53dcbe8e8" Jan 14 06:39:13.677465 kubelet[2966]: E0114 06:39:13.677292 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kv7ql" podUID="217872a4-2508-46c6-a68b-d9c0e654e8b7" Jan 14 06:39:13.884740 sshd[5461]: Connection closed by 20.161.92.111 port 58612 Jan 14 06:39:13.886681 sshd-session[5457]: pam_unix(sshd:session): session closed for user core Jan 14 06:39:13.888000 audit[5457]: USER_END pid=5457 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:13.888000 audit[5457]: CRED_DISP pid=5457 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:13.896023 systemd[1]: sshd@24-10.230.41.14:22-20.161.92.111:58612.service: Deactivated successfully. Jan 14 06:39:13.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.41.14:22-20.161.92.111:58612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:13.900557 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 06:39:13.905458 systemd-logind[1614]: Session 22 logged out. Waiting for processes to exit. Jan 14 06:39:13.907117 systemd-logind[1614]: Removed session 22. Jan 14 06:39:13.993219 systemd[1]: Started sshd@25-10.230.41.14:22-20.161.92.111:58622.service - OpenSSH per-connection server daemon (20.161.92.111:58622). 
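The kubelet pod_workers.go:1301 entries above show several calico pods stuck in ImagePullBackOff: each sync attempt fails because the ghcr.io/flatcar/calico/*:v3.30.4 references cannot be resolved, and the kubelet retries with an increasing back-off, which is why the same errors recur throughout the rest of the journal. A hedged sketch for tallying which images are affected from journal output on stdin (the escaped quoting and the journalctl invocation in the comments are assumptions based on the lines above; the script name is illustrative):

    import re
    import sys
    from collections import Counter

    # The image name in the kubelet back-off messages is wrapped in escaped
    # quotes, so accept any run of quote/backslash characters in front of it.
    PATTERN = re.compile(r'Back-off pulling image [\\"]+([^"\\]+)')

    def tally(stream) -> Counter:
        counts = Counter()
        for line in stream:
            counts.update(PATTERN.findall(line))
        return counts

    if __name__ == "__main__":
        # e.g.  journalctl -u kubelet.service | python3 tally_backoff.py
        for image, hits in tally(sys.stdin).most_common():
            print(f"{hits:4d}  {image}")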
Jan 14 06:39:13.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.41.14:22-20.161.92.111:58622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:14.566000 audit[5471]: USER_ACCT pid=5471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:14.568230 sshd[5471]: Accepted publickey for core from 20.161.92.111 port 58622 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:39:14.569000 audit[5471]: CRED_ACQ pid=5471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:14.569000 audit[5471]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc04253da0 a2=3 a3=0 items=0 ppid=1 pid=5471 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:14.569000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:39:14.572896 sshd-session[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:39:14.585377 systemd-logind[1614]: New session 23 of user core. Jan 14 06:39:14.592907 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 14 06:39:14.598000 audit[5471]: USER_START pid=5471 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:14.602000 audit[5476]: CRED_ACQ pid=5476 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:14.669354 kubelet[2966]: E0114 06:39:14.668613 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-rswq7" podUID="a8d8745b-48bb-4a89-9b0b-07086983dbe4" Jan 14 06:39:15.701000 audit[5488]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:39:15.701000 audit[5488]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffe9501e3a0 a2=0 a3=7ffe9501e38c items=0 ppid=3073 pid=5488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:15.701000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:39:15.709000 audit[5488]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:39:15.709000 audit[5488]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe9501e3a0 a2=0 a3=0 items=0 ppid=3073 pid=5488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:15.709000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:39:15.714302 sshd[5476]: Connection closed by 20.161.92.111 port 58622 Jan 14 06:39:15.715194 sshd-session[5471]: pam_unix(sshd:session): session closed for user core Jan 14 06:39:15.717000 audit[5471]: USER_END pid=5471 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:15.717000 audit[5471]: CRED_DISP pid=5471 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:15.725007 systemd[1]: sshd@25-10.230.41.14:22-20.161.92.111:58622.service: Deactivated successfully. Jan 14 06:39:15.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.41.14:22-20.161.92.111:58622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:15.731782 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 06:39:15.741116 systemd-logind[1614]: Session 23 logged out. Waiting for processes to exit. Jan 14 06:39:15.744597 systemd-logind[1614]: Removed session 23. 
Jan 14 06:39:15.762000 audit[5493]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:39:15.762000 audit[5493]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc4a81b8a0 a2=0 a3=7ffc4a81b88c items=0 ppid=3073 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:15.762000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:39:15.769000 audit[5493]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:39:15.769000 audit[5493]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc4a81b8a0 a2=0 a3=0 items=0 ppid=3073 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:15.769000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:39:15.820795 systemd[1]: Started sshd@26-10.230.41.14:22-20.161.92.111:58632.service - OpenSSH per-connection server daemon (20.161.92.111:58632). Jan 14 06:39:15.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.230.41.14:22-20.161.92.111:58632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:16.331000 audit[5495]: USER_ACCT pid=5495 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:16.332838 sshd[5495]: Accepted publickey for core from 20.161.92.111 port 58632 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:39:16.333000 audit[5495]: CRED_ACQ pid=5495 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:16.333000 audit[5495]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe164e9780 a2=3 a3=0 items=0 ppid=1 pid=5495 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:16.333000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:39:16.335798 sshd-session[5495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:39:16.344603 systemd-logind[1614]: New session 24 of user core. Jan 14 06:39:16.351562 systemd[1]: Started session-24.scope - Session 24 of User core. 
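The NETFILTER_CFG audit records interleaved with these sessions come from iptables-restore runs under a long-lived parent (ppid 3073 here), which in a cluster like this is most likely kube-proxy resyncing its rules; decoding their PROCTITLE values with the helper shown earlier yields "iptables-restore -w 5 -W 100000 --noflush --counters", i.e. a partial restore that registers rules without flushing existing chains or counters. A small sketch, assuming only the field layout visible above, for summarizing those records:

    import re

    NETFILTER = re.compile(
        r"NETFILTER_CFG table=(?P<table>\S+) family=(?P<family>\d+) "
        r"entries=(?P<entries>\d+) op=(?P<op>\S+)"
    )

    def summarize(lines):
        """Yield (table, entry count, operation) per netfilter-config audit record."""
        for line in lines:
            m = NETFILTER.search(line)
            if m:
                yield m.group("table"), int(m.group("entries")), m.group("op")

    sample = ("audit[5493]: NETFILTER_CFG table=filter:146 family=2 entries=38 "
              "op=nft_register_rule pid=5493")
    print(list(summarize([sample])))   # [('filter:146', 38, 'nft_register_rule')]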
Jan 14 06:39:16.355000 audit[5495]: USER_START pid=5495 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:16.359000 audit[5503]: CRED_ACQ pid=5503 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:16.995646 sshd[5503]: Connection closed by 20.161.92.111 port 58632 Jan 14 06:39:16.996311 sshd-session[5495]: pam_unix(sshd:session): session closed for user core Jan 14 06:39:17.006462 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 14 06:39:17.006772 kernel: audit: type=1106 audit(1768372756.998:870): pid=5495 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:16.998000 audit[5495]: USER_END pid=5495 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:17.008679 systemd[1]: sshd@26-10.230.41.14:22-20.161.92.111:58632.service: Deactivated successfully. Jan 14 06:39:17.000000 audit[5495]: CRED_DISP pid=5495 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:17.014316 kernel: audit: type=1104 audit(1768372757.000:871): pid=5495 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:17.011933 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 06:39:17.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.230.41.14:22-20.161.92.111:58632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:17.018414 systemd-logind[1614]: Session 24 logged out. Waiting for processes to exit. Jan 14 06:39:17.021136 kernel: audit: type=1131 audit(1768372757.007:872): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.230.41.14:22-20.161.92.111:58632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:17.023189 systemd-logind[1614]: Removed session 24. Jan 14 06:39:17.104020 systemd[1]: Started sshd@27-10.230.41.14:22-20.161.92.111:58638.service - OpenSSH per-connection server daemon (20.161.92.111:58638). 
Jan 14 06:39:17.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.230.41.14:22-20.161.92.111:58638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:17.110179 kernel: audit: type=1130 audit(1768372757.103:873): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.230.41.14:22-20.161.92.111:58638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:17.628000 audit[5512]: USER_ACCT pid=5512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:17.631349 sshd[5512]: Accepted publickey for core from 20.161.92.111 port 58638 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:39:17.633992 sshd-session[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:39:17.635327 kernel: audit: type=1101 audit(1768372757.628:874): pid=5512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:17.631000 audit[5512]: CRED_ACQ pid=5512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:17.646916 kernel: audit: type=1103 audit(1768372757.631:875): pid=5512 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:17.647028 kernel: audit: type=1006 audit(1768372757.631:876): pid=5512 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 14 06:39:17.631000 audit[5512]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe02b8f7a0 a2=3 a3=0 items=0 ppid=1 pid=5512 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:17.651094 kernel: audit: type=1300 audit(1768372757.631:876): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe02b8f7a0 a2=3 a3=0 items=0 ppid=1 pid=5512 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:17.631000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:39:17.654422 systemd-logind[1614]: New session 25 of user core. Jan 14 06:39:17.658045 kernel: audit: type=1327 audit(1768372757.631:876): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:39:17.664637 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 14 06:39:17.672000 audit[5512]: USER_START pid=5512 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:17.677334 kubelet[2966]: E0114 06:39:17.675408 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-ps4c2" podUID="d6749f8c-3427-433c-a8c4-8f87f70b4d79" Jan 14 06:39:17.680316 kernel: audit: type=1105 audit(1768372757.672:877): pid=5512 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:17.679000 audit[5516]: CRED_ACQ pid=5516 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:18.000349 sshd[5516]: Connection closed by 20.161.92.111 port 58638 Jan 14 06:39:18.001458 sshd-session[5512]: pam_unix(sshd:session): session closed for user core Jan 14 06:39:18.002000 audit[5512]: USER_END pid=5512 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:18.002000 audit[5512]: CRED_DISP pid=5512 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:18.009862 systemd[1]: sshd@27-10.230.41.14:22-20.161.92.111:58638.service: Deactivated successfully. Jan 14 06:39:18.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.230.41.14:22-20.161.92.111:58638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:18.014191 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 06:39:18.016455 systemd-logind[1614]: Session 25 logged out. Waiting for processes to exit. Jan 14 06:39:18.018876 systemd-logind[1614]: Removed session 25. 
Jan 14 06:39:18.669033 kubelet[2966]: E0114 06:39:18.668949 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7959c45994-p8pd7" podUID="c4442e43-b3e6-4c81-9228-c5c0cde9a530" Jan 14 06:39:23.104059 systemd[1]: Started sshd@28-10.230.41.14:22-20.161.92.111:60900.service - OpenSSH per-connection server daemon (20.161.92.111:60900). Jan 14 06:39:23.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.230.41.14:22-20.161.92.111:60900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:23.113916 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 14 06:39:23.114069 kernel: audit: type=1130 audit(1768372763.103:882): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.230.41.14:22-20.161.92.111:60900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:23.623000 audit[5529]: USER_ACCT pid=5529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:23.630489 kernel: audit: type=1101 audit(1768372763.623:883): pid=5529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:23.628959 sshd-session[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:39:23.631288 sshd[5529]: Accepted publickey for core from 20.161.92.111 port 60900 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:39:23.625000 audit[5529]: CRED_ACQ pid=5529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:23.640056 kernel: audit: type=1103 audit(1768372763.625:884): pid=5529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:23.640158 kernel: audit: type=1006 audit(1768372763.625:885): pid=5529 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 14 06:39:23.639417 systemd-logind[1614]: New session 26 of user core. 
Jan 14 06:39:23.625000 audit[5529]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd58ff58b0 a2=3 a3=0 items=0 ppid=1 pid=5529 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:23.625000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:39:23.648368 kernel: audit: type=1300 audit(1768372763.625:885): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd58ff58b0 a2=3 a3=0 items=0 ppid=1 pid=5529 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:23.648596 kernel: audit: type=1327 audit(1768372763.625:885): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:39:23.649555 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 14 06:39:23.655000 audit[5529]: USER_START pid=5529 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:23.663328 kernel: audit: type=1105 audit(1768372763.655:886): pid=5529 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:23.663482 kernel: audit: type=1103 audit(1768372763.661:887): pid=5533 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:23.661000 audit[5533]: CRED_ACQ pid=5533 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:24.016156 sshd[5533]: Connection closed by 20.161.92.111 port 60900 Jan 14 06:39:24.017126 sshd-session[5529]: pam_unix(sshd:session): session closed for user core Jan 14 06:39:24.018000 audit[5529]: USER_END pid=5529 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:24.025065 systemd[1]: sshd@28-10.230.41.14:22-20.161.92.111:60900.service: Deactivated successfully. 
Jan 14 06:39:24.027311 kernel: audit: type=1106 audit(1768372764.018:888): pid=5529 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:24.019000 audit[5529]: CRED_DISP pid=5529 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:24.028805 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 06:39:24.032780 systemd-logind[1614]: Session 26 logged out. Waiting for processes to exit. Jan 14 06:39:24.033367 kernel: audit: type=1104 audit(1768372764.019:889): pid=5529 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:24.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.230.41.14:22-20.161.92.111:60900 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:24.035475 systemd-logind[1614]: Removed session 26. Jan 14 06:39:25.671793 kubelet[2966]: E0114 06:39:25.671688 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8q82z" podUID="a91d79c6-e300-47ea-a44e-e654a57c8864" Jan 14 06:39:26.021000 audit[5544]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:39:26.021000 audit[5544]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe855115d0 a2=0 a3=7ffe855115bc items=0 ppid=3073 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:26.021000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:39:26.031000 audit[5544]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:39:26.031000 audit[5544]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7ffe855115d0 a2=0 a3=7ffe855115bc items=0 ppid=3073 pid=5544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:26.031000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:39:28.667248 kubelet[2966]: E0114 06:39:28.667172 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-745df5bdfc-85fpc" podUID="fcd4f250-b1e6-467c-90cd-24e53dcbe8e8" Jan 14 06:39:28.667997 kubelet[2966]: E0114 06:39:28.667371 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-rswq7" podUID="a8d8745b-48bb-4a89-9b0b-07086983dbe4" Jan 14 06:39:28.669114 containerd[1642]: time="2026-01-14T06:39:28.668705542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 06:39:28.982317 containerd[1642]: time="2026-01-14T06:39:28.982182910Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:39:28.983984 containerd[1642]: time="2026-01-14T06:39:28.983818135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 06:39:28.983984 containerd[1642]: time="2026-01-14T06:39:28.983946591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 06:39:28.984534 kubelet[2966]: E0114 06:39:28.984448 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:39:28.984910 kubelet[2966]: E0114 06:39:28.984571 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:39:28.985035 kubelet[2966]: E0114 06:39:28.984949 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27lgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kv7ql_calico-system(217872a4-2508-46c6-a68b-d9c0e654e8b7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 06:39:28.987444 kubelet[2966]: E0114 06:39:28.987390 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kv7ql" podUID="217872a4-2508-46c6-a68b-d9c0e654e8b7" Jan 14 06:39:29.120946 systemd[1]: Started sshd@29-10.230.41.14:22-20.161.92.111:60916.service - OpenSSH per-connection server daemon 
(20.161.92.111:60916). Jan 14 06:39:29.134251 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 06:39:29.134489 kernel: audit: type=1130 audit(1768372769.119:893): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.230.41.14:22-20.161.92.111:60916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:29.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.230.41.14:22-20.161.92.111:60916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:29.641000 audit[5546]: USER_ACCT pid=5546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:29.648154 sshd[5546]: Accepted publickey for core from 20.161.92.111 port 60916 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:39:29.650396 kernel: audit: type=1101 audit(1768372769.641:894): pid=5546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:29.650477 kernel: audit: type=1103 audit(1768372769.647:895): pid=5546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:29.647000 audit[5546]: CRED_ACQ pid=5546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:29.651449 sshd-session[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:39:29.669609 kernel: audit: type=1006 audit(1768372769.647:896): pid=5546 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 14 06:39:29.669810 kernel: audit: type=1300 audit(1768372769.647:896): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe22843800 a2=3 a3=0 items=0 ppid=1 pid=5546 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:29.647000 audit[5546]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe22843800 a2=3 a3=0 items=0 ppid=1 pid=5546 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:29.671298 kernel: audit: type=1327 audit(1768372769.647:896): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:39:29.647000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:39:29.684044 systemd-logind[1614]: New session 27 of user core. Jan 14 06:39:29.690598 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 14 06:39:29.695000 audit[5546]: USER_START pid=5546 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:29.704301 kernel: audit: type=1105 audit(1768372769.695:897): pid=5546 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:29.703000 audit[5550]: CRED_ACQ pid=5550 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:29.710356 kernel: audit: type=1103 audit(1768372769.703:898): pid=5550 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:30.060502 sshd[5550]: Connection closed by 20.161.92.111 port 60916 Jan 14 06:39:30.061234 sshd-session[5546]: pam_unix(sshd:session): session closed for user core Jan 14 06:39:30.064000 audit[5546]: USER_END pid=5546 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:30.073299 kernel: audit: type=1106 audit(1768372770.064:899): pid=5546 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:30.065000 audit[5546]: CRED_DISP pid=5546 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:30.074563 systemd[1]: sshd@29-10.230.41.14:22-20.161.92.111:60916.service: Deactivated successfully. Jan 14 06:39:30.078298 kernel: audit: type=1104 audit(1768372770.065:900): pid=5546 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:30.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.230.41.14:22-20.161.92.111:60916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:30.083204 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 06:39:30.085456 systemd-logind[1614]: Session 27 logged out. Waiting for processes to exit. Jan 14 06:39:30.088557 systemd-logind[1614]: Removed session 27. 
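The back-off entries all trace back to the same underlying containerd failure shown for the goldmane image above and repeated for the apiserver image below: ghcr.io answers the manifest fetch with 404 Not Found, so the v3.30.4 tag simply does not resolve (it is not an authentication or network problem). A sketch for checking a tag directly, assuming ghcr.io's anonymous pull-token endpoint and the standard OCI distribution API (neither endpoint appears in this log):

    import json
    import urllib.error
    import urllib.request

    def tag_exists(repository: str, tag: str) -> bool:
        """Ask ghcr.io whether repository:tag can be resolved for anonymous pull."""
        # Anonymous pull token (assumed ghcr.io token endpoint).
        token_url = ("https://ghcr.io/token?service=ghcr.io"
                     f"&scope=repository:{repository}:pull")
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]

        # Standard OCI distribution API manifest request.
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repository}/manifests/{tag}", method="HEAD")
        req.add_header("Authorization", f"Bearer {token}")
        req.add_header("Accept", "application/vnd.oci.image.index.v1+json, "
                                 "application/vnd.docker.distribution.manifest.list.v2+json")
        try:
            with urllib.request.urlopen(req):
                return True
        except urllib.error.HTTPError as err:
            if err.code == 404:   # the status containerd reported above
                return False
            raise

    print(tag_exists("flatcar/calico/goldmane", "v3.30.4"))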
Jan 14 06:39:30.666963 containerd[1642]: time="2026-01-14T06:39:30.666482356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:39:30.979212 containerd[1642]: time="2026-01-14T06:39:30.979116484Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:39:30.980638 containerd[1642]: time="2026-01-14T06:39:30.980549713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:39:30.980784 containerd[1642]: time="2026-01-14T06:39:30.980650909Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:39:30.981103 kubelet[2966]: E0114 06:39:30.980996 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:39:30.981922 kubelet[2966]: E0114 06:39:30.981169 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:39:30.981922 kubelet[2966]: E0114 06:39:30.981507 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jd75s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bddfbd4b9-ps4c2_calico-apiserver(d6749f8c-3427-433c-a8c4-8f87f70b4d79): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:39:30.982897 kubelet[2966]: E0114 06:39:30.982833 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-ps4c2" podUID="d6749f8c-3427-433c-a8c4-8f87f70b4d79" Jan 14 06:39:31.672017 kubelet[2966]: E0114 06:39:31.671846 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7959c45994-p8pd7" podUID="c4442e43-b3e6-4c81-9228-c5c0cde9a530" Jan 14 06:39:35.053697 systemd[1]: Started sshd@30-10.230.41.14:22-64.225.73.213:50730.service - OpenSSH per-connection server daemon (64.225.73.213:50730). Jan 14 06:39:35.061610 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:39:35.061746 kernel: audit: type=1130 audit(1768372775.052:902): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.230.41.14:22-64.225.73.213:50730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:35.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.230.41.14:22-64.225.73.213:50730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:39:35.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.230.41.14:22-20.161.92.111:39106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:35.167604 systemd[1]: Started sshd@31-10.230.41.14:22-20.161.92.111:39106.service - OpenSSH per-connection server daemon (20.161.92.111:39106). Jan 14 06:39:35.177327 kernel: audit: type=1130 audit(1768372775.166:903): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.230.41.14:22-20.161.92.111:39106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:35.291932 sshd[5570]: Invalid user postgres from 64.225.73.213 port 50730 Jan 14 06:39:35.317422 sshd[5570]: Connection closed by invalid user postgres 64.225.73.213 port 50730 [preauth] Jan 14 06:39:35.316000 audit[5570]: USER_ERR pid=5570 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:39:35.322309 kernel: audit: type=1109 audit(1768372775.316:904): pid=5570 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:39:35.322989 systemd[1]: sshd@30-10.230.41.14:22-64.225.73.213:50730.service: Deactivated successfully. Jan 14 06:39:35.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.230.41.14:22-64.225.73.213:50730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:35.331324 kernel: audit: type=1131 audit(1768372775.322:905): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.230.41.14:22-64.225.73.213:50730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:39:35.715843 sshd[5573]: Accepted publickey for core from 20.161.92.111 port 39106 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:39:35.737959 kernel: audit: type=1101 audit(1768372775.714:906): pid=5573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:35.738082 kernel: audit: type=1103 audit(1768372775.730:907): pid=5573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:35.714000 audit[5573]: USER_ACCT pid=5573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:35.730000 audit[5573]: CRED_ACQ pid=5573 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:35.733516 sshd-session[5573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:39:35.750300 kernel: audit: type=1006 audit(1768372775.730:908): pid=5573 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 14 06:39:35.730000 audit[5573]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4ce39290 a2=3 a3=0 items=0 ppid=1 pid=5573 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:35.761372 kernel: audit: type=1300 audit(1768372775.730:908): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4ce39290 a2=3 a3=0 items=0 ppid=1 pid=5573 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:35.767437 systemd-logind[1614]: New session 28 of user core. Jan 14 06:39:35.730000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:39:35.771787 kernel: audit: type=1327 audit(1768372775.730:908): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:39:35.770583 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 14 06:39:35.777000 audit[5573]: USER_START pid=5573 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:35.791593 kernel: audit: type=1105 audit(1768372775.777:909): pid=5573 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:35.791000 audit[5580]: CRED_ACQ pid=5580 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:36.312505 sshd[5580]: Connection closed by 20.161.92.111 port 39106 Jan 14 06:39:36.313646 sshd-session[5573]: pam_unix(sshd:session): session closed for user core Jan 14 06:39:36.317000 audit[5573]: USER_END pid=5573 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:36.317000 audit[5573]: CRED_DISP pid=5573 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:36.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.230.41.14:22-20.161.92.111:39106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:36.323953 systemd-logind[1614]: Session 28 logged out. Waiting for processes to exit. Jan 14 06:39:36.324518 systemd[1]: sshd@31-10.230.41.14:22-20.161.92.111:39106.service: Deactivated successfully. Jan 14 06:39:36.329744 systemd[1]: session-28.scope: Deactivated successfully. Jan 14 06:39:36.333852 systemd-logind[1614]: Removed session 28. 
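[Editor's note, not part of the log] The kubelet entries that follow keep reporting ErrImagePull and ImagePullBackOff for several Calico containers. As a hedged sketch only, the snippet below shows one way such waiting states could be listed out-of-band with the Kubernetes Python client; the namespaces are taken from the pod references in this log, and cluster access via a local kubeconfig is assumed.

```python
# Sketch (not part of the log): list containers stuck waiting on image pulls.
# Assumes the `kubernetes` Python client is installed and a kubeconfig is reachable.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Namespaces referenced by the failing pods in this log.
for namespace in ("calico-system", "calico-apiserver"):
    for pod in v1.list_namespaced_pod(namespace).items:
        for status in pod.status.container_statuses or []:
            waiting = status.state.waiting
            if waiting and waiting.reason in ("ErrImagePull", "ImagePullBackOff"):
                print(namespace + "/" + pod.metadata.name,
                      status.name, waiting.reason, "-", waiting.message)
```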
Jan 14 06:39:37.669401 kubelet[2966]: E0114 06:39:37.669313 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8q82z" podUID="a91d79c6-e300-47ea-a44e-e654a57c8864" Jan 14 06:39:39.669647 kubelet[2966]: E0114 06:39:39.668106 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-rswq7" podUID="a8d8745b-48bb-4a89-9b0b-07086983dbe4" Jan 14 06:39:39.698638 containerd[1642]: time="2026-01-14T06:39:39.698570570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 06:39:40.006113 containerd[1642]: time="2026-01-14T06:39:40.005772079Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:39:40.007657 containerd[1642]: time="2026-01-14T06:39:40.007409994Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 06:39:40.007657 containerd[1642]: time="2026-01-14T06:39:40.007509775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 06:39:40.008272 kubelet[2966]: E0114 06:39:40.008215 2966 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:39:40.008470 kubelet[2966]: E0114 06:39:40.008440 2966 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:39:40.009695 kubelet[2966]: E0114 06:39:40.009612 2966 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79tgx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-745df5bdfc-85fpc_calico-system(fcd4f250-b1e6-467c-90cd-24e53dcbe8e8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 06:39:40.011180 kubelet[2966]: E0114 06:39:40.011120 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-745df5bdfc-85fpc" podUID="fcd4f250-b1e6-467c-90cd-24e53dcbe8e8" Jan 14 06:39:40.666930 kubelet[2966]: E0114 06:39:40.666867 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kv7ql" podUID="217872a4-2508-46c6-a68b-d9c0e654e8b7" Jan 14 06:39:41.434081 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 14 06:39:41.434599 kernel: audit: type=1130 audit(1768372781.422:914): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.230.41.14:22-20.161.92.111:39108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:41.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.230.41.14:22-20.161.92.111:39108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:41.422511 systemd[1]: Started sshd@32-10.230.41.14:22-20.161.92.111:39108.service - OpenSSH per-connection server daemon (20.161.92.111:39108). Jan 14 06:39:42.022842 sshd[5617]: Accepted publickey for core from 20.161.92.111 port 39108 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:39:42.022000 audit[5617]: USER_ACCT pid=5617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:42.030224 sshd-session[5617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:39:42.032375 kernel: audit: type=1101 audit(1768372782.022:915): pid=5617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:42.028000 audit[5617]: CRED_ACQ pid=5617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:42.042663 systemd-logind[1614]: New session 29 of user core. Jan 14 06:39:42.044291 kernel: audit: type=1103 audit(1768372782.028:916): pid=5617 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:42.048361 kernel: audit: type=1006 audit(1768372782.028:917): pid=5617 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 14 06:39:42.028000 audit[5617]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebe580340 a2=3 a3=0 items=0 ppid=1 pid=5617 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:42.051815 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jan 14 06:39:42.055420 kernel: audit: type=1300 audit(1768372782.028:917): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffebe580340 a2=3 a3=0 items=0 ppid=1 pid=5617 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:39:42.028000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:39:42.058337 kernel: audit: type=1327 audit(1768372782.028:917): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:39:42.060000 audit[5617]: USER_START pid=5617 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:42.067349 kernel: audit: type=1105 audit(1768372782.060:918): pid=5617 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:42.069000 audit[5634]: CRED_ACQ pid=5634 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:42.077337 kernel: audit: type=1103 audit(1768372782.069:919): pid=5634 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:42.669930 sshd[5634]: Connection closed by 20.161.92.111 port 39108 Jan 14 06:39:42.670890 sshd-session[5617]: pam_unix(sshd:session): session closed for user core Jan 14 06:39:42.672000 audit[5617]: USER_END pid=5617 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:42.680463 kernel: audit: type=1106 audit(1768372782.672:920): pid=5617 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:42.673000 audit[5617]: CRED_DISP pid=5617 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:42.685110 systemd[1]: sshd@32-10.230.41.14:22-20.161.92.111:39108.service: Deactivated successfully. 
Jan 14 06:39:42.690429 kernel: audit: type=1104 audit(1768372782.673:921): pid=5617 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:39:42.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.230.41.14:22-20.161.92.111:39108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:39:42.691420 systemd[1]: session-29.scope: Deactivated successfully. Jan 14 06:39:42.697103 systemd-logind[1614]: Session 29 logged out. Waiting for processes to exit. Jan 14 06:39:42.702418 systemd-logind[1614]: Removed session 29. Jan 14 06:39:45.667937 kubelet[2966]: E0114 06:39:45.667771 2966 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bddfbd4b9-ps4c2" podUID="d6749f8c-3427-433c-a8c4-8f87f70b4d79"
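[Editor's note, not part of the log] The repeated "failed to resolve image ... not found" errors above line up with the 404 responses containerd received from ghcr.io. As an illustrative sketch under stated assumptions (repository path and tag copied from the log, anonymous-pull token obtained via the standard GHCR/OCI distribution auth flow), the lookup can be reproduced directly against the registry:

```python
# Sketch (not part of the log): check whether a tag resolves on ghcr.io.
import json
import urllib.error
import urllib.request

REPO = "flatcar/calico/apiserver"   # repository from the log
TAG = "v3.30.4"                     # tag containerd failed to resolve

# Anonymous pull token for a public repository (OCI distribution auth flow).
token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{REPO}:pull"
with urllib.request.urlopen(token_url) as resp:
    token = json.load(resp)["token"]

# HEAD the manifest; a 404 here matches the "fetch failed after status:
# 404 Not Found" messages containerd logged for this image reference.
req = urllib.request.Request(
    f"https://ghcr.io/v2/{REPO}/manifests/{TAG}",
    headers={
        "Authorization": "Bearer " + token,
        "Accept": "application/vnd.oci.image.index.v1+json, "
                  "application/vnd.docker.distribution.manifest.list.v2+json, "
                  "application/vnd.docker.distribution.manifest.v2+json",
    },
    method="HEAD",
)
try:
    with urllib.request.urlopen(req) as resp:
        print("tag exists, digest:", resp.headers.get("Docker-Content-Digest"))
except urllib.error.HTTPError as err:
    print("registry returned", err.code)   # 404 for a missing tag
```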