Jan 14 06:25:10.225488 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 14 03:30:44 -00 2026
Jan 14 06:25:10.225526 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=87e02bed36f442f7915376555bbec9abc9601b29a9acaf045382608b676e1943
Jan 14 06:25:10.225540 kernel: BIOS-provided physical RAM map:
Jan 14 06:25:10.225551 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 14 06:25:10.225565 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 14 06:25:10.225575 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 14 06:25:10.225587 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 14 06:25:10.225606 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 14 06:25:10.225617 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 14 06:25:10.225628 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 14 06:25:10.225668 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 14 06:25:10.225679 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 14 06:25:10.225690 kernel: NX (Execute Disable) protection: active
Jan 14 06:25:10.225706 kernel: APIC: Static calls initialized
Jan 14 06:25:10.225719 kernel: SMBIOS 2.8 present.
Jan 14 06:25:10.225731 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 14 06:25:10.225743 kernel: DMI: Memory slots populated: 1/1
Jan 14 06:25:10.225758 kernel: Hypervisor detected: KVM
Jan 14 06:25:10.225770 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 14 06:25:10.225782 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 14 06:25:10.225793 kernel: kvm-clock: using sched offset of 5009299166 cycles
Jan 14 06:25:10.225805 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 14 06:25:10.225818 kernel: tsc: Detected 2799.998 MHz processor
Jan 14 06:25:10.225830 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 06:25:10.225842 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 06:25:10.225857 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 14 06:25:10.225869 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 14 06:25:10.225881 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 06:25:10.225893 kernel: Using GB pages for direct mapping
Jan 14 06:25:10.225905 kernel: ACPI: Early table checksum verification disabled
Jan 14 06:25:10.225917 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 14 06:25:10.225929 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 06:25:10.225941 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 06:25:10.225956 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 06:25:10.225968 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 14 06:25:10.225980 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 06:25:10.225992 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 06:25:10.226004 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 06:25:10.226016 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 06:25:10.226028 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 14 06:25:10.226047 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 14 06:25:10.226060 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 14 06:25:10.226072 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 14 06:25:10.226085 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 14 06:25:10.226101 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 14 06:25:10.226113 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 14 06:25:10.226125 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 14 06:25:10.226138 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 14 06:25:10.226150 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 14 06:25:10.226168 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff]
Jan 14 06:25:10.226181 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff]
Jan 14 06:25:10.226197 kernel: Zone ranges:
Jan 14 06:25:10.226209 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 06:25:10.226221 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 14 06:25:10.226239 kernel: Normal empty
Jan 14 06:25:10.226251 kernel: Device empty
Jan 14 06:25:10.226264 kernel: Movable zone start for each node
Jan 14 06:25:10.226276 kernel: Early memory node ranges
Jan 14 06:25:10.226288 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 14 06:25:10.226304 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 14 06:25:10.226316 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 14 06:25:10.226328 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 06:25:10.226341 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 14 06:25:10.226353 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 14 06:25:10.226365 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 14 06:25:10.226384 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 14 06:25:10.226402 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 14 06:25:10.226415 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 14 06:25:10.226427 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 14 06:25:10.226441 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 06:25:10.226454 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 14 06:25:10.226466 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 14 06:25:10.226478 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 06:25:10.226491 kernel: TSC deadline timer available
Jan 14 06:25:10.226515 kernel: CPU topo: Max. logical packages: 16
Jan 14 06:25:10.226527 kernel: CPU topo: Max. logical dies: 16
Jan 14 06:25:10.226540 kernel: CPU topo: Max. dies per package: 1
Jan 14 06:25:10.226552 kernel: CPU topo: Max. threads per core: 1
Jan 14 06:25:10.226564 kernel: CPU topo: Num. cores per package: 1
Jan 14 06:25:10.226576 kernel: CPU topo: Num. threads per package: 1
Jan 14 06:25:10.226588 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs
Jan 14 06:25:10.226604 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 14 06:25:10.226616 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 14 06:25:10.226630 kernel: Booting paravirtualized kernel on KVM
Jan 14 06:25:10.227306 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 06:25:10.227320 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 14 06:25:10.227333 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144
Jan 14 06:25:10.227345 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152
Jan 14 06:25:10.227358 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 14 06:25:10.227377 kernel: kvm-guest: PV spinlocks enabled
Jan 14 06:25:10.227390 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 06:25:10.227404 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=87e02bed36f442f7915376555bbec9abc9601b29a9acaf045382608b676e1943
Jan 14 06:25:10.227417 kernel: random: crng init done
Jan 14 06:25:10.227429 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 14 06:25:10.227442 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 14 06:25:10.227458 kernel: Fallback order for Node 0: 0
Jan 14 06:25:10.227471 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154
Jan 14 06:25:10.227483 kernel: Policy zone: DMA32
Jan 14 06:25:10.227495 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 06:25:10.227508 kernel: software IO TLB: area num 16.
Jan 14 06:25:10.227520 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 14 06:25:10.227533 kernel: Kernel/User page tables isolation: enabled
Jan 14 06:25:10.227549 kernel: ftrace: allocating 40128 entries in 157 pages
Jan 14 06:25:10.227561 kernel: ftrace: allocated 157 pages with 5 groups
Jan 14 06:25:10.227574 kernel: Dynamic Preempt: voluntary
Jan 14 06:25:10.227586 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 06:25:10.227599 kernel: rcu: RCU event tracing is enabled.
Jan 14 06:25:10.227612 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 14 06:25:10.227624 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 06:25:10.227671 kernel: Rude variant of Tasks RCU enabled.
Jan 14 06:25:10.227692 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 06:25:10.227704 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 06:25:10.227717 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 14 06:25:10.227729 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 14 06:25:10.227742 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 14 06:25:10.227754 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 14 06:25:10.227766 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jan 14 06:25:10.227783 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 14 06:25:10.227805 kernel: Console: colour VGA+ 80x25 Jan 14 06:25:10.227821 kernel: printk: legacy console [tty0] enabled Jan 14 06:25:10.227835 kernel: printk: legacy console [ttyS0] enabled Jan 14 06:25:10.227855 kernel: ACPI: Core revision 20240827 Jan 14 06:25:10.227869 kernel: APIC: Switch to symmetric I/O mode setup Jan 14 06:25:10.227882 kernel: x2apic enabled Jan 14 06:25:10.227894 kernel: APIC: Switched APIC routing to: physical x2apic Jan 14 06:25:10.227908 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 14 06:25:10.227925 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Jan 14 06:25:10.227939 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 14 06:25:10.227952 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 14 06:25:10.227964 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 14 06:25:10.227977 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 14 06:25:10.227993 kernel: Spectre V2 : Mitigation: Retpolines Jan 14 06:25:10.228006 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 14 06:25:10.228019 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 14 06:25:10.228031 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 14 06:25:10.228045 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 14 06:25:10.228057 kernel: MDS: Mitigation: Clear CPU buffers Jan 14 06:25:10.228070 kernel: MMIO Stale Data: Unknown: No mitigations Jan 14 06:25:10.228082 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 14 06:25:10.228095 kernel: active return thunk: its_return_thunk Jan 14 06:25:10.228107 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 14 06:25:10.228124 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 14 06:25:10.228137 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 14 06:25:10.228150 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 14 06:25:10.228162 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 14 06:25:10.228175 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 14 06:25:10.228188 kernel: Freeing SMP alternatives memory: 32K Jan 14 06:25:10.228200 kernel: pid_max: default: 32768 minimum: 301 Jan 14 06:25:10.228213 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 14 06:25:10.228225 kernel: landlock: Up and running. Jan 14 06:25:10.228238 kernel: SELinux: Initializing. Jan 14 06:25:10.228255 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 14 06:25:10.228268 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 14 06:25:10.228280 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jan 14 06:25:10.228293 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. 
Jan 14 06:25:10.228306 kernel: signal: max sigframe size: 1776 Jan 14 06:25:10.228319 kernel: rcu: Hierarchical SRCU implementation. Jan 14 06:25:10.228333 kernel: rcu: Max phase no-delay instances is 400. Jan 14 06:25:10.228346 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Jan 14 06:25:10.228363 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 14 06:25:10.228376 kernel: smp: Bringing up secondary CPUs ... Jan 14 06:25:10.228389 kernel: smpboot: x86: Booting SMP configuration: Jan 14 06:25:10.228402 kernel: .... node #0, CPUs: #1 Jan 14 06:25:10.228415 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 06:25:10.228427 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Jan 14 06:25:10.228441 kernel: Memory: 1912060K/2096616K available (14336K kernel code, 2445K rwdata, 31644K rodata, 15536K init, 2500K bss, 178540K reserved, 0K cma-reserved) Jan 14 06:25:10.228458 kernel: devtmpfs: initialized Jan 14 06:25:10.228471 kernel: x86/mm: Memory block size: 128MB Jan 14 06:25:10.228484 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 06:25:10.228497 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 14 06:25:10.228510 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 06:25:10.228523 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 06:25:10.228536 kernel: audit: initializing netlink subsys (disabled) Jan 14 06:25:10.228553 kernel: audit: type=2000 audit(1768371907.589:1): state=initialized audit_enabled=0 res=1 Jan 14 06:25:10.228566 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 06:25:10.228578 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 14 06:25:10.228591 kernel: cpuidle: using governor menu Jan 14 06:25:10.228604 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 06:25:10.228617 kernel: dca service started, version 1.12.1 Jan 14 06:25:10.228657 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Jan 14 06:25:10.228673 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 14 06:25:10.228692 kernel: PCI: Using configuration type 1 for base access Jan 14 06:25:10.228705 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 14 06:25:10.228718 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 06:25:10.228731 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 06:25:10.228744 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 06:25:10.228758 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 06:25:10.228770 kernel: ACPI: Added _OSI(Module Device) Jan 14 06:25:10.228787 kernel: ACPI: Added _OSI(Processor Device) Jan 14 06:25:10.228800 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 06:25:10.228813 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 06:25:10.228826 kernel: ACPI: Interpreter enabled Jan 14 06:25:10.228839 kernel: ACPI: PM: (supports S0 S5) Jan 14 06:25:10.228852 kernel: ACPI: Using IOAPIC for interrupt routing Jan 14 06:25:10.228865 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 14 06:25:10.228881 kernel: PCI: Using E820 reservations for host bridge windows Jan 14 06:25:10.228894 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 14 06:25:10.228907 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 14 06:25:10.229217 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 14 06:25:10.229437 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 14 06:25:10.229711 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 14 06:25:10.229738 kernel: PCI host bridge to bus 0000:00 Jan 14 06:25:10.229965 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 14 06:25:10.230160 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 14 06:25:10.230352 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 14 06:25:10.230542 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 14 06:25:10.230773 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 14 06:25:10.230965 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jan 14 06:25:10.231160 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 14 06:25:10.231432 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Jan 14 06:25:10.231707 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Jan 14 06:25:10.231920 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Jan 14 06:25:10.232135 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Jan 14 06:25:10.232354 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Jan 14 06:25:10.232560 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 14 06:25:10.232831 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 06:25:10.233050 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Jan 14 06:25:10.233269 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 14 06:25:10.233477 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 14 06:25:10.233730 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 14 06:25:10.233963 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 06:25:10.234316 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Jan 14 06:25:10.236822 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 14 
06:25:10.237054 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 14 06:25:10.237270 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 14 06:25:10.237526 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 06:25:10.237793 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Jan 14 06:25:10.238008 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 14 06:25:10.238224 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 14 06:25:10.238442 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 14 06:25:10.240725 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 06:25:10.240955 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Jan 14 06:25:10.241170 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 14 06:25:10.241381 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 14 06:25:10.241591 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 14 06:25:10.241867 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 06:25:10.242087 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Jan 14 06:25:10.242301 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 14 06:25:10.242508 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 14 06:25:10.242759 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 14 06:25:10.242981 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 06:25:10.243198 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Jan 14 06:25:10.243406 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 14 06:25:10.243613 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 14 06:25:10.243918 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 14 06:25:10.244144 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 06:25:10.244351 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Jan 14 06:25:10.244565 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 14 06:25:10.244801 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 14 06:25:10.245010 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 14 06:25:10.245236 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 06:25:10.245450 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Jan 14 06:25:10.246974 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 14 06:25:10.247201 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 14 06:25:10.247422 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 14 06:25:10.249331 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Jan 14 06:25:10.249698 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Jan 14 06:25:10.250020 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Jan 14 06:25:10.250291 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Jan 14 06:25:10.250521 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Jan 14 06:25:10.250785 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Jan 14 06:25:10.251014 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Jan 14 06:25:10.251224 kernel: pci 0000:00:04.0: BAR 1 
[mem 0xfea5a000-0xfea5afff] Jan 14 06:25:10.251433 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Jan 14 06:25:10.253701 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Jan 14 06:25:10.253933 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 14 06:25:10.254181 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Jan 14 06:25:10.254401 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Jan 14 06:25:10.254612 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Jan 14 06:25:10.254902 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Jan 14 06:25:10.255111 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Jan 14 06:25:10.255333 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Jan 14 06:25:10.255550 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Jan 14 06:25:10.256825 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 14 06:25:10.257050 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 14 06:25:10.257284 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 14 06:25:10.257513 kernel: pci_bus 0000:02: extended config space not accessible Jan 14 06:25:10.257794 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Jan 14 06:25:10.258015 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Jan 14 06:25:10.258230 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 14 06:25:10.258462 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Jan 14 06:25:10.258711 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Jan 14 06:25:10.258923 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 14 06:25:10.259162 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 14 06:25:10.259376 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Jan 14 06:25:10.259583 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 14 06:25:10.259843 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 14 06:25:10.260133 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 14 06:25:10.260343 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 14 06:25:10.260551 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 14 06:25:10.260822 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 14 06:25:10.260851 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 14 06:25:10.260865 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 14 06:25:10.260878 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 14 06:25:10.260891 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 14 06:25:10.260905 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 14 06:25:10.260926 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 14 06:25:10.260941 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 14 06:25:10.260959 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 14 06:25:10.260973 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 14 06:25:10.260986 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 14 06:25:10.260999 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 14 06:25:10.261012 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 14 06:25:10.261025 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 
14 06:25:10.261039 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 14 06:25:10.261056 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 14 06:25:10.261070 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 14 06:25:10.261083 kernel: iommu: Default domain type: Translated Jan 14 06:25:10.261096 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 14 06:25:10.261110 kernel: PCI: Using ACPI for IRQ routing Jan 14 06:25:10.261123 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 14 06:25:10.261136 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 14 06:25:10.261149 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jan 14 06:25:10.261371 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 14 06:25:10.261577 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 14 06:25:10.261833 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 14 06:25:10.261853 kernel: vgaarb: loaded Jan 14 06:25:10.261867 kernel: clocksource: Switched to clocksource kvm-clock Jan 14 06:25:10.261880 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 06:25:10.261901 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 06:25:10.261914 kernel: pnp: PnP ACPI init Jan 14 06:25:10.263059 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 14 06:25:10.263725 kernel: pnp: PnP ACPI: found 5 devices Jan 14 06:25:10.263740 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 14 06:25:10.263754 kernel: NET: Registered PF_INET protocol family Jan 14 06:25:10.263767 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 14 06:25:10.263788 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 14 06:25:10.263801 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 06:25:10.263815 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 06:25:10.263828 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 14 06:25:10.263841 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 14 06:25:10.263855 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 14 06:25:10.263868 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 14 06:25:10.263886 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 06:25:10.263899 kernel: NET: Registered PF_XDP protocol family Jan 14 06:25:10.264115 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jan 14 06:25:10.264325 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 14 06:25:10.264533 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 14 06:25:10.264796 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 14 06:25:10.265012 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 14 06:25:10.265219 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 14 06:25:10.265425 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 14 06:25:10.265651 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 14 06:25:10.265877 kernel: pci 0000:00:02.0: bridge window [io 
0x1000-0x1fff]: assigned Jan 14 06:25:10.266084 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Jan 14 06:25:10.266289 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Jan 14 06:25:10.266502 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Jan 14 06:25:10.266737 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Jan 14 06:25:10.266946 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Jan 14 06:25:10.267153 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Jan 14 06:25:10.267359 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Jan 14 06:25:10.267570 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 14 06:25:10.267846 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 14 06:25:10.268054 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 14 06:25:10.268271 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 14 06:25:10.268477 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 14 06:25:10.268741 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 14 06:25:10.268952 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 14 06:25:10.269159 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 14 06:25:10.269373 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 14 06:25:10.269579 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 14 06:25:10.269818 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 14 06:25:10.270025 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 14 06:25:10.270230 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 14 06:25:10.270444 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 14 06:25:10.270686 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 14 06:25:10.270897 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 14 06:25:10.271103 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 14 06:25:10.271309 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 14 06:25:10.271524 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 14 06:25:10.271759 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 14 06:25:10.271966 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 14 06:25:10.272171 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 14 06:25:10.272376 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 14 06:25:10.272581 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 14 06:25:10.272836 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 14 06:25:10.273044 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 14 06:25:10.273255 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 14 06:25:10.273466 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 14 06:25:10.273703 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 14 06:25:10.273911 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 14 06:25:10.274117 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 14 06:25:10.274323 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 14 06:25:10.274536 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 14 06:25:10.274781 kernel: pci 
0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 14 06:25:10.274978 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 14 06:25:10.275170 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 14 06:25:10.275360 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 14 06:25:10.275550 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 14 06:25:10.275791 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 14 06:25:10.275992 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jan 14 06:25:10.276199 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 14 06:25:10.276395 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jan 14 06:25:10.276598 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 14 06:25:10.276843 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 14 06:25:10.277077 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jan 14 06:25:10.277276 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 14 06:25:10.277470 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 14 06:25:10.277716 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jan 14 06:25:10.277917 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 14 06:25:10.278114 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 14 06:25:10.278348 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 14 06:25:10.278546 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 14 06:25:10.278788 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 14 06:25:10.279015 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jan 14 06:25:10.279212 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 14 06:25:10.279415 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 14 06:25:10.279628 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jan 14 06:25:10.279869 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 14 06:25:10.280072 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 14 06:25:10.280294 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 14 06:25:10.280499 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 14 06:25:10.280749 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 14 06:25:10.280970 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 14 06:25:10.281168 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 14 06:25:10.281362 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 14 06:25:10.281383 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 14 06:25:10.281397 kernel: PCI: CLS 0 bytes, default 64 Jan 14 06:25:10.281418 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 14 06:25:10.281432 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 14 06:25:10.281446 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 14 06:25:10.281460 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Jan 14 06:25:10.281474 kernel: Initialise system trusted keyrings Jan 14 06:25:10.281488 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 14 06:25:10.281501 
kernel: Key type asymmetric registered Jan 14 06:25:10.281519 kernel: Asymmetric key parser 'x509' registered Jan 14 06:25:10.281533 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 14 06:25:10.281547 kernel: io scheduler mq-deadline registered Jan 14 06:25:10.281560 kernel: io scheduler kyber registered Jan 14 06:25:10.281574 kernel: io scheduler bfq registered Jan 14 06:25:10.281809 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 14 06:25:10.282019 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 14 06:25:10.282234 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 06:25:10.282441 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 14 06:25:10.282683 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 14 06:25:10.282895 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 06:25:10.283102 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 14 06:25:10.283322 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 14 06:25:10.283529 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 06:25:10.283820 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 14 06:25:10.284029 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 14 06:25:10.284236 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 06:25:10.284451 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 14 06:25:10.284696 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 14 06:25:10.284906 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 06:25:10.285114 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 14 06:25:10.285321 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 14 06:25:10.285536 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 06:25:10.285772 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 14 06:25:10.285980 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 14 06:25:10.286188 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 06:25:10.286394 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 14 06:25:10.286607 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 14 06:25:10.286852 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 06:25:10.286875 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 14 06:25:10.286889 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 14 06:25:10.286903 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 14 06:25:10.286917 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 06:25:10.286938 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 06:25:10.286953 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 14 
06:25:10.286966 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 14 06:25:10.286980 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 14 06:25:10.286994 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 14 06:25:10.287209 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 14 06:25:10.287420 kernel: rtc_cmos 00:03: registered as rtc0 Jan 14 06:25:10.287627 kernel: rtc_cmos 00:03: setting system clock to 2026-01-14T06:25:08 UTC (1768371908) Jan 14 06:25:10.287854 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 14 06:25:10.287875 kernel: intel_pstate: CPU model not supported Jan 14 06:25:10.287889 kernel: NET: Registered PF_INET6 protocol family Jan 14 06:25:10.287902 kernel: Segment Routing with IPv6 Jan 14 06:25:10.287916 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 06:25:10.287929 kernel: NET: Registered PF_PACKET protocol family Jan 14 06:25:10.287950 kernel: Key type dns_resolver registered Jan 14 06:25:10.287964 kernel: IPI shorthand broadcast: enabled Jan 14 06:25:10.287978 kernel: sched_clock: Marking stable (2053004232, 214188788)->(2381281840, -114088820) Jan 14 06:25:10.287991 kernel: registered taskstats version 1 Jan 14 06:25:10.288009 kernel: Loading compiled-in X.509 certificates Jan 14 06:25:10.288023 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 447f89388dd1db788444733bd6b00fe574646ee9' Jan 14 06:25:10.288037 kernel: Demotion targets for Node 0: null Jan 14 06:25:10.288054 kernel: Key type .fscrypt registered Jan 14 06:25:10.288068 kernel: Key type fscrypt-provisioning registered Jan 14 06:25:10.288081 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 06:25:10.288095 kernel: ima: Allocated hash algorithm: sha1 Jan 14 06:25:10.288108 kernel: ima: No architecture policies found Jan 14 06:25:10.288122 kernel: clk: Disabling unused clocks Jan 14 06:25:10.288135 kernel: Freeing unused kernel image (initmem) memory: 15536K Jan 14 06:25:10.288154 kernel: Write protecting the kernel read-only data: 47104k Jan 14 06:25:10.288168 kernel: Freeing unused kernel image (rodata/data gap) memory: 1124K Jan 14 06:25:10.288181 kernel: Run /init as init process Jan 14 06:25:10.288195 kernel: with arguments: Jan 14 06:25:10.288209 kernel: /init Jan 14 06:25:10.288222 kernel: with environment: Jan 14 06:25:10.288235 kernel: HOME=/ Jan 14 06:25:10.288253 kernel: TERM=linux Jan 14 06:25:10.288266 kernel: ACPI: bus type USB registered Jan 14 06:25:10.288280 kernel: usbcore: registered new interface driver usbfs Jan 14 06:25:10.288293 kernel: usbcore: registered new interface driver hub Jan 14 06:25:10.288307 kernel: usbcore: registered new device driver usb Jan 14 06:25:10.288518 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 14 06:25:10.288771 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 14 06:25:10.288992 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 14 06:25:10.289205 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 14 06:25:10.289418 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 14 06:25:10.289652 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 14 06:25:10.289919 kernel: hub 1-0:1.0: USB hub found Jan 14 06:25:10.290148 kernel: hub 1-0:1.0: 4 ports detected Jan 14 06:25:10.290403 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 14 06:25:10.290687 kernel: hub 2-0:1.0: USB hub found Jan 14 06:25:10.290918 kernel: hub 2-0:1.0: 4 ports detected Jan 14 06:25:10.290939 kernel: SCSI subsystem initialized Jan 14 06:25:10.290953 kernel: libata version 3.00 loaded. Jan 14 06:25:10.291164 kernel: ahci 0000:00:1f.2: version 3.0 Jan 14 06:25:10.291192 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 14 06:25:10.291397 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 14 06:25:10.291605 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 14 06:25:10.291840 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 14 06:25:10.292084 kernel: scsi host0: ahci Jan 14 06:25:10.292324 kernel: scsi host1: ahci Jan 14 06:25:10.292555 kernel: scsi host2: ahci Jan 14 06:25:10.292823 kernel: scsi host3: ahci Jan 14 06:25:10.293045 kernel: scsi host4: ahci Jan 14 06:25:10.293263 kernel: scsi host5: ahci Jan 14 06:25:10.293291 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 35 lpm-pol 1 Jan 14 06:25:10.293312 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 35 lpm-pol 1 Jan 14 06:25:10.293326 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 35 lpm-pol 1 Jan 14 06:25:10.293340 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 35 lpm-pol 1 Jan 14 06:25:10.293353 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 35 lpm-pol 1 Jan 14 06:25:10.293367 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 35 lpm-pol 1 Jan 14 06:25:10.293630 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 14 06:25:10.293712 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 14 06:25:10.293727 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 14 06:25:10.293741 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 06:25:10.293754 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 14 06:25:10.293768 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 14 06:25:10.293782 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 14 06:25:10.293801 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 14 06:25:10.293819 kernel: usbcore: registered new interface driver usbhid Jan 14 06:25:10.294054 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 14 06:25:10.294076 kernel: usbhid: USB HID core driver Jan 14 06:25:10.294276 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 14 06:25:10.294297 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 14 06:25:10.294558 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 14 06:25:10.294587 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 14 06:25:10.294601 kernel: GPT:25804799 != 125829119 Jan 14 06:25:10.294614 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 14 06:25:10.294628 kernel: GPT:25804799 != 125829119 Jan 14 06:25:10.294692 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 14 06:25:10.294707 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 14 06:25:10.294727 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 14 06:25:10.294741 kernel: device-mapper: uevent: version 1.0.3 Jan 14 06:25:10.294755 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 06:25:10.294769 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Jan 14 06:25:10.294787 kernel: raid6: sse2x4 gen() 14416 MB/s Jan 14 06:25:10.294801 kernel: raid6: sse2x2 gen() 9868 MB/s Jan 14 06:25:10.294815 kernel: raid6: sse2x1 gen() 10296 MB/s Jan 14 06:25:10.294833 kernel: raid6: using algorithm sse2x4 gen() 14416 MB/s Jan 14 06:25:10.294847 kernel: raid6: .... xor() 8531 MB/s, rmw enabled Jan 14 06:25:10.294861 kernel: raid6: using ssse3x2 recovery algorithm Jan 14 06:25:10.294875 kernel: xor: automatically using best checksumming function avx Jan 14 06:25:10.294889 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 06:25:10.294903 kernel: BTRFS: device fsid 2c8f2baf-3f08-4641-b860-b6dd41142f72 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (193) Jan 14 06:25:10.294923 kernel: BTRFS info (device dm-0): first mount of filesystem 2c8f2baf-3f08-4641-b860-b6dd41142f72 Jan 14 06:25:10.294942 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 06:25:10.294956 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 06:25:10.294970 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 06:25:10.294993 kernel: loop: module loaded Jan 14 06:25:10.295007 kernel: loop0: detected capacity change from 0 to 100536 Jan 14 06:25:10.295020 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 06:25:10.295036 systemd[1]: Successfully made /usr/ read-only. Jan 14 06:25:10.295067 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 06:25:10.295082 systemd[1]: Detected virtualization kvm. Jan 14 06:25:10.295097 systemd[1]: Detected architecture x86-64. Jan 14 06:25:10.295120 systemd[1]: Running in initrd. Jan 14 06:25:10.295134 systemd[1]: No hostname configured, using default hostname. Jan 14 06:25:10.295149 systemd[1]: Hostname set to . Jan 14 06:25:10.295168 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 06:25:10.295182 systemd[1]: Queued start job for default target initrd.target. Jan 14 06:25:10.295197 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 06:25:10.295212 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 06:25:10.295226 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 06:25:10.295242 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 06:25:10.295260 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 06:25:10.295276 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 06:25:10.295291 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 06:25:10.295306 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 14 06:25:10.295321 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 06:25:10.295335 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 06:25:10.295354 systemd[1]: Reached target paths.target - Path Units. Jan 14 06:25:10.295369 systemd[1]: Reached target slices.target - Slice Units. Jan 14 06:25:10.295383 systemd[1]: Reached target swap.target - Swaps. Jan 14 06:25:10.295397 systemd[1]: Reached target timers.target - Timer Units. Jan 14 06:25:10.295412 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 06:25:10.295426 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 06:25:10.295441 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 06:25:10.295460 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 06:25:10.295474 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 14 06:25:10.295489 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 06:25:10.295504 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 06:25:10.295518 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 06:25:10.295533 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 06:25:10.295547 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 06:25:10.295566 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 06:25:10.295581 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 06:25:10.295595 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 06:25:10.295611 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 06:25:10.295625 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 06:25:10.295665 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 06:25:10.295687 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 06:25:10.295702 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 06:25:10.295717 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 06:25:10.295732 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 06:25:10.295751 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 06:25:10.295767 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 06:25:10.295821 systemd-journald[331]: Collecting audit messages is enabled. Jan 14 06:25:10.295857 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 06:25:10.295873 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 06:25:10.295887 kernel: Bridge firewalling registered Jan 14 06:25:10.295902 systemd-journald[331]: Journal started Jan 14 06:25:10.295926 systemd-journald[331]: Runtime Journal (/run/log/journal/b95ecfa2c9b2404696c1482e429e584f) is 4.7M, max 37.7M, 33M free. 
Jan 14 06:25:10.263537 systemd-modules-load[332]: Inserted module 'br_netfilter' Jan 14 06:25:10.328686 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 06:25:10.328727 kernel: audit: type=1130 audit(1768371910.325:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.335514 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 06:25:10.345615 kernel: audit: type=1130 audit(1768371910.333:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.345668 kernel: audit: type=1130 audit(1768371910.339:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.340497 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 06:25:10.352076 kernel: audit: type=1130 audit(1768371910.345:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.351811 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 06:25:10.355813 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 06:25:10.357360 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 06:25:10.362695 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 06:25:10.381439 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 06:25:10.389661 kernel: audit: type=1130 audit(1768371910.382:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.384813 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 14 06:25:10.396660 kernel: audit: type=1334 audit(1768371910.383:7): prog-id=6 op=LOAD Jan 14 06:25:10.383000 audit: BPF prog-id=6 op=LOAD Jan 14 06:25:10.399181 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 06:25:10.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.399305 systemd-tmpfiles[351]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 06:25:10.409716 kernel: audit: type=1130 audit(1768371910.400:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.408434 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 06:25:10.413232 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 06:25:10.421614 kernel: audit: type=1130 audit(1768371910.414:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.417002 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 06:25:10.428729 kernel: audit: type=1130 audit(1768371910.421:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.442036 dracut-cmdline[368]: dracut-109 Jan 14 06:25:10.447463 dracut-cmdline[368]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=87e02bed36f442f7915376555bbec9abc9601b29a9acaf045382608b676e1943 Jan 14 06:25:10.463576 systemd-resolved[364]: Positive Trust Anchors: Jan 14 06:25:10.463592 systemd-resolved[364]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 06:25:10.463598 systemd-resolved[364]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 06:25:10.463666 systemd-resolved[364]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 06:25:10.501204 systemd-resolved[364]: Defaulting to hostname 'linux'. Jan 14 06:25:10.503607 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 06:25:10.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.504434 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 06:25:10.580698 kernel: Loading iSCSI transport class v2.0-870. Jan 14 06:25:10.597676 kernel: iscsi: registered transport (tcp) Jan 14 06:25:10.625128 kernel: iscsi: registered transport (qla4xxx) Jan 14 06:25:10.625223 kernel: QLogic iSCSI HBA Driver Jan 14 06:25:10.657896 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 06:25:10.681289 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 06:25:10.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.685064 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 06:25:10.743611 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 06:25:10.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.747481 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 06:25:10.749938 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 06:25:10.784165 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 06:25:10.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.786000 audit: BPF prog-id=7 op=LOAD Jan 14 06:25:10.786000 audit: BPF prog-id=8 op=LOAD Jan 14 06:25:10.788850 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 06:25:10.822789 systemd-udevd[604]: Using default interface naming scheme 'v257'. Jan 14 06:25:10.838141 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 06:25:10.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:25:10.843372 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 06:25:10.876784 dracut-pre-trigger[670]: rd.md=0: removing MD RAID activation Jan 14 06:25:10.879690 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 06:25:10.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.881000 audit: BPF prog-id=9 op=LOAD Jan 14 06:25:10.883839 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 06:25:10.914915 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 06:25:10.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.918424 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 06:25:10.942964 systemd-networkd[714]: lo: Link UP Jan 14 06:25:10.942975 systemd-networkd[714]: lo: Gained carrier Jan 14 06:25:10.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:10.946786 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 06:25:10.947551 systemd[1]: Reached target network.target - Network. Jan 14 06:25:11.070884 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 06:25:11.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:11.075755 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 06:25:11.201841 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 14 06:25:11.214215 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 14 06:25:11.228320 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 06:25:11.250962 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 14 06:25:11.254585 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 06:25:11.281975 disk-uuid[769]: Primary Header is updated. Jan 14 06:25:11.281975 disk-uuid[769]: Secondary Entries is updated. Jan 14 06:25:11.281975 disk-uuid[769]: Secondary Header is updated. Jan 14 06:25:11.343665 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 06:25:11.378042 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 14 06:25:11.386419 systemd-networkd[714]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 06:25:11.386433 systemd-networkd[714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 14 06:25:11.388748 systemd-networkd[714]: eth0: Link UP Jan 14 06:25:11.389528 systemd-networkd[714]: eth0: Gained carrier Jan 14 06:25:11.389543 systemd-networkd[714]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 06:25:11.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:11.390959 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 06:25:11.391145 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 06:25:11.393206 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 06:25:11.400565 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 06:25:11.411658 kernel: AES CTR mode by8 optimization enabled Jan 14 06:25:11.412287 systemd-networkd[714]: eth0: DHCPv4 address 10.230.48.98/30, gateway 10.230.48.97 acquired from 10.230.48.97 Jan 14 06:25:11.548390 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 06:25:11.574179 kernel: kauditd_printk_skb: 13 callbacks suppressed Jan 14 06:25:11.574215 kernel: audit: type=1130 audit(1768371911.570:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:11.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:11.572280 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 06:25:11.578960 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 06:25:11.580497 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 06:25:11.583445 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 06:25:11.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:11.585147 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 06:25:11.592899 kernel: audit: type=1130 audit(1768371911.585:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:11.615875 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 06:25:11.621915 kernel: audit: type=1130 audit(1768371911.615:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:11.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:12.334032 disk-uuid[770]: Warning: The kernel is still using the old partition table. 
Jan 14 06:25:12.334032 disk-uuid[770]: The new table will be used at the next reboot or after you Jan 14 06:25:12.334032 disk-uuid[770]: run partprobe(8) or kpartx(8) Jan 14 06:25:12.334032 disk-uuid[770]: The operation has completed successfully. Jan 14 06:25:12.344372 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 06:25:12.344540 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 06:25:12.355751 kernel: audit: type=1130 audit(1768371912.345:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:12.355785 kernel: audit: type=1131 audit(1768371912.345:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:12.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:12.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:12.348812 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 06:25:12.397789 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (854) Jan 14 06:25:12.407190 kernel: BTRFS info (device vda6): first mount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01 Jan 14 06:25:12.407250 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 06:25:12.414374 kernel: BTRFS info (device vda6): turning on async discard Jan 14 06:25:12.414414 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 06:25:12.422730 kernel: BTRFS info (device vda6): last unmount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01 Jan 14 06:25:12.423709 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 06:25:12.430701 kernel: audit: type=1130 audit(1768371912.423:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:12.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:12.425801 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 06:25:12.621042 ignition[873]: Ignition 2.24.0 Jan 14 06:25:12.621066 ignition[873]: Stage: fetch-offline Jan 14 06:25:12.621167 ignition[873]: no configs at "/usr/lib/ignition/base.d" Jan 14 06:25:12.621190 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 06:25:12.624014 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 06:25:12.630652 kernel: audit: type=1130 audit(1768371912.624:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:12.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:25:12.621330 ignition[873]: parsed url from cmdline: "" Jan 14 06:25:12.627813 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 14 06:25:12.621336 ignition[873]: no config URL provided Jan 14 06:25:12.621345 ignition[873]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 06:25:12.621362 ignition[873]: no config at "/usr/lib/ignition/user.ign" Jan 14 06:25:12.621370 ignition[873]: failed to fetch config: resource requires networking Jan 14 06:25:12.621825 ignition[873]: Ignition finished successfully Jan 14 06:25:12.654694 ignition[879]: Ignition 2.24.0 Jan 14 06:25:12.654710 ignition[879]: Stage: fetch Jan 14 06:25:12.654933 ignition[879]: no configs at "/usr/lib/ignition/base.d" Jan 14 06:25:12.654950 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 06:25:12.655090 ignition[879]: parsed url from cmdline: "" Jan 14 06:25:12.655097 ignition[879]: no config URL provided Jan 14 06:25:12.655114 ignition[879]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 06:25:12.655128 ignition[879]: no config at "/usr/lib/ignition/user.ign" Jan 14 06:25:12.655261 ignition[879]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 14 06:25:12.655276 ignition[879]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 14 06:25:12.655293 ignition[879]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 14 06:25:12.673754 ignition[879]: GET result: OK Jan 14 06:25:12.674669 ignition[879]: parsing config with SHA512: 7781649edaf71507072d49ca5ace678252eee86c373a4eb79ab1da8051c7ad8673d44a2cf8abbce0ac1abbdc740566fe034f70d0553aee45c7b62d08151c5467 Jan 14 06:25:12.682214 unknown[879]: fetched base config from "system" Jan 14 06:25:12.683192 unknown[879]: fetched base config from "system" Jan 14 06:25:12.683206 unknown[879]: fetched user config from "openstack" Jan 14 06:25:12.684247 ignition[879]: fetch: fetch complete Jan 14 06:25:12.684255 ignition[879]: fetch: fetch passed Jan 14 06:25:12.684326 ignition[879]: Ignition finished successfully Jan 14 06:25:12.686515 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 06:25:12.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:12.692814 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 06:25:12.694247 kernel: audit: type=1130 audit(1768371912.687:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:12.719987 ignition[885]: Ignition 2.24.0 Jan 14 06:25:12.720010 ignition[885]: Stage: kargs Jan 14 06:25:12.720203 ignition[885]: no configs at "/usr/lib/ignition/base.d" Jan 14 06:25:12.720220 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 06:25:12.721100 ignition[885]: kargs: kargs passed Jan 14 06:25:12.724829 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 06:25:12.721169 ignition[885]: Ignition finished successfully Jan 14 06:25:12.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:12.729893 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
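In the fetch stage above Ignition finds no config drive, retrieves the user data from the OpenStack metadata endpoint, and logs the SHA512 of the config it parsed. A minimal sketch of that fetch-and-hash step using only the Python standard library; the URL is the one in the log, while the retry count and timeout are assumptions, and this is not Ignition's actual implementation:

    import hashlib
    import time
    import urllib.error
    import urllib.request

    # Endpoint exactly as logged by ignition[879]; retry policy here is assumed.
    USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"

    def fetch_user_data(url, attempts=5, timeout=10):
        """GET the metadata user_data, retrying a few times, and return the raw bytes."""
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as err:
                print(f"GET {url}: attempt #{attempt} failed: {err}")
                time.sleep(2)
        raise RuntimeError("metadata service unreachable")

    if __name__ == "__main__":
        data = fetch_user_data(USER_DATA_URL)
        # Ignition logs a SHA512 of the parsed config; hashing the raw payload here
        # just shows where a digest like 7781649e... comes from.
        print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())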
Jan 14 06:25:12.734677 kernel: audit: type=1130 audit(1768371912.727:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:12.761893 ignition[891]: Ignition 2.24.0 Jan 14 06:25:12.761915 ignition[891]: Stage: disks Jan 14 06:25:12.762134 ignition[891]: no configs at "/usr/lib/ignition/base.d" Jan 14 06:25:12.762151 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 06:25:12.763377 ignition[891]: disks: disks passed Jan 14 06:25:12.763442 ignition[891]: Ignition finished successfully Jan 14 06:25:12.772933 kernel: audit: type=1130 audit(1768371912.766:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:12.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:12.766022 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 06:25:12.767982 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 06:25:12.773624 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 06:25:12.775010 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 06:25:12.776464 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 06:25:12.777979 systemd[1]: Reached target basic.target - Basic System. Jan 14 06:25:12.780532 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 06:25:12.833723 systemd-fsck[899]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 14 06:25:12.837055 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 06:25:12.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:12.839752 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 06:25:12.975691 kernel: EXT4-fs (vda9): mounted filesystem 06cc0495-6f26-4e6e-84ba-33c1e3a1737c r/w with ordered data mode. Quota mode: none. Jan 14 06:25:12.976837 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 06:25:12.978779 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 06:25:12.982019 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 06:25:12.984614 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 06:25:12.987782 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 14 06:25:12.992497 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 14 06:25:12.994702 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 06:25:12.996017 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 06:25:13.000360 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
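systemd-fsck reports the root filesystem above as clean with 15/1631200 files and 112378/1617920 blocks in use. A small worked example in Python, with the figures copied from the log, turning that summary into usage percentages:

    # Figures exactly as printed by systemd-fsck[899] for ROOT above.
    files_used, files_total = 15, 1_631_200
    blocks_used, blocks_total = 112_378, 1_617_920

    inode_pct = 100 * files_used / files_total
    block_pct = 100 * blocks_used / blocks_total

    # About 0.001% of inodes and 6.9% of blocks are in use on the freshly
    # checked root filesystem at this point in the boot.
    print(f"inodes: {inode_pct:.3f}% used, blocks: {block_pct:.1f}% used")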
Jan 14 06:25:13.005661 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (907) Jan 14 06:25:13.009107 kernel: BTRFS info (device vda6): first mount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01 Jan 14 06:25:13.009163 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 06:25:13.010897 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 06:25:13.016794 kernel: BTRFS info (device vda6): turning on async discard Jan 14 06:25:13.016833 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 06:25:13.020098 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 06:25:13.097661 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:25:13.263738 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 06:25:13.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:13.266445 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 06:25:13.267996 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 06:25:13.292661 kernel: BTRFS info (device vda6): last unmount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01 Jan 14 06:25:13.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:13.308988 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 06:25:13.329688 ignition[1011]: INFO : Ignition 2.24.0 Jan 14 06:25:13.329688 ignition[1011]: INFO : Stage: mount Jan 14 06:25:13.331788 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 06:25:13.331788 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 06:25:13.331788 ignition[1011]: INFO : mount: mount passed Jan 14 06:25:13.331788 ignition[1011]: INFO : Ignition finished successfully Jan 14 06:25:13.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:13.333471 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 06:25:13.385260 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 06:25:13.396802 systemd-networkd[714]: eth0: Gained IPv6LL Jan 14 06:25:14.129680 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:25:14.903098 systemd-networkd[714]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8c18:24:19ff:fee6:3062/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8c18:24:19ff:fee6:3062/64 assigned by NDisc. Jan 14 06:25:14.903110 systemd-networkd[714]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
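Earlier eth0 obtained 10.230.48.98/30 with gateway 10.230.48.97 from DHCPv4, and here networkd ignores a DHCPv6 /128 address because NDisc already assigned the same address with a /64 prefix. A short sketch with Python's ipaddress module that reproduces both checks from the values in the log; it is illustrative only and not networkd's logic:

    import ipaddress

    # DHCPv4 lease from the log: address 10.230.48.98/30, gateway 10.230.48.97.
    lease = ipaddress.ip_interface("10.230.48.98/30")
    gateway = ipaddress.ip_address("10.230.48.97")
    print(lease.network, "contains gateway:", gateway in lease.network)  # True

    # DHCPv6 address ignored because NDisc assigned the same address as a /64.
    dhcp6 = ipaddress.ip_interface("2a02:1348:179:8c18:24:19ff:fee6:3062/128")
    ndisc = ipaddress.ip_interface("2a02:1348:179:8c18:24:19ff:fee6:3062/64")
    conflict = dhcp6.ip == ndisc.ip and dhcp6.network.prefixlen != ndisc.network.prefixlen
    print("conflicting assignments for", dhcp6.ip, ":", conflict)  # True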
Jan 14 06:25:16.137695 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:25:20.146671 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:25:20.157227 coreos-metadata[909]: Jan 14 06:25:20.157 WARN failed to locate config-drive, using the metadata service API instead Jan 14 06:25:20.183207 coreos-metadata[909]: Jan 14 06:25:20.183 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 14 06:25:20.202594 coreos-metadata[909]: Jan 14 06:25:20.202 INFO Fetch successful Jan 14 06:25:20.204379 coreos-metadata[909]: Jan 14 06:25:20.204 INFO wrote hostname srv-i1yja.gb1.brightbox.com to /sysroot/etc/hostname Jan 14 06:25:20.206740 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 14 06:25:20.221583 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 14 06:25:20.221656 kernel: audit: type=1130 audit(1768371920.208:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:20.221682 kernel: audit: type=1131 audit(1768371920.208:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:20.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:20.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:20.206945 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 14 06:25:20.212523 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 06:25:20.239030 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 06:25:20.263674 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1026) Jan 14 06:25:20.267205 kernel: BTRFS info (device vda6): first mount of filesystem 95daf8b3-0a1b-42db-86ec-02d0f02f4a01 Jan 14 06:25:20.267262 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 06:25:20.275474 kernel: BTRFS info (device vda6): turning on async discard Jan 14 06:25:20.275559 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 06:25:20.278348 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
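coreos-metadata above first looks for a config drive (hence the repeated "/dev/disk/by-label/config-2: Can't lookup blockdev" lines), then warns that it is falling back to the metadata service, fetches the hostname, and writes it to /sysroot/etc/hostname. A hedged sketch of that fallback order in Python; the device labels, URL, and target path come from the log, everything else (timeout, function name, error handling) is an assumption, and this is not the coreos-metadata implementation:

    import pathlib
    import urllib.request

    CONFIG_DRIVE_LABELS = ("/dev/disk/by-label/config-2", "/dev/disk/by-label/CONFIG-2")
    HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"

    def write_hostname(sysroot="/sysroot"):
        """Prefer a config drive if one exists; otherwise ask the metadata service."""
        if any(pathlib.Path(label).exists() for label in CONFIG_DRIVE_LABELS):
            # Reading metadata off the config drive is out of scope for this sketch.
            raise NotImplementedError("config-drive parsing not shown here")
        print("failed to locate config-drive, using the metadata service API instead")
        with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
            hostname = resp.read().decode().strip()
        target = pathlib.Path(sysroot, "etc/hostname")
        target.write_text(hostname + "\n")
        print(f"wrote hostname {hostname} to {target}")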
Jan 14 06:25:20.312406 ignition[1044]: INFO : Ignition 2.24.0 Jan 14 06:25:20.312406 ignition[1044]: INFO : Stage: files Jan 14 06:25:20.314128 ignition[1044]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 06:25:20.314128 ignition[1044]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 06:25:20.314128 ignition[1044]: DEBUG : files: compiled without relabeling support, skipping Jan 14 06:25:20.317168 ignition[1044]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 06:25:20.317168 ignition[1044]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 06:25:20.319606 ignition[1044]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 06:25:20.320601 ignition[1044]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 06:25:20.320601 ignition[1044]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 06:25:20.320396 unknown[1044]: wrote ssh authorized keys file for user: core Jan 14 06:25:20.325935 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 14 06:25:20.325935 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 14 06:25:20.542175 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 06:25:20.929203 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 14 06:25:20.930834 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 06:25:20.930834 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 06:25:20.930834 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 06:25:20.930834 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 06:25:20.930834 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 06:25:20.930834 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 06:25:20.930834 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 06:25:20.930834 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 06:25:20.939477 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 06:25:20.939477 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 06:25:20.939477 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 06:25:20.939477 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 06:25:20.939477 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 06:25:20.939477 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 14 06:25:21.257922 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 06:25:22.810272 ignition[1044]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 14 06:25:22.810272 ignition[1044]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 06:25:22.813401 ignition[1044]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 06:25:22.814570 ignition[1044]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 06:25:22.814570 ignition[1044]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 06:25:22.814570 ignition[1044]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 14 06:25:22.814570 ignition[1044]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 06:25:22.820012 ignition[1044]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 06:25:22.820012 ignition[1044]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 06:25:22.820012 ignition[1044]: INFO : files: files passed Jan 14 06:25:22.820012 ignition[1044]: INFO : Ignition finished successfully Jan 14 06:25:22.831554 kernel: audit: type=1130 audit(1768371922.822:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:22.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:22.820239 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 06:25:22.826876 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 06:25:22.832839 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 06:25:22.850256 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 06:25:22.850450 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 06:25:22.865890 kernel: audit: type=1130 audit(1768371922.851:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:22.865949 kernel: audit: type=1131 audit(1768371922.851:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:25:22.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:22.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:22.874211 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 06:25:22.874211 initrd-setup-root-after-ignition[1075]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 06:25:22.877074 initrd-setup-root-after-ignition[1079]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 06:25:22.878440 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 06:25:22.885535 kernel: audit: type=1130 audit(1768371922.878:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:22.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:22.879752 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 06:25:22.887840 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 06:25:22.945422 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 06:25:22.957026 kernel: audit: type=1130 audit(1768371922.945:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:22.957071 kernel: audit: type=1131 audit(1768371922.946:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:22.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:22.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:22.945599 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 06:25:22.947223 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 06:25:22.957739 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 06:25:22.959595 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 06:25:22.961058 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 06:25:22.996049 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 06:25:23.002584 kernel: audit: type=1130 audit(1768371922.996:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 06:25:22.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:22.999808 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 06:25:23.030746 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 06:25:23.030973 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 06:25:23.032907 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 06:25:23.034702 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 06:25:23.036062 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 06:25:23.042704 kernel: audit: type=1131 audit(1768371923.036:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.036328 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 06:25:23.042656 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 06:25:23.043507 systemd[1]: Stopped target basic.target - Basic System. Jan 14 06:25:23.044951 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 06:25:23.046197 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 06:25:23.047680 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 06:25:23.049088 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 06:25:23.050789 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 06:25:23.052239 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 06:25:23.053854 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 06:25:23.055215 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 06:25:23.056900 systemd[1]: Stopped target swap.target - Swaps. Jan 14 06:25:23.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.058120 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 06:25:23.058299 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 06:25:23.059973 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 06:25:23.060961 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 06:25:23.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.062281 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 06:25:23.062481 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 14 06:25:23.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.063984 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 06:25:23.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.064231 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 06:25:23.066033 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 06:25:23.066274 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 06:25:23.068008 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 06:25:23.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.068249 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 06:25:23.071910 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 06:25:23.072704 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 06:25:23.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.072953 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 06:25:23.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.077183 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 06:25:23.079795 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 06:25:23.080060 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 06:25:23.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.082349 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 06:25:23.082594 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 06:25:23.085698 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 06:25:23.087764 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 06:25:23.104065 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 06:25:23.105691 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 06:25:23.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:25:23.119414 ignition[1099]: INFO : Ignition 2.24.0 Jan 14 06:25:23.120451 ignition[1099]: INFO : Stage: umount Jan 14 06:25:23.121693 ignition[1099]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 06:25:23.121693 ignition[1099]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 06:25:23.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.126525 ignition[1099]: INFO : umount: umount passed Jan 14 06:25:23.126525 ignition[1099]: INFO : Ignition finished successfully Jan 14 06:25:23.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.122562 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 06:25:23.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.125267 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 06:25:23.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.125440 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 06:25:23.126876 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 06:25:23.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.126967 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 06:25:23.128158 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 06:25:23.128220 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 06:25:23.129490 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 06:25:23.129561 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 06:25:23.130791 systemd[1]: Stopped target network.target - Network. Jan 14 06:25:23.132023 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 06:25:23.132088 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 06:25:23.133502 systemd[1]: Stopped target paths.target - Path Units. Jan 14 06:25:23.134753 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 06:25:23.138721 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 06:25:23.139999 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 06:25:23.141522 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 06:25:23.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.143122 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 06:25:23.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:25:23.143205 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 06:25:23.144453 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 06:25:23.144510 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 06:25:23.146012 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 06:25:23.146063 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 14 06:25:23.147707 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 06:25:23.147787 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 06:25:23.149052 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 06:25:23.149119 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 06:25:23.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.151224 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 06:25:23.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.152264 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 06:25:23.156496 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 06:25:23.156683 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 06:25:23.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.160022 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 06:25:23.160243 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 06:25:23.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.163562 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 06:25:23.163765 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 06:25:23.167416 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 06:25:23.167594 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 06:25:23.177377 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 06:25:23.176000 audit: BPF prog-id=9 op=UNLOAD Jan 14 06:25:23.177000 audit: BPF prog-id=6 op=UNLOAD Jan 14 06:25:23.178478 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 06:25:23.178568 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 06:25:23.181225 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 06:25:23.182664 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 06:25:23.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.182744 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jan 14 06:25:23.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.185849 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 06:25:23.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.185917 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 06:25:23.187719 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 06:25:23.187793 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 06:25:23.190291 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 06:25:23.202924 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 06:25:23.203172 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 06:25:23.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.209442 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 06:25:23.209545 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 06:25:23.212121 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 06:25:23.212190 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 06:25:23.213628 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 06:25:23.215000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.214736 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 06:25:23.216844 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 06:25:23.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.216950 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 06:25:23.219180 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 06:25:23.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.219278 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 06:25:23.225840 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 06:25:23.226597 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 06:25:23.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.226689 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 14 06:25:23.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.229095 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 06:25:23.229164 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 06:25:23.232079 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 06:25:23.232168 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 06:25:23.234792 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 06:25:23.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.241187 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 06:25:23.251445 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 06:25:23.251623 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 06:25:23.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:23.253343 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 06:25:23.255830 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 06:25:23.280458 systemd[1]: Switching root. Jan 14 06:25:23.322094 systemd-journald[331]: Journal stopped Jan 14 06:25:25.032355 systemd-journald[331]: Received SIGTERM from PID 1 (systemd). Jan 14 06:25:25.032533 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 06:25:25.032585 kernel: SELinux: policy capability open_perms=1 Jan 14 06:25:25.032619 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 06:25:25.032677 kernel: SELinux: policy capability always_check_network=0 Jan 14 06:25:25.032704 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 06:25:25.032740 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 06:25:25.032765 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 06:25:25.032795 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 06:25:25.032830 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 06:25:25.032856 systemd[1]: Successfully loaded SELinux policy in 79.407ms. Jan 14 06:25:25.032906 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.072ms. 
Jan 14 06:25:25.032936 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 06:25:25.032964 systemd[1]: Detected virtualization kvm. Jan 14 06:25:25.033004 systemd[1]: Detected architecture x86-64. Jan 14 06:25:25.033026 systemd[1]: Detected first boot. Jan 14 06:25:25.033149 systemd[1]: Hostname set to <srv-i1yja.gb1.brightbox.com>. Jan 14 06:25:25.033199 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 06:25:25.033227 zram_generator::config[1142]: No configuration found. Jan 14 06:25:25.033282 kernel: Guest personality initialized and is inactive Jan 14 06:25:25.033304 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 14 06:25:25.033331 kernel: Initialized host personality Jan 14 06:25:25.033356 kernel: NET: Registered PF_VSOCK protocol family Jan 14 06:25:25.034732 systemd[1]: Populated /etc with preset unit settings. Jan 14 06:25:25.034783 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 06:25:25.034812 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 06:25:25.034840 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 06:25:25.034880 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 06:25:25.034904 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 06:25:25.035450 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 06:25:25.035478 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 06:25:25.035514 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 06:25:25.035536 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 06:25:25.035563 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 06:25:25.035590 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 06:25:25.035651 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 06:25:25.035680 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 06:25:25.035714 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 06:25:25.035752 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 06:25:25.035828 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 06:25:25.035863 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 06:25:25.035885 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 14 06:25:25.035911 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 06:25:25.035955 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 06:25:25.035980 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 06:25:25.036014 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Jan 14 06:25:25.036206 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 06:25:25.036233 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 06:25:25.036273 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 06:25:25.036308 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 06:25:25.036342 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 06:25:25.036995 systemd[1]: Reached target slices.target - Slice Units. Jan 14 06:25:25.037039 systemd[1]: Reached target swap.target - Swaps. Jan 14 06:25:25.037059 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 06:25:25.037098 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 06:25:25.037125 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 06:25:25.037146 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 06:25:25.037182 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 06:25:25.037204 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 06:25:25.037230 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 06:25:25.037256 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 06:25:25.037291 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 06:25:25.037319 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 06:25:25.037340 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 06:25:25.037372 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 06:25:25.037404 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 06:25:25.037426 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 06:25:25.037452 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 06:25:25.037474 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 06:25:25.037494 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 06:25:25.037514 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 06:25:25.037546 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 06:25:25.037576 systemd[1]: Reached target machines.target - Containers. Jan 14 06:25:25.037604 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 06:25:25.037626 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 06:25:25.037701 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 06:25:25.037735 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 06:25:25.037757 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 06:25:25.037790 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 14 06:25:25.037813 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 06:25:25.037833 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 06:25:25.037859 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 06:25:25.037889 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 06:25:25.037910 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 06:25:25.037944 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 06:25:25.037966 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 06:25:25.038845 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 06:25:25.038882 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 06:25:25.038929 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 06:25:25.038957 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 06:25:25.038980 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 06:25:25.039006 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 06:25:25.039033 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 06:25:25.039055 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 06:25:25.039081 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 06:25:25.039116 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 06:25:25.039138 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 06:25:25.039159 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 06:25:25.039185 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 06:25:25.039207 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 06:25:25.039250 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 06:25:25.039283 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 06:25:25.039324 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 06:25:25.039346 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 06:25:25.040578 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 06:25:25.040602 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 06:25:25.041783 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 06:25:25.041834 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 06:25:25.041865 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 06:25:25.041893 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 06:25:25.041915 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 14 06:25:25.041944 kernel: fuse: init (API version 7.41) Jan 14 06:25:25.041971 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 06:25:25.042006 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 06:25:25.042028 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 06:25:25.042049 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 06:25:25.042075 kernel: ACPI: bus type drm_connector registered Jan 14 06:25:25.042096 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 06:25:25.042131 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 06:25:25.042159 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 06:25:25.042195 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 06:25:25.042253 systemd-journald[1230]: Collecting audit messages is enabled. Jan 14 06:25:25.042323 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 06:25:25.042348 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 06:25:25.042375 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 06:25:25.042404 systemd-journald[1230]: Journal started Jan 14 06:25:25.042448 systemd-journald[1230]: Runtime Journal (/run/log/journal/b95ecfa2c9b2404696c1482e429e584f) is 4.7M, max 37.7M, 33M free. Jan 14 06:25:24.670000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 14 06:25:24.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:24.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:24.846000 audit: BPF prog-id=14 op=UNLOAD Jan 14 06:25:24.846000 audit: BPF prog-id=13 op=UNLOAD Jan 14 06:25:24.850000 audit: BPF prog-id=15 op=LOAD Jan 14 06:25:24.850000 audit: BPF prog-id=16 op=LOAD Jan 14 06:25:24.850000 audit: BPF prog-id=17 op=LOAD Jan 14 06:25:24.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:24.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:24.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:25:24.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:24.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:24.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:24.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:24.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:24.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:24.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:24.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.029000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 06:25:25.029000 audit[1230]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffca3369fa0 a2=4000 a3=0 items=0 ppid=1 pid=1230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:25.029000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 06:25:24.562188 systemd[1]: Queued start job for default target multi-user.target. Jan 14 06:25:24.573138 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 14 06:25:24.574033 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 06:25:25.048683 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 14 06:25:25.054658 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 06:25:25.059667 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 06:25:25.062668 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 06:25:25.068666 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 06:25:25.071659 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 06:25:25.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.078063 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 06:25:25.080512 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 06:25:25.080904 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 06:25:25.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.083046 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 06:25:25.085837 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 06:25:25.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.108079 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 06:25:25.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.110455 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 06:25:25.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.119869 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 06:25:25.126661 kernel: loop1: detected capacity change from 0 to 111560 Jan 14 06:25:25.127027 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 06:25:25.132849 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
Jan 14 06:25:25.137736 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 06:25:25.157680 systemd-journald[1230]: Time spent on flushing to /var/log/journal/b95ecfa2c9b2404696c1482e429e584f is 98.273ms for 1295 entries. Jan 14 06:25:25.157680 systemd-journald[1230]: System Journal (/var/log/journal/b95ecfa2c9b2404696c1482e429e584f) is 8M, max 588.1M, 580.1M free. Jan 14 06:25:25.300009 systemd-journald[1230]: Received client request to flush runtime journal. Jan 14 06:25:25.300087 kernel: loop2: detected capacity change from 0 to 8 Jan 14 06:25:25.300139 kernel: loop3: detected capacity change from 0 to 229808 Jan 14 06:25:25.300176 kernel: kauditd_printk_skb: 85 callbacks suppressed Jan 14 06:25:25.300216 kernel: audit: type=1130 audit(1768371925.249:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.300267 kernel: audit: type=1334 audit(1768371925.250:132): prog-id=18 op=LOAD Jan 14 06:25:25.300308 kernel: audit: type=1334 audit(1768371925.250:133): prog-id=19 op=LOAD Jan 14 06:25:25.300343 kernel: audit: type=1334 audit(1768371925.250:134): prog-id=20 op=LOAD Jan 14 06:25:25.300377 kernel: loop4: detected capacity change from 0 to 50784 Jan 14 06:25:25.300420 kernel: audit: type=1334 audit(1768371925.257:135): prog-id=21 op=LOAD Jan 14 06:25:25.300453 kernel: audit: type=1130 audit(1768371925.292:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.250000 audit: BPF prog-id=18 op=LOAD Jan 14 06:25:25.250000 audit: BPF prog-id=19 op=LOAD Jan 14 06:25:25.250000 audit: BPF prog-id=20 op=LOAD Jan 14 06:25:25.257000 audit: BPF prog-id=21 op=LOAD Jan 14 06:25:25.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.205938 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 14 06:25:25.318358 kernel: audit: type=1130 audit(1768371925.303:137): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.318454 kernel: audit: type=1334 audit(1768371925.313:138): prog-id=22 op=LOAD Jan 14 06:25:25.318492 kernel: audit: type=1334 audit(1768371925.313:139): prog-id=23 op=LOAD Jan 14 06:25:25.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:25:25.313000 audit: BPF prog-id=22 op=LOAD Jan 14 06:25:25.313000 audit: BPF prog-id=23 op=LOAD Jan 14 06:25:25.249151 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 06:25:25.253932 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 06:25:25.259884 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 06:25:25.270940 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 06:25:25.291011 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 06:25:25.302061 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 06:25:25.325687 kernel: audit: type=1334 audit(1768371925.313:140): prog-id=24 op=LOAD Jan 14 06:25:25.313000 audit: BPF prog-id=24 op=LOAD Jan 14 06:25:25.322577 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 06:25:25.325000 audit: BPF prog-id=25 op=LOAD Jan 14 06:25:25.325000 audit: BPF prog-id=26 op=LOAD Jan 14 06:25:25.325000 audit: BPF prog-id=27 op=LOAD Jan 14 06:25:25.327945 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 06:25:25.342692 kernel: loop5: detected capacity change from 0 to 111560 Jan 14 06:25:25.366667 kernel: loop6: detected capacity change from 0 to 8 Jan 14 06:25:25.365028 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Jan 14 06:25:25.365053 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Jan 14 06:25:25.376973 kernel: loop7: detected capacity change from 0 to 229808 Jan 14 06:25:25.378151 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 06:25:25.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.400666 kernel: loop1: detected capacity change from 0 to 50784 Jan 14 06:25:25.411322 systemd-nsresourced[1301]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 06:25:25.414949 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 06:25:25.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.419375 (sd-merge)[1305]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-openstack.raw'. Jan 14 06:25:25.428370 (sd-merge)[1305]: Merged extensions into '/usr'. Jan 14 06:25:25.434952 systemd[1]: Reload requested from client PID 1260 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 06:25:25.435121 systemd[1]: Reloading... Jan 14 06:25:25.587685 zram_generator::config[1348]: No configuration found. Jan 14 06:25:25.649462 systemd-oomd[1293]: No swap; memory pressure usage will be degraded Jan 14 06:25:25.654353 systemd-resolved[1294]: Positive Trust Anchors: Jan 14 06:25:25.654739 systemd-resolved[1294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 06:25:25.654826 systemd-resolved[1294]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 06:25:25.654940 systemd-resolved[1294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 06:25:25.679063 systemd-resolved[1294]: Using system hostname 'srv-i1yja.gb1.brightbox.com'. Jan 14 06:25:25.916280 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 06:25:25.916559 systemd[1]: Reloading finished in 480 ms. Jan 14 06:25:25.945176 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 06:25:25.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.947408 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 06:25:25.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.948610 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 06:25:25.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.949994 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 06:25:25.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:25.955348 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 06:25:25.958079 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 06:25:25.967832 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 06:25:25.975846 systemd[1]: Starting ensure-sysext.service... Jan 14 06:25:25.982905 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 14 06:25:25.993000 audit: BPF prog-id=28 op=LOAD Jan 14 06:25:25.993000 audit: BPF prog-id=18 op=UNLOAD Jan 14 06:25:25.993000 audit: BPF prog-id=29 op=LOAD Jan 14 06:25:25.993000 audit: BPF prog-id=30 op=LOAD Jan 14 06:25:25.993000 audit: BPF prog-id=19 op=UNLOAD Jan 14 06:25:25.993000 audit: BPF prog-id=20 op=UNLOAD Jan 14 06:25:25.996000 audit: BPF prog-id=31 op=LOAD Jan 14 06:25:25.996000 audit: BPF prog-id=25 op=UNLOAD Jan 14 06:25:25.997000 audit: BPF prog-id=32 op=LOAD Jan 14 06:25:25.997000 audit: BPF prog-id=33 op=LOAD Jan 14 06:25:25.997000 audit: BPF prog-id=26 op=UNLOAD Jan 14 06:25:25.997000 audit: BPF prog-id=27 op=UNLOAD Jan 14 06:25:25.998000 audit: BPF prog-id=34 op=LOAD Jan 14 06:25:26.002000 audit: BPF prog-id=15 op=UNLOAD Jan 14 06:25:26.002000 audit: BPF prog-id=35 op=LOAD Jan 14 06:25:26.004000 audit: BPF prog-id=36 op=LOAD Jan 14 06:25:26.004000 audit: BPF prog-id=16 op=UNLOAD Jan 14 06:25:26.004000 audit: BPF prog-id=17 op=UNLOAD Jan 14 06:25:26.005000 audit: BPF prog-id=37 op=LOAD Jan 14 06:25:26.005000 audit: BPF prog-id=22 op=UNLOAD Jan 14 06:25:26.006000 audit: BPF prog-id=38 op=LOAD Jan 14 06:25:26.006000 audit: BPF prog-id=39 op=LOAD Jan 14 06:25:26.006000 audit: BPF prog-id=23 op=UNLOAD Jan 14 06:25:26.006000 audit: BPF prog-id=24 op=UNLOAD Jan 14 06:25:26.010000 audit: BPF prog-id=40 op=LOAD Jan 14 06:25:26.010000 audit: BPF prog-id=21 op=UNLOAD Jan 14 06:25:26.015125 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 06:25:26.016480 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 06:25:26.025332 systemd[1]: Reload requested from client PID 1405 ('systemctl') (unit ensure-sysext.service)... Jan 14 06:25:26.025480 systemd[1]: Reloading... Jan 14 06:25:26.049952 systemd-tmpfiles[1406]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 06:25:26.050012 systemd-tmpfiles[1406]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 06:25:26.050429 systemd-tmpfiles[1406]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 06:25:26.052484 systemd-tmpfiles[1406]: ACLs are not supported, ignoring. Jan 14 06:25:26.052587 systemd-tmpfiles[1406]: ACLs are not supported, ignoring. Jan 14 06:25:26.079990 systemd-tmpfiles[1406]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 06:25:26.080009 systemd-tmpfiles[1406]: Skipping /boot Jan 14 06:25:26.110498 systemd-tmpfiles[1406]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 06:25:26.110530 systemd-tmpfiles[1406]: Skipping /boot Jan 14 06:25:26.137725 zram_generator::config[1440]: No configuration found. Jan 14 06:25:26.424148 systemd[1]: Reloading finished in 398 ms. Jan 14 06:25:26.438714 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 06:25:26.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:25:26.441000 audit: BPF prog-id=41 op=LOAD Jan 14 06:25:26.442000 audit: BPF prog-id=40 op=UNLOAD Jan 14 06:25:26.443000 audit: BPF prog-id=42 op=LOAD Jan 14 06:25:26.443000 audit: BPF prog-id=28 op=UNLOAD Jan 14 06:25:26.443000 audit: BPF prog-id=43 op=LOAD Jan 14 06:25:26.443000 audit: BPF prog-id=44 op=LOAD Jan 14 06:25:26.443000 audit: BPF prog-id=29 op=UNLOAD Jan 14 06:25:26.443000 audit: BPF prog-id=30 op=UNLOAD Jan 14 06:25:26.444000 audit: BPF prog-id=45 op=LOAD Jan 14 06:25:26.444000 audit: BPF prog-id=37 op=UNLOAD Jan 14 06:25:26.444000 audit: BPF prog-id=46 op=LOAD Jan 14 06:25:26.444000 audit: BPF prog-id=47 op=LOAD Jan 14 06:25:26.444000 audit: BPF prog-id=38 op=UNLOAD Jan 14 06:25:26.444000 audit: BPF prog-id=39 op=UNLOAD Jan 14 06:25:26.447000 audit: BPF prog-id=48 op=LOAD Jan 14 06:25:26.447000 audit: BPF prog-id=34 op=UNLOAD Jan 14 06:25:26.447000 audit: BPF prog-id=49 op=LOAD Jan 14 06:25:26.447000 audit: BPF prog-id=50 op=LOAD Jan 14 06:25:26.447000 audit: BPF prog-id=35 op=UNLOAD Jan 14 06:25:26.447000 audit: BPF prog-id=36 op=UNLOAD Jan 14 06:25:26.448000 audit: BPF prog-id=51 op=LOAD Jan 14 06:25:26.459000 audit: BPF prog-id=31 op=UNLOAD Jan 14 06:25:26.459000 audit: BPF prog-id=52 op=LOAD Jan 14 06:25:26.459000 audit: BPF prog-id=53 op=LOAD Jan 14 06:25:26.459000 audit: BPF prog-id=32 op=UNLOAD Jan 14 06:25:26.459000 audit: BPF prog-id=33 op=UNLOAD Jan 14 06:25:26.464386 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 06:25:26.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:26.476917 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 06:25:26.479952 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 06:25:26.493472 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 06:25:26.498000 audit: BPF prog-id=54 op=LOAD Jan 14 06:25:26.498000 audit: BPF prog-id=55 op=LOAD Jan 14 06:25:26.497515 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 06:25:26.499000 audit: BPF prog-id=7 op=UNLOAD Jan 14 06:25:26.499000 audit: BPF prog-id=8 op=UNLOAD Jan 14 06:25:26.501806 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 06:25:26.506206 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 06:25:26.512429 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 06:25:26.512729 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 06:25:26.515392 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 06:25:26.527458 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 06:25:26.531309 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 06:25:26.532821 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 06:25:26.533110 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 14 06:25:26.533259 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 06:25:26.533400 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 06:25:26.540486 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 06:25:26.541874 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 06:25:26.542154 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 06:25:26.542394 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 06:25:26.542534 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 06:25:26.542707 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 06:25:26.549544 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 06:25:26.550970 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 06:25:26.553502 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 06:25:26.555034 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 06:25:26.555303 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 06:25:26.555444 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 06:25:26.555625 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 06:25:26.585000 audit[1504]: SYSTEM_BOOT pid=1504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 06:25:26.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:26.585906 systemd[1]: Finished ensure-sysext.service. Jan 14 06:25:26.589000 audit: BPF prog-id=56 op=LOAD Jan 14 06:25:26.594241 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 14 06:25:26.605788 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 06:25:26.608037 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 14 06:25:26.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:26.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:26.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:26.610701 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 06:25:26.643347 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 06:25:26.645883 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 06:25:26.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:26.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:26.649269 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 06:25:26.663465 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 06:25:26.663802 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 06:25:26.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:26.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:26.668579 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 06:25:26.669813 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 06:25:26.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:26.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:26.673005 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 06:25:26.697000 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 06:25:26.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:25:26.718741 systemd-udevd[1502]: Using default interface naming scheme 'v257'. Jan 14 06:25:26.724686 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 06:25:26.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:26.727195 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 06:25:26.730000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 06:25:26.730000 audit[1539]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffb89afe60 a2=420 a3=0 items=0 ppid=1498 pid=1539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:26.730000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 06:25:26.732145 augenrules[1539]: No rules Jan 14 06:25:26.734198 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 06:25:26.734912 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 06:25:26.789966 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 06:25:26.813505 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 06:25:26.839747 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 14 06:25:26.841725 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 06:25:27.036035 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 14 06:25:27.099347 systemd-networkd[1557]: lo: Link UP Jan 14 06:25:27.099837 systemd-networkd[1557]: lo: Gained carrier Jan 14 06:25:27.102286 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 06:25:27.104012 systemd[1]: Reached target network.target - Network. Jan 14 06:25:27.109816 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 06:25:27.116380 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 06:25:27.159224 systemd-networkd[1557]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 06:25:27.161679 systemd-networkd[1557]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 06:25:27.165123 systemd-networkd[1557]: eth0: Link UP Jan 14 06:25:27.166023 systemd-networkd[1557]: eth0: Gained carrier Jan 14 06:25:27.166076 systemd-networkd[1557]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 06:25:27.181812 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 06:25:27.180168 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jan 14 06:25:27.187739 systemd-networkd[1557]: eth0: DHCPv4 address 10.230.48.98/30, gateway 10.230.48.97 acquired from 10.230.48.97 Jan 14 06:25:27.191043 systemd-timesyncd[1517]: Network configuration changed, trying to establish connection. Jan 14 06:25:27.215695 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 14 06:25:27.262696 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 06:25:27.270682 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 06:25:27.286673 kernel: ACPI: button: Power Button [PWRF] Jan 14 06:25:27.307564 ldconfig[1500]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 06:25:27.315122 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 06:25:27.320534 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 06:25:27.330876 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 06:25:27.335923 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 14 06:25:27.340639 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 14 06:25:27.354606 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 06:25:27.357150 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 06:25:27.358951 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 06:25:27.359912 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 06:25:27.361866 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 14 06:25:27.363379 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 06:25:27.365142 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 06:25:27.366335 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 06:25:27.367823 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 06:25:27.369716 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 06:25:27.370553 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 06:25:27.370598 systemd[1]: Reached target paths.target - Path Units. Jan 14 06:25:27.371714 systemd[1]: Reached target timers.target - Timer Units. Jan 14 06:25:27.374539 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 06:25:27.378435 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 06:25:27.386455 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 06:25:27.389945 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 06:25:27.392129 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 06:25:27.403144 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 06:25:27.405363 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 06:25:27.408572 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jan 14 06:25:27.413958 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 06:25:27.416709 systemd[1]: Reached target basic.target - Basic System. Jan 14 06:25:27.417411 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 06:25:27.417463 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 06:25:27.421793 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 06:25:27.428948 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 06:25:27.435543 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 06:25:27.441367 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 06:25:27.446877 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 06:25:27.452157 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 06:25:27.464828 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 06:25:27.467749 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 14 06:25:27.471924 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 06:25:27.474689 jq[1599]: false Jan 14 06:25:27.480904 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 06:25:27.486076 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Refreshing passwd entry cache Jan 14 06:25:27.486086 oslogin_cache_refresh[1601]: Refreshing passwd entry cache Jan 14 06:25:27.491432 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 06:25:27.504903 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 06:25:27.513458 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 06:25:27.514618 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 06:25:27.515310 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 06:25:27.518977 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 06:25:27.531620 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Failure getting users, quitting Jan 14 06:25:27.531611 oslogin_cache_refresh[1601]: Failure getting users, quitting Jan 14 06:25:27.531832 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 06:25:27.531832 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Refreshing group entry cache Jan 14 06:25:27.531674 oslogin_cache_refresh[1601]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 14 06:25:27.531768 oslogin_cache_refresh[1601]: Refreshing group entry cache Jan 14 06:25:27.532481 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 06:25:27.541845 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Failure getting groups, quitting Jan 14 06:25:27.541845 google_oslogin_nss_cache[1601]: oslogin_cache_refresh[1601]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jan 14 06:25:27.536099 oslogin_cache_refresh[1601]: Failure getting groups, quitting Jan 14 06:25:27.541362 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 06:25:27.536114 oslogin_cache_refresh[1601]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 14 06:25:27.542764 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 06:25:27.543536 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 14 06:25:27.543970 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 14 06:25:27.579683 update_engine[1608]: I20260114 06:25:27.577483 1608 main.cc:92] Flatcar Update Engine starting Jan 14 06:25:27.588065 jq[1609]: true Jan 14 06:25:27.603346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 06:25:27.608719 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 06:25:27.617664 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:25:27.627128 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 06:25:27.630308 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 06:25:27.655738 extend-filesystems[1600]: Found /dev/vda6 Jan 14 06:25:27.683676 extend-filesystems[1600]: Found /dev/vda9 Jan 14 06:25:27.695789 tar[1613]: linux-amd64/LICENSE Jan 14 06:25:27.701006 extend-filesystems[1600]: Checking size of /dev/vda9 Jan 14 06:25:27.703710 tar[1613]: linux-amd64/helm Jan 14 06:25:27.721035 jq[1627]: true Jan 14 06:25:27.730244 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 06:25:27.732826 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 06:25:27.753341 extend-filesystems[1600]: Resized partition /dev/vda9 Jan 14 06:25:27.763406 dbus-daemon[1597]: [system] SELinux support is enabled Jan 14 06:25:27.763909 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 06:25:27.788744 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 06:25:27.788797 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 06:25:27.789916 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 06:25:27.789961 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 06:25:27.803692 extend-filesystems[1656]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 06:25:27.827347 dbus-daemon[1597]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.4' (uid=244 pid=1557 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 14 06:25:27.837395 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 14138363 blocks Jan 14 06:25:27.834546 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 14 06:25:27.837188 systemd[1]: Started update-engine.service - Update Engine. 
Jan 14 06:25:27.839186 update_engine[1608]: I20260114 06:25:27.838934 1608 update_check_scheduler.cc:74] Next update check in 8m19s Jan 14 06:25:27.870822 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 06:25:27.996043 bash[1677]: Updated "/home/core/.ssh/authorized_keys" Jan 14 06:25:27.996856 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 06:25:28.000893 systemd[1]: Starting sshkeys.service... Jan 14 06:25:28.116615 kernel: EXT4-fs (vda9): resized filesystem to 14138363 Jan 14 06:25:28.133601 extend-filesystems[1656]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 14 06:25:28.133601 extend-filesystems[1656]: old_desc_blocks = 1, new_desc_blocks = 7 Jan 14 06:25:28.133601 extend-filesystems[1656]: The filesystem on /dev/vda9 is now 14138363 (4k) blocks long. Jan 14 06:25:28.296908 extend-filesystems[1600]: Resized filesystem in /dev/vda9 Jan 14 06:25:28.137537 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 06:25:28.301258 containerd[1636]: time="2026-01-14T06:25:28Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 06:25:28.301258 containerd[1636]: time="2026-01-14T06:25:28.192080895Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 06:25:28.301258 containerd[1636]: time="2026-01-14T06:25:28.244080947Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.908µs" Jan 14 06:25:28.301258 containerd[1636]: time="2026-01-14T06:25:28.244134902Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 06:25:28.301258 containerd[1636]: time="2026-01-14T06:25:28.244207308Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 06:25:28.301258 containerd[1636]: time="2026-01-14T06:25:28.244241395Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 06:25:28.301258 containerd[1636]: time="2026-01-14T06:25:28.244476647Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 06:25:28.301258 containerd[1636]: time="2026-01-14T06:25:28.244501514Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 06:25:28.301258 containerd[1636]: time="2026-01-14T06:25:28.244620967Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 06:25:28.301258 containerd[1636]: time="2026-01-14T06:25:28.244640566Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 06:25:28.301258 containerd[1636]: time="2026-01-14T06:25:28.258037131Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 06:25:28.301258 containerd[1636]: time="2026-01-14T06:25:28.258081194Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper 
type=io.containerd.snapshotter.v1 Jan 14 06:25:28.137979 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 06:25:28.305076 containerd[1636]: time="2026-01-14T06:25:28.258105872Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 06:25:28.305076 containerd[1636]: time="2026-01-14T06:25:28.258120066Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 06:25:28.305076 containerd[1636]: time="2026-01-14T06:25:28.258421744Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 06:25:28.305076 containerd[1636]: time="2026-01-14T06:25:28.258443030Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 06:25:28.305076 containerd[1636]: time="2026-01-14T06:25:28.258626254Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 06:25:28.305076 containerd[1636]: time="2026-01-14T06:25:28.259155190Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 06:25:28.305076 containerd[1636]: time="2026-01-14T06:25:28.259222332Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 06:25:28.305076 containerd[1636]: time="2026-01-14T06:25:28.259251101Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 06:25:28.305076 containerd[1636]: time="2026-01-14T06:25:28.259296326Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 06:25:28.305076 containerd[1636]: time="2026-01-14T06:25:28.259761289Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 06:25:28.305076 containerd[1636]: time="2026-01-14T06:25:28.259880112Z" level=info msg="metadata content store policy set" policy=shared Jan 14 06:25:28.305076 containerd[1636]: time="2026-01-14T06:25:28.263650459Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 06:25:28.305076 containerd[1636]: time="2026-01-14T06:25:28.263713709Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 06:25:28.305537 containerd[1636]: time="2026-01-14T06:25:28.263807901Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 06:25:28.305537 containerd[1636]: time="2026-01-14T06:25:28.263838837Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 06:25:28.305537 containerd[1636]: time="2026-01-14T06:25:28.263858629Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 06:25:28.305537 containerd[1636]: time="2026-01-14T06:25:28.263895211Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 06:25:28.305537 containerd[1636]: 
time="2026-01-14T06:25:28.263927781Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 06:25:28.305537 containerd[1636]: time="2026-01-14T06:25:28.263943416Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 06:25:28.305537 containerd[1636]: time="2026-01-14T06:25:28.263972494Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 06:25:28.305537 containerd[1636]: time="2026-01-14T06:25:28.264000889Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 06:25:28.305537 containerd[1636]: time="2026-01-14T06:25:28.264028691Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 06:25:28.305537 containerd[1636]: time="2026-01-14T06:25:28.264046277Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 06:25:28.305537 containerd[1636]: time="2026-01-14T06:25:28.264061027Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 06:25:28.305537 containerd[1636]: time="2026-01-14T06:25:28.264079107Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 06:25:28.305537 containerd[1636]: time="2026-01-14T06:25:28.264267549Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 06:25:28.306078 containerd[1636]: time="2026-01-14T06:25:28.264306974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 06:25:28.306078 containerd[1636]: time="2026-01-14T06:25:28.264331389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 06:25:28.306078 containerd[1636]: time="2026-01-14T06:25:28.264348366Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 06:25:28.306078 containerd[1636]: time="2026-01-14T06:25:28.264371622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 06:25:28.306078 containerd[1636]: time="2026-01-14T06:25:28.264389269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 06:25:28.306078 containerd[1636]: time="2026-01-14T06:25:28.264412353Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 06:25:28.306078 containerd[1636]: time="2026-01-14T06:25:28.264464533Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 06:25:28.306078 containerd[1636]: time="2026-01-14T06:25:28.264494090Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 06:25:28.306078 containerd[1636]: time="2026-01-14T06:25:28.264533735Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 06:25:28.306078 containerd[1636]: time="2026-01-14T06:25:28.264551542Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 06:25:28.306078 containerd[1636]: time="2026-01-14T06:25:28.264596712Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 06:25:28.306078 containerd[1636]: 
time="2026-01-14T06:25:28.267791786Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 06:25:28.306078 containerd[1636]: time="2026-01-14T06:25:28.267851837Z" level=info msg="Start snapshots syncer" Jan 14 06:25:28.306078 containerd[1636]: time="2026-01-14T06:25:28.267902644Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 06:25:28.306564 containerd[1636]: time="2026-01-14T06:25:28.272018989Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 06:25:28.306564 containerd[1636]: time="2026-01-14T06:25:28.272086957Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 06:25:28.306855 containerd[1636]: time="2026-01-14T06:25:28.272153335Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 06:25:28.306855 containerd[1636]: time="2026-01-14T06:25:28.272321987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 06:25:28.306855 containerd[1636]: time="2026-01-14T06:25:28.272365112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 06:25:28.306855 containerd[1636]: time="2026-01-14T06:25:28.272385024Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 06:25:28.306855 containerd[1636]: time="2026-01-14T06:25:28.272405709Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 06:25:28.306855 containerd[1636]: time="2026-01-14T06:25:28.272432549Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 06:25:28.306855 containerd[1636]: time="2026-01-14T06:25:28.272450358Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 06:25:28.306855 containerd[1636]: time="2026-01-14T06:25:28.272466902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 06:25:28.306855 containerd[1636]: time="2026-01-14T06:25:28.272482890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 06:25:28.306855 containerd[1636]: time="2026-01-14T06:25:28.272526489Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 06:25:28.306855 containerd[1636]: time="2026-01-14T06:25:28.272578218Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 06:25:28.306855 containerd[1636]: time="2026-01-14T06:25:28.272619237Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 06:25:28.306855 containerd[1636]: time="2026-01-14T06:25:28.272634326Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 06:25:28.307291 containerd[1636]: time="2026-01-14T06:25:28.272681007Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 06:25:28.307291 containerd[1636]: time="2026-01-14T06:25:28.272697087Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 06:25:28.307291 containerd[1636]: time="2026-01-14T06:25:28.272711252Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 06:25:28.307291 containerd[1636]: time="2026-01-14T06:25:28.272744262Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 06:25:28.307291 containerd[1636]: time="2026-01-14T06:25:28.272776107Z" level=info msg="runtime interface created" Jan 14 06:25:28.307291 containerd[1636]: time="2026-01-14T06:25:28.272786567Z" level=info msg="created NRI interface" Jan 14 06:25:28.307291 containerd[1636]: time="2026-01-14T06:25:28.272800889Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 06:25:28.307291 containerd[1636]: time="2026-01-14T06:25:28.272817957Z" level=info msg="Connect containerd service" Jan 14 06:25:28.307291 containerd[1636]: time="2026-01-14T06:25:28.272853773Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 06:25:28.307291 containerd[1636]: time="2026-01-14T06:25:28.278987086Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 06:25:28.308068 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 06:25:28.351834 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 14 06:25:28.358136 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 14 06:25:28.381001 systemd-networkd[1557]: eth0: Gained IPv6LL Jan 14 06:25:28.385268 systemd-timesyncd[1517]: Network configuration changed, trying to establish connection. Jan 14 06:25:28.392832 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 06:25:28.395223 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 06:25:28.402714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:25:28.407525 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 06:25:28.421976 systemd-logind[1607]: Watching system buttons on /dev/input/event3 (Power Button) Jan 14 06:25:28.422016 systemd-logind[1607]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 14 06:25:28.422339 systemd-logind[1607]: New seat seat0. Jan 14 06:25:28.427037 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 06:25:28.462663 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:25:28.586348 sshd_keygen[1648]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 06:25:28.589112 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 06:25:28.666794 locksmithd[1659]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 06:25:28.698348 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 06:25:28.711149 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 06:25:28.745807 containerd[1636]: time="2026-01-14T06:25:28.745756110Z" level=info msg="Start subscribing containerd event" Jan 14 06:25:28.745945 containerd[1636]: time="2026-01-14T06:25:28.745844446Z" level=info msg="Start recovering state" Jan 14 06:25:28.746061 containerd[1636]: time="2026-01-14T06:25:28.746036034Z" level=info msg="Start event monitor" Jan 14 06:25:28.746145 containerd[1636]: time="2026-01-14T06:25:28.746069502Z" level=info msg="Start cni network conf syncer for default" Jan 14 06:25:28.746145 containerd[1636]: time="2026-01-14T06:25:28.746086870Z" level=info msg="Start streaming server" Jan 14 06:25:28.746145 containerd[1636]: time="2026-01-14T06:25:28.746106599Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 06:25:28.746145 containerd[1636]: time="2026-01-14T06:25:28.746118671Z" level=info msg="runtime interface starting up..." Jan 14 06:25:28.746145 containerd[1636]: time="2026-01-14T06:25:28.746131637Z" level=info msg="starting plugins..." Jan 14 06:25:28.746495 containerd[1636]: time="2026-01-14T06:25:28.746161194Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 06:25:28.753256 containerd[1636]: time="2026-01-14T06:25:28.751964744Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 06:25:28.753256 containerd[1636]: time="2026-01-14T06:25:28.752050248Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 06:25:28.753256 containerd[1636]: time="2026-01-14T06:25:28.752154767Z" level=info msg="containerd successfully booted in 0.569735s" Jan 14 06:25:28.752861 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 06:25:28.762024 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 06:25:28.762382 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 06:25:28.770769 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 06:25:28.807127 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Jan 14 06:25:28.807548 dbus-daemon[1597]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 14 06:25:28.808259 dbus-daemon[1597]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.9' (uid=0 pid=1658 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 14 06:25:28.819409 systemd[1]: Starting polkit.service - Authorization Manager... Jan 14 06:25:28.821771 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 06:25:28.829225 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 06:25:28.834394 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 14 06:25:28.837105 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 06:25:28.938328 polkitd[1732]: Started polkitd version 126 Jan 14 06:25:28.954483 polkitd[1732]: Loading rules from directory /etc/polkit-1/rules.d Jan 14 06:25:28.956965 polkitd[1732]: Loading rules from directory /run/polkit-1/rules.d Jan 14 06:25:28.957040 polkitd[1732]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 14 06:25:28.957847 polkitd[1732]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 14 06:25:28.957895 polkitd[1732]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 14 06:25:28.959128 polkitd[1732]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 14 06:25:28.961074 polkitd[1732]: Finished loading, compiling and executing 2 rules Jan 14 06:25:28.963058 systemd[1]: Started polkit.service - Authorization Manager. Jan 14 06:25:28.963106 dbus-daemon[1597]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 14 06:25:28.965655 polkitd[1732]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 14 06:25:28.993864 systemd-hostnamed[1658]: Hostname set to (static) Jan 14 06:25:29.000420 systemd-timesyncd[1517]: Network configuration changed, trying to establish connection. Jan 14 06:25:29.004257 systemd-networkd[1557]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8c18:24:19ff:fee6:3062/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8c18:24:19ff:fee6:3062/64 assigned by NDisc. Jan 14 06:25:29.004388 systemd-networkd[1557]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 14 06:25:29.005624 tar[1613]: linux-amd64/README.md Jan 14 06:25:29.026761 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 06:25:29.666969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:25:29.683411 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 06:25:29.729368 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:25:29.729485 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:25:30.165715 systemd-timesyncd[1517]: Network configuration changed, trying to establish connection. 
Jan 14 06:25:30.346322 kubelet[1753]: E0114 06:25:30.346263 1753 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 06:25:30.349510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 06:25:30.349873 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 06:25:30.350612 systemd[1]: kubelet.service: Consumed 1.056s CPU time, 270.8M memory peak. Jan 14 06:25:31.746139 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:25:31.746406 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:25:33.959337 login[1734]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:25:33.980470 systemd-logind[1607]: New session 1 of user core. Jan 14 06:25:33.984422 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 06:25:33.986781 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 06:25:34.019013 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 06:25:34.023222 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 06:25:34.045120 (systemd)[1769]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:25:34.050305 systemd-logind[1607]: New session 2 of user core. Jan 14 06:25:34.241094 systemd[1769]: Queued start job for default target default.target. Jan 14 06:25:34.261540 systemd[1769]: Created slice app.slice - User Application Slice. Jan 14 06:25:34.261920 systemd[1769]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 06:25:34.261951 systemd[1769]: Reached target paths.target - Paths. Jan 14 06:25:34.262061 systemd[1769]: Reached target timers.target - Timers. Jan 14 06:25:34.263056 login[1735]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:25:34.266524 systemd[1769]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 06:25:34.269063 systemd[1769]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 06:25:34.274965 systemd-logind[1607]: New session 3 of user core. Jan 14 06:25:34.298996 systemd[1769]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 06:25:34.299324 systemd[1769]: Reached target sockets.target - Sockets. Jan 14 06:25:34.300897 systemd[1769]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 06:25:34.301061 systemd[1769]: Reached target basic.target - Basic System. Jan 14 06:25:34.301157 systemd[1769]: Reached target default.target - Main User Target. Jan 14 06:25:34.301224 systemd[1769]: Startup finished in 243ms. Jan 14 06:25:34.301335 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 06:25:34.313002 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 06:25:34.315828 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 14 06:25:35.764397 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:25:35.764616 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 06:25:35.772940 coreos-metadata[1596]: Jan 14 06:25:35.772 WARN failed to locate config-drive, using the metadata service API instead Jan 14 06:25:35.780669 coreos-metadata[1688]: Jan 14 06:25:35.779 WARN failed to locate config-drive, using the metadata service API instead Jan 14 06:25:35.799721 coreos-metadata[1596]: Jan 14 06:25:35.799 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 14 06:25:35.800832 coreos-metadata[1688]: Jan 14 06:25:35.800 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 14 06:25:35.805116 coreos-metadata[1596]: Jan 14 06:25:35.804 INFO Fetch failed with 404: resource not found Jan 14 06:25:35.805377 coreos-metadata[1596]: Jan 14 06:25:35.805 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 14 06:25:35.805557 coreos-metadata[1596]: Jan 14 06:25:35.805 INFO Fetch successful Jan 14 06:25:35.805665 coreos-metadata[1596]: Jan 14 06:25:35.805 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 14 06:25:35.819295 coreos-metadata[1596]: Jan 14 06:25:35.819 INFO Fetch successful Jan 14 06:25:35.819477 coreos-metadata[1596]: Jan 14 06:25:35.819 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 14 06:25:35.825625 coreos-metadata[1688]: Jan 14 06:25:35.825 INFO Fetch successful Jan 14 06:25:35.825835 coreos-metadata[1688]: Jan 14 06:25:35.825 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 14 06:25:35.836906 coreos-metadata[1596]: Jan 14 06:25:35.836 INFO Fetch successful Jan 14 06:25:35.836906 coreos-metadata[1596]: Jan 14 06:25:35.836 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 14 06:25:35.855603 coreos-metadata[1596]: Jan 14 06:25:35.855 INFO Fetch successful Jan 14 06:25:35.856033 coreos-metadata[1596]: Jan 14 06:25:35.855 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 14 06:25:35.875345 coreos-metadata[1596]: Jan 14 06:25:35.875 INFO Fetch successful Jan 14 06:25:35.893482 coreos-metadata[1688]: Jan 14 06:25:35.893 INFO Fetch successful Jan 14 06:25:35.910401 unknown[1688]: wrote ssh authorized keys file for user: core Jan 14 06:25:35.914715 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 06:25:35.916342 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 06:25:35.940183 update-ssh-keys[1814]: Updated "/home/core/.ssh/authorized_keys" Jan 14 06:25:35.942620 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 14 06:25:35.945803 systemd[1]: Finished sshkeys.service. Jan 14 06:25:35.947301 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 06:25:35.948207 systemd[1]: Startup finished in 3.242s (kernel) + 13.956s (initrd) + 12.287s (userspace) = 29.486s. Jan 14 06:25:37.490557 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 06:25:37.492928 systemd[1]: Started sshd@0-10.230.48.98:22-20.161.92.111:34560.service - OpenSSH per-connection server daemon (20.161.92.111:34560). 
Jan 14 06:25:38.023219 sshd[1818]: Accepted publickey for core from 20.161.92.111 port 34560 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:25:38.025551 sshd-session[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:25:38.033532 systemd-logind[1607]: New session 4 of user core. Jan 14 06:25:38.040924 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 06:25:38.396345 systemd[1]: Started sshd@1-10.230.48.98:22-20.161.92.111:34574.service - OpenSSH per-connection server daemon (20.161.92.111:34574). Jan 14 06:25:38.926414 sshd[1825]: Accepted publickey for core from 20.161.92.111 port 34574 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:25:38.928792 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:25:38.938600 systemd-logind[1607]: New session 5 of user core. Jan 14 06:25:38.944934 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 06:25:39.206845 sshd[1829]: Connection closed by 20.161.92.111 port 34574 Jan 14 06:25:39.208107 sshd-session[1825]: pam_unix(sshd:session): session closed for user core Jan 14 06:25:39.214587 systemd[1]: sshd@1-10.230.48.98:22-20.161.92.111:34574.service: Deactivated successfully. Jan 14 06:25:39.217300 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 06:25:39.218592 systemd-logind[1607]: Session 5 logged out. Waiting for processes to exit. Jan 14 06:25:39.220945 systemd-logind[1607]: Removed session 5. Jan 14 06:25:39.308619 systemd[1]: Started sshd@2-10.230.48.98:22-20.161.92.111:34582.service - OpenSSH per-connection server daemon (20.161.92.111:34582). Jan 14 06:25:39.824830 sshd[1835]: Accepted publickey for core from 20.161.92.111 port 34582 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:25:39.827584 sshd-session[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:25:39.836728 systemd-logind[1607]: New session 6 of user core. Jan 14 06:25:39.853958 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 06:25:40.101439 sshd[1839]: Connection closed by 20.161.92.111 port 34582 Jan 14 06:25:40.102584 sshd-session[1835]: pam_unix(sshd:session): session closed for user core Jan 14 06:25:40.109298 systemd-logind[1607]: Session 6 logged out. Waiting for processes to exit. Jan 14 06:25:40.110606 systemd[1]: sshd@2-10.230.48.98:22-20.161.92.111:34582.service: Deactivated successfully. Jan 14 06:25:40.113861 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 06:25:40.114919 systemd-logind[1607]: Removed session 6. Jan 14 06:25:40.216308 systemd[1]: Started sshd@3-10.230.48.98:22-20.161.92.111:34584.service - OpenSSH per-connection server daemon (20.161.92.111:34584). Jan 14 06:25:40.600745 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 06:25:40.604452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:25:40.739572 sshd[1845]: Accepted publickey for core from 20.161.92.111 port 34584 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:25:40.741928 sshd-session[1845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:25:40.750486 systemd-logind[1607]: New session 7 of user core. Jan 14 06:25:40.763081 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 06:25:40.829355 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 06:25:40.840258 (kubelet)[1858]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 06:25:40.913695 kubelet[1858]: E0114 06:25:40.912387 1858 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 06:25:40.918224 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 06:25:40.918487 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 06:25:40.920563 systemd[1]: kubelet.service: Consumed 261ms CPU time, 110.6M memory peak. Jan 14 06:25:41.021080 sshd[1852]: Connection closed by 20.161.92.111 port 34584 Jan 14 06:25:41.023027 sshd-session[1845]: pam_unix(sshd:session): session closed for user core Jan 14 06:25:41.028511 systemd[1]: sshd@3-10.230.48.98:22-20.161.92.111:34584.service: Deactivated successfully. Jan 14 06:25:41.031159 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 06:25:41.032406 systemd-logind[1607]: Session 7 logged out. Waiting for processes to exit. Jan 14 06:25:41.035115 systemd-logind[1607]: Removed session 7. Jan 14 06:25:41.137555 systemd[1]: Started sshd@4-10.230.48.98:22-20.161.92.111:52128.service - OpenSSH per-connection server daemon (20.161.92.111:52128). Jan 14 06:25:41.654173 sshd[1870]: Accepted publickey for core from 20.161.92.111 port 52128 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:25:41.656311 sshd-session[1870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:25:41.667054 systemd-logind[1607]: New session 8 of user core. Jan 14 06:25:41.685012 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 06:25:41.861159 sudo[1875]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 06:25:41.861751 sudo[1875]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 06:25:41.871264 sudo[1875]: pam_unix(sudo:session): session closed for user root Jan 14 06:25:41.962489 sshd[1874]: Connection closed by 20.161.92.111 port 52128 Jan 14 06:25:41.964138 sshd-session[1870]: pam_unix(sshd:session): session closed for user core Jan 14 06:25:41.972794 systemd[1]: sshd@4-10.230.48.98:22-20.161.92.111:52128.service: Deactivated successfully. Jan 14 06:25:41.976277 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 06:25:41.978715 systemd-logind[1607]: Session 8 logged out. Waiting for processes to exit. Jan 14 06:25:41.981015 systemd-logind[1607]: Removed session 8. Jan 14 06:25:42.070403 systemd[1]: Started sshd@5-10.230.48.98:22-20.161.92.111:52140.service - OpenSSH per-connection server daemon (20.161.92.111:52140). Jan 14 06:25:42.584702 sshd[1882]: Accepted publickey for core from 20.161.92.111 port 52140 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:25:42.586501 sshd-session[1882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:25:42.594457 systemd-logind[1607]: New session 9 of user core. Jan 14 06:25:42.601946 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 14 06:25:42.774530 sudo[1888]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 06:25:42.775077 sudo[1888]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 06:25:42.783423 sudo[1888]: pam_unix(sudo:session): session closed for user root Jan 14 06:25:42.794305 sudo[1887]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 06:25:42.794800 sudo[1887]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 06:25:42.805732 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 06:25:42.866000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 06:25:42.868736 kernel: kauditd_printk_skb: 84 callbacks suppressed Jan 14 06:25:42.868835 kernel: audit: type=1305 audit(1768371942.866:223): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 06:25:42.870966 augenrules[1912]: No rules Jan 14 06:25:42.866000 audit[1912]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdbf8cede0 a2=420 a3=0 items=0 ppid=1893 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:42.874495 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 06:25:42.875180 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 06:25:42.878666 kernel: audit: type=1300 audit(1768371942.866:223): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdbf8cede0 a2=420 a3=0 items=0 ppid=1893 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:42.878955 sudo[1887]: pam_unix(sudo:session): session closed for user root Jan 14 06:25:42.866000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 06:25:42.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:42.883789 kernel: audit: type=1327 audit(1768371942.866:223): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 06:25:42.883877 kernel: audit: type=1130 audit(1768371942.874:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:42.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:42.887602 kernel: audit: type=1131 audit(1768371942.874:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:25:42.878000 audit[1887]: USER_END pid=1887 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:25:42.891474 kernel: audit: type=1106 audit(1768371942.878:226): pid=1887 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:25:42.878000 audit[1887]: CRED_DISP pid=1887 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:25:42.895646 kernel: audit: type=1104 audit(1768371942.878:227): pid=1887 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:25:42.969730 sshd[1886]: Connection closed by 20.161.92.111 port 52140 Jan 14 06:25:42.970787 sshd-session[1882]: pam_unix(sshd:session): session closed for user core Jan 14 06:25:42.973000 audit[1882]: USER_END pid=1882 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:25:42.976000 audit[1882]: CRED_DISP pid=1882 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:25:42.982020 kernel: audit: type=1106 audit(1768371942.973:228): pid=1882 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:25:42.982109 kernel: audit: type=1104 audit(1768371942.976:229): pid=1882 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:25:42.985422 systemd[1]: sshd@5-10.230.48.98:22-20.161.92.111:52140.service: Deactivated successfully. Jan 14 06:25:42.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.230.48.98:22-20.161.92.111:52140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:42.989072 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 06:25:42.990696 kernel: audit: type=1131 audit(1768371942.985:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.230.48.98:22-20.161.92.111:52140 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:42.992201 systemd-logind[1607]: Session 9 logged out. Waiting for processes to exit. Jan 14 06:25:42.994084 systemd-logind[1607]: Removed session 9. 
Jan 14 06:25:43.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.230.48.98:22-20.161.92.111:52154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:43.077408 systemd[1]: Started sshd@6-10.230.48.98:22-20.161.92.111:52154.service - OpenSSH per-connection server daemon (20.161.92.111:52154). Jan 14 06:25:43.597000 audit[1921]: USER_ACCT pid=1921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:25:43.598091 sshd[1921]: Accepted publickey for core from 20.161.92.111 port 52154 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:25:43.600327 sshd-session[1921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:25:43.598000 audit[1921]: CRED_ACQ pid=1921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:25:43.598000 audit[1921]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff7e941db0 a2=3 a3=0 items=0 ppid=1 pid=1921 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:43.598000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:25:43.609619 systemd-logind[1607]: New session 10 of user core. Jan 14 06:25:43.618196 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 14 06:25:43.622000 audit[1921]: USER_START pid=1921 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:25:43.625000 audit[1925]: CRED_ACQ pid=1925 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:25:43.787000 audit[1926]: USER_ACCT pid=1926 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:25:43.788722 sudo[1926]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 06:25:43.788000 audit[1926]: CRED_REFR pid=1926 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:25:43.788000 audit[1926]: USER_START pid=1926 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 06:25:43.789198 sudo[1926]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 06:25:44.309469 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 06:25:44.325418 (dockerd)[1946]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 06:25:44.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.48.98:22-64.225.73.213:45040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:44.495051 systemd[1]: Started sshd@7-10.230.48.98:22-64.225.73.213:45040.service - OpenSSH per-connection server daemon (64.225.73.213:45040). Jan 14 06:25:44.715002 dockerd[1946]: time="2026-01-14T06:25:44.714543896Z" level=info msg="Starting up" Jan 14 06:25:44.716721 dockerd[1946]: time="2026-01-14T06:25:44.716686930Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 06:25:44.726264 sshd[1952]: Invalid user nagios from 64.225.73.213 port 45040 Jan 14 06:25:44.741763 dockerd[1946]: time="2026-01-14T06:25:44.741588601Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 06:25:44.795591 dockerd[1946]: time="2026-01-14T06:25:44.795532334Z" level=info msg="Loading containers: start." Jan 14 06:25:44.799441 sshd[1952]: Connection closed by invalid user nagios 64.225.73.213 port 45040 [preauth] Jan 14 06:25:44.801000 audit[1952]: USER_ERR pid=1952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:25:44.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.48.98:22-64.225.73.213:45040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:44.805035 systemd[1]: sshd@7-10.230.48.98:22-64.225.73.213:45040.service: Deactivated successfully. 
Jan 14 06:25:44.811711 kernel: Initializing XFRM netlink socket Jan 14 06:25:44.908000 audit[2003]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:44.908000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc1dce2930 a2=0 a3=0 items=0 ppid=1946 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:44.908000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 06:25:44.912000 audit[2005]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:44.912000 audit[2005]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdcc15bc70 a2=0 a3=0 items=0 ppid=1946 pid=2005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:44.912000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 06:25:44.915000 audit[2007]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:44.915000 audit[2007]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe8cd39a80 a2=0 a3=0 items=0 ppid=1946 pid=2007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:44.915000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 06:25:44.918000 audit[2009]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:44.918000 audit[2009]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd35adbe10 a2=0 a3=0 items=0 ppid=1946 pid=2009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:44.918000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 06:25:44.921000 audit[2011]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:44.921000 audit[2011]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff13ecbdb0 a2=0 a3=0 items=0 ppid=1946 pid=2011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:44.921000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 06:25:44.924000 audit[2013]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:44.924000 audit[2013]: SYSCALL arch=c000003e syscall=46 
success=yes exit=112 a0=3 a1=7ffe891a66b0 a2=0 a3=0 items=0 ppid=1946 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:44.924000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 06:25:44.927000 audit[2015]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2015 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:44.927000 audit[2015]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffc9b2d73c0 a2=0 a3=0 items=0 ppid=1946 pid=2015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:44.927000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 06:25:44.930000 audit[2017]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:44.930000 audit[2017]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffe29fa0680 a2=0 a3=0 items=0 ppid=1946 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:44.930000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 06:25:44.969000 audit[2020]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:44.969000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=472 a0=3 a1=7fff8787afa0 a2=0 a3=0 items=0 ppid=1946 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:44.969000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 06:25:44.973000 audit[2022]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:44.973000 audit[2022]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffc478a0da0 a2=0 a3=0 items=0 ppid=1946 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:44.973000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 06:25:44.977000 audit[2024]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:44.977000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7fff32d76c40 a2=0 
a3=0 items=0 ppid=1946 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:44.977000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 06:25:44.980000 audit[2026]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:44.980000 audit[2026]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7fff03749b00 a2=0 a3=0 items=0 ppid=1946 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:44.980000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 06:25:44.983000 audit[2028]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:44.983000 audit[2028]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffcea2fbad0 a2=0 a3=0 items=0 ppid=1946 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:44.983000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 06:25:45.036000 audit[2058]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2058 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:25:45.036000 audit[2058]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc57d8cef0 a2=0 a3=0 items=0 ppid=1946 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.036000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 06:25:45.039000 audit[2060]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:25:45.039000 audit[2060]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff1bd10ba0 a2=0 a3=0 items=0 ppid=1946 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.039000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 06:25:45.042000 audit[2062]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:25:45.042000 audit[2062]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd87b986d0 a2=0 a3=0 items=0 ppid=1946 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 06:25:45.042000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 06:25:45.045000 audit[2064]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2064 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:25:45.045000 audit[2064]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffd3a8ba90 a2=0 a3=0 items=0 ppid=1946 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.045000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 06:25:45.049000 audit[2066]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2066 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:25:45.049000 audit[2066]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd45fe69e0 a2=0 a3=0 items=0 ppid=1946 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.049000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 06:25:45.052000 audit[2068]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:25:45.052000 audit[2068]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff466c9880 a2=0 a3=0 items=0 ppid=1946 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.052000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 06:25:45.055000 audit[2070]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2070 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:25:45.055000 audit[2070]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff3ddb2e50 a2=0 a3=0 items=0 ppid=1946 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.055000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 06:25:45.058000 audit[2072]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2072 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:25:45.058000 audit[2072]: SYSCALL arch=c000003e syscall=46 success=yes exit=384 a0=3 a1=7ffc332a5d60 a2=0 a3=0 items=0 ppid=1946 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.058000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 06:25:45.061000 audit[2074]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:25:45.061000 audit[2074]: SYSCALL arch=c000003e syscall=46 success=yes exit=484 a0=3 a1=7ffdaaed2400 a2=0 a3=0 items=0 ppid=1946 pid=2074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.061000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 06:25:45.065000 audit[2076]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:25:45.065000 audit[2076]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7fff98a765c0 a2=0 a3=0 items=0 ppid=1946 pid=2076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.065000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 06:25:45.068000 audit[2078]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:25:45.068000 audit[2078]: SYSCALL arch=c000003e syscall=46 success=yes exit=236 a0=3 a1=7ffec5352880 a2=0 a3=0 items=0 ppid=1946 pid=2078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.068000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 06:25:45.071000 audit[2080]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:25:45.071000 audit[2080]: SYSCALL arch=c000003e syscall=46 success=yes exit=248 a0=3 a1=7ffd5e255170 a2=0 a3=0 items=0 ppid=1946 pid=2080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.071000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 06:25:45.074000 audit[2082]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:25:45.074000 audit[2082]: SYSCALL arch=c000003e syscall=46 success=yes exit=232 a0=3 a1=7ffe138f8870 a2=0 a3=0 items=0 ppid=1946 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.074000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 06:25:45.082000 audit[2087]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:45.082000 audit[2087]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe2a260df0 a2=0 a3=0 items=0 ppid=1946 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.082000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 06:25:45.085000 audit[2089]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:45.085000 audit[2089]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffd7a3e0fc0 a2=0 a3=0 items=0 ppid=1946 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.085000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 06:25:45.088000 audit[2091]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2091 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:45.088000 audit[2091]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe02da2140 a2=0 a3=0 items=0 ppid=1946 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.088000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 06:25:45.091000 audit[2093]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:25:45.091000 audit[2093]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcc1585030 a2=0 a3=0 items=0 ppid=1946 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.091000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 06:25:45.095000 audit[2095]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:25:45.095000 audit[2095]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffeaab7bf80 a2=0 a3=0 items=0 ppid=1946 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.095000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 06:25:45.098000 audit[2097]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2097 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:25:45.098000 audit[2097]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff1708cdb0 a2=0 a3=0 items=0 ppid=1946 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.098000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 06:25:45.109074 systemd-timesyncd[1517]: Network configuration changed, trying to establish connection. Jan 14 06:25:45.122000 audit[2101]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2101 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:45.122000 audit[2101]: SYSCALL arch=c000003e syscall=46 success=yes exit=520 a0=3 a1=7ffca26c6600 a2=0 a3=0 items=0 ppid=1946 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.122000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 06:25:45.125000 audit[2103]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2103 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:45.125000 audit[2103]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe83d0de90 a2=0 a3=0 items=0 ppid=1946 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.125000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 06:25:45.139000 audit[2111]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2111 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:45.139000 audit[2111]: SYSCALL arch=c000003e syscall=46 success=yes exit=300 a0=3 a1=7ffcc36059e0 a2=0 a3=0 items=0 ppid=1946 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.139000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 06:25:45.152000 audit[2117]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:45.152000 audit[2117]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe6f7af1f0 a2=0 a3=0 items=0 ppid=1946 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.152000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 06:25:45.156000 audit[2119]: NETFILTER_CFG table=filter:38 
family=2 entries=1 op=nft_register_rule pid=2119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:45.156000 audit[2119]: SYSCALL arch=c000003e syscall=46 success=yes exit=512 a0=3 a1=7ffc5733ef10 a2=0 a3=0 items=0 ppid=1946 pid=2119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.156000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 06:25:45.159000 audit[2121]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2121 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:45.159000 audit[2121]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcbb41d2d0 a2=0 a3=0 items=0 ppid=1946 pid=2121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.159000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 06:25:45.162000 audit[2123]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:45.162000 audit[2123]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc8fc9bbd0 a2=0 a3=0 items=0 ppid=1946 pid=2123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.162000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 06:25:45.166000 audit[2125]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:25:45.166000 audit[2125]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff7fee0c10 a2=0 a3=0 items=0 ppid=1946 pid=2125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:25:45.166000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 06:25:45.167892 systemd-networkd[1557]: docker0: Link UP Jan 14 06:25:45.172658 dockerd[1946]: time="2026-01-14T06:25:45.171742411Z" level=info msg="Loading containers: done." 
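The audit records above show dockerd (ppid 1946) creating its DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER chains through /usr/bin/xtables-nft-multi, first for IPv4 (family=2) and then for IPv6 (family=10); syscall=46 on arch=c000003e is sendmsg on x86_64, the netlink call that carries each nft_register_chain/nft_register_rule change. auditd prints the PROCTITLE field as hex because the raw /proc/<pid>/cmdline separates arguments with NUL bytes. A minimal decoding sketch (Python, illustrative only; the sample value is copied verbatim from the first PROCTITLE record above):

    # Decode an auditd PROCTITLE hex string back into the command line it records.
    # auditd hex-encodes proctitle because argv entries are separated by NUL bytes.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(part.decode() for part in raw.split(b"\x00") if part)

    sample = ("2F7573722F62696E2F69707461626C6573002D2D77616974"
              "002D74006E6174002D4E00444F434B4552")
    print(decode_proctitle(sample))  # -> /usr/bin/iptables --wait -t nat -N DOCKER

Decoded this way, the sequence reads as the usual dockerd firewall bootstrap: create the chains, hook the nat PREROUTING/OUTPUT chains into DOCKER, insert FORWARD jumps to DOCKER-FORWARD and DOCKER-USER, then add the docker0 MASQUERADE, RETURN, ACCEPT and DROP rules once docker0 comes up below.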
Jan 14 06:25:45.197931 dockerd[1946]: time="2026-01-14T06:25:45.197809468Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 06:25:45.198271 dockerd[1946]: time="2026-01-14T06:25:45.198237977Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 06:25:45.198535 dockerd[1946]: time="2026-01-14T06:25:45.198496185Z" level=info msg="Initializing buildkit" Jan 14 06:25:45.225486 dockerd[1946]: time="2026-01-14T06:25:45.225387477Z" level=info msg="Completed buildkit initialization" Jan 14 06:25:45.235794 dockerd[1946]: time="2026-01-14T06:25:45.235731356Z" level=info msg="Daemon has completed initialization" Jan 14 06:25:45.236411 dockerd[1946]: time="2026-01-14T06:25:45.235966446Z" level=info msg="API listen on /run/docker.sock" Jan 14 06:25:45.236306 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 06:25:45.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:46.459595 containerd[1636]: time="2026-01-14T06:25:46.459512636Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 14 06:25:48.261272 systemd-timesyncd[1517]: Contacted time server [2a00:d300:100:1::123]:123 (2.flatcar.pool.ntp.org). Jan 14 06:25:48.261309 systemd-resolved[1294]: Clock change detected. Flushing caches. Jan 14 06:25:48.261355 systemd-timesyncd[1517]: Initial clock synchronization to Wed 2026-01-14 06:25:48.260798 UTC. Jan 14 06:25:48.518526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3802659146.mount: Deactivated successfully. Jan 14 06:25:52.489778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 06:25:52.497800 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:25:52.691323 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:25:52.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:52.693050 kernel: kauditd_printk_skb: 135 callbacks suppressed Jan 14 06:25:52.693214 kernel: audit: type=1130 audit(1768371952.690:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:25:52.709324 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 06:25:52.772603 kubelet[2232]: E0114 06:25:52.770959 2232 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 06:25:52.777026 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 06:25:52.777241 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
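The kubelet failure just above, and the identical ones that recur at 06:26:03 and 06:26:13, are the normal pre-bootstrap crash loop: the unit exits with status=1 because /var/lib/kubelet/config.yaml does not exist yet, and that file typically only appears once kubeadm init/join has run on the node (which matches the kubelet finally staying up at 06:26:19 further down). A hedged sketch for telling this benign state apart from a real kubelet fault when scanning the journal; the pattern and path are copied from the error line above, and the helper name is mine:

    import re

    # Matches the "failed to load kubelet config file ... no such file or directory"
    # error shown above and extracts the missing path.
    MISSING_CONFIG = re.compile(
        r'failed to load kubelet config file, path: (?P<path>\S+?),'
        r'.*no such file or directory'
    )

    def is_pre_bootstrap_failure(journal_line: str) -> bool:
        m = MISSING_CONFIG.search(journal_line)
        return bool(m and m.group("path") == "/var/lib/kubelet/config.yaml")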
Jan 14 06:25:52.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 06:25:52.782712 kernel: audit: type=1131 audit(1768371952.776:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 06:25:52.778622 systemd[1]: kubelet.service: Consumed 214ms CPU time, 110.2M memory peak. Jan 14 06:25:53.114679 containerd[1636]: time="2026-01-14T06:25:53.113795793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:25:53.116718 containerd[1636]: time="2026-01-14T06:25:53.116657390Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=28990042" Jan 14 06:25:53.117551 containerd[1636]: time="2026-01-14T06:25:53.117478242Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:25:53.123600 containerd[1636]: time="2026-01-14T06:25:53.122840060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:25:53.125412 containerd[1636]: time="2026-01-14T06:25:53.124159982Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 5.345623907s" Jan 14 06:25:53.125412 containerd[1636]: time="2026-01-14T06:25:53.124246554Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 14 06:25:53.125740 containerd[1636]: time="2026-01-14T06:25:53.125706919Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 14 06:25:57.380239 containerd[1636]: time="2026-01-14T06:25:57.380121441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:25:57.382424 containerd[1636]: time="2026-01-14T06:25:57.382333920Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26008626" Jan 14 06:25:57.384146 containerd[1636]: time="2026-01-14T06:25:57.383555209Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:25:57.386590 containerd[1636]: time="2026-01-14T06:25:57.386537858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:25:57.388243 containerd[1636]: time="2026-01-14T06:25:57.388061274Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id 
\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 4.262313637s" Jan 14 06:25:57.388243 containerd[1636]: time="2026-01-14T06:25:57.388106739Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 14 06:25:57.389576 containerd[1636]: time="2026-01-14T06:25:57.389530291Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 14 06:25:59.732478 containerd[1636]: time="2026-01-14T06:25:59.732399578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:25:59.734658 containerd[1636]: time="2026-01-14T06:25:59.734615836Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20149965" Jan 14 06:25:59.735368 containerd[1636]: time="2026-01-14T06:25:59.735288350Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:25:59.740173 containerd[1636]: time="2026-01-14T06:25:59.740115836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:25:59.743046 containerd[1636]: time="2026-01-14T06:25:59.742861648Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 2.35326629s" Jan 14 06:25:59.743046 containerd[1636]: time="2026-01-14T06:25:59.742925656Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 14 06:25:59.744008 containerd[1636]: time="2026-01-14T06:25:59.743510132Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 14 06:26:00.370608 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 14 06:26:00.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:00.378747 kernel: audit: type=1131 audit(1768371960.369:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:00.387000 audit: BPF prog-id=61 op=UNLOAD Jan 14 06:26:00.390581 kernel: audit: type=1334 audit(1768371960.387:287): prog-id=61 op=UNLOAD Jan 14 06:26:02.525398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2962087112.mount: Deactivated successfully. Jan 14 06:26:02.784299 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Jan 14 06:26:02.788325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:26:03.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:03.014028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:26:03.021675 kernel: audit: type=1130 audit(1768371963.012:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:03.037967 (kubelet)[2265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 06:26:03.133435 kubelet[2265]: E0114 06:26:03.133357 2265 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 06:26:03.137201 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 06:26:03.137454 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 06:26:03.144599 kernel: audit: type=1131 audit(1768371963.136:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 06:26:03.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 06:26:03.144287 systemd[1]: kubelet.service: Consumed 241ms CPU time, 108.2M memory peak. 
Jan 14 06:26:03.656719 containerd[1636]: time="2026-01-14T06:26:03.656649339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:26:03.658182 containerd[1636]: time="2026-01-14T06:26:03.658153296Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31926374" Jan 14 06:26:03.658473 containerd[1636]: time="2026-01-14T06:26:03.658429255Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:26:03.661504 containerd[1636]: time="2026-01-14T06:26:03.661468392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:26:03.662792 containerd[1636]: time="2026-01-14T06:26:03.662742343Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 3.919197132s" Jan 14 06:26:03.662972 containerd[1636]: time="2026-01-14T06:26:03.662937824Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 14 06:26:03.665326 containerd[1636]: time="2026-01-14T06:26:03.665280620Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 14 06:26:04.339107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3192346877.mount: Deactivated successfully. 
Jan 14 06:26:07.183659 containerd[1636]: time="2026-01-14T06:26:07.183572975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:26:07.185698 containerd[1636]: time="2026-01-14T06:26:07.185657080Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20128467" Jan 14 06:26:07.186699 containerd[1636]: time="2026-01-14T06:26:07.186654067Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:26:07.190435 containerd[1636]: time="2026-01-14T06:26:07.190373734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:26:07.192071 containerd[1636]: time="2026-01-14T06:26:07.191723391Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.526383606s" Jan 14 06:26:07.192071 containerd[1636]: time="2026-01-14T06:26:07.191765093Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 14 06:26:07.192938 containerd[1636]: time="2026-01-14T06:26:07.192912686Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 06:26:08.021318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount796226139.mount: Deactivated successfully. 
Jan 14 06:26:08.027126 containerd[1636]: time="2026-01-14T06:26:08.027064480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 06:26:08.028121 containerd[1636]: time="2026-01-14T06:26:08.028086440Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 06:26:08.028955 containerd[1636]: time="2026-01-14T06:26:08.028738815Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 06:26:08.031281 containerd[1636]: time="2026-01-14T06:26:08.031224447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 06:26:08.032473 containerd[1636]: time="2026-01-14T06:26:08.032274051Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 839.199536ms" Jan 14 06:26:08.032473 containerd[1636]: time="2026-01-14T06:26:08.032320148Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 14 06:26:08.033533 containerd[1636]: time="2026-01-14T06:26:08.033337219Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 14 06:26:08.595263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1555303687.mount: Deactivated successfully. 
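Each containerd "Pulled image ... in <duration>" record above pairs the size containerd reports for the image with the wall-clock pull time (for example size "30111311" in 5.345623907s for kube-apiserver, size "320368" in 839.199536ms for pause:3.10), so a rough per-image pull rate can be read straight out of the journal. A small parsing sketch written against the exact message format shown above; it is illustrative only and assumes the journal line is passed in as logged, with the escaped quotes intact:

    import re

    # Parses containerd's Pulled-image message: image reference, reported size in
    # bytes, and the duration, which appears in this log as either "...s" or "...ms".
    PULLED = re.compile(
        r'Pulled image \\?"(?P<ref>[^"\\]+)\\?"'
        r'.*size \\?"(?P<size>\d+)\\?" in (?P<dur>[\d.]+)(?P<unit>ms|s)'
    )

    def pull_stats(journal_line: str):
        m = PULLED.search(journal_line)
        if not m:
            return None
        seconds = float(m.group("dur")) / (1000.0 if m.group("unit") == "ms" else 1.0)
        mib = int(m.group("size")) / (1024 * 1024)
        return m.group("ref"), round(mib, 1), round(mib / seconds, 1)  # ref, MiB, MiB/s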
Jan 14 06:26:12.730988 containerd[1636]: time="2026-01-14T06:26:12.730912933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:26:12.735735 containerd[1636]: time="2026-01-14T06:26:12.735698162Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=56977083" Jan 14 06:26:12.736521 containerd[1636]: time="2026-01-14T06:26:12.736458262Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:26:12.741242 containerd[1636]: time="2026-01-14T06:26:12.741190899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:26:12.744640 containerd[1636]: time="2026-01-14T06:26:12.744205822Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.710715316s" Jan 14 06:26:12.744640 containerd[1636]: time="2026-01-14T06:26:12.744245142Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 14 06:26:13.283826 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 06:26:13.287623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:26:13.614869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:26:13.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:13.620606 kernel: audit: type=1130 audit(1768371973.613:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:13.630082 (kubelet)[2404]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 06:26:13.708410 kubelet[2404]: E0114 06:26:13.708284 2404 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 06:26:13.712892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 06:26:13.713184 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 06:26:13.716029 systemd[1]: kubelet.service: Consumed 236ms CPU time, 106.6M memory peak. Jan 14 06:26:13.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 06:26:13.722820 kernel: audit: type=1131 audit(1768371973.714:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 06:26:14.795649 update_engine[1608]: I20260114 06:26:14.794713 1608 update_attempter.cc:509] Updating boot flags... Jan 14 06:26:16.549991 systemd[1]: Started sshd@8-10.230.48.98:22-64.225.73.213:51936.service - OpenSSH per-connection server daemon (64.225.73.213:51936). Jan 14 06:26:16.558591 kernel: audit: type=1130 audit(1768371976.548:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.48.98:22-64.225.73.213:51936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:16.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.48.98:22-64.225.73.213:51936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:16.739034 sshd[2438]: Invalid user guest from 64.225.73.213 port 51936 Jan 14 06:26:16.774874 sshd[2438]: Connection closed by invalid user guest 64.225.73.213 port 51936 [preauth] Jan 14 06:26:16.773000 audit[2438]: USER_ERR pid=2438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:26:16.786599 kernel: audit: type=1109 audit(1768371976.773:293): pid=2438 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:26:16.785967 systemd[1]: sshd@8-10.230.48.98:22-64.225.73.213:51936.service: Deactivated successfully. Jan 14 06:26:16.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.48.98:22-64.225.73.213:51936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:16.796605 kernel: audit: type=1131 audit(1768371976.785:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.48.98:22-64.225.73.213:51936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:18.682872 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:26:18.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:18.690242 systemd[1]: kubelet.service: Consumed 236ms CPU time, 106.6M memory peak. Jan 14 06:26:18.690610 kernel: audit: type=1130 audit(1768371978.682:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:18.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:26:18.697662 kernel: audit: type=1131 audit(1768371978.689:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:18.697948 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:26:18.742252 systemd[1]: Reload requested from client PID 2450 ('systemctl') (unit session-10.scope)... Jan 14 06:26:18.742487 systemd[1]: Reloading... Jan 14 06:26:18.899625 zram_generator::config[2495]: No configuration found. Jan 14 06:26:19.244121 systemd[1]: Reloading finished in 500 ms. Jan 14 06:26:19.288000 audit: BPF prog-id=65 op=LOAD Jan 14 06:26:19.298584 kernel: audit: type=1334 audit(1768371979.288:297): prog-id=65 op=LOAD Jan 14 06:26:19.298677 kernel: audit: type=1334 audit(1768371979.288:298): prog-id=45 op=UNLOAD Jan 14 06:26:19.298733 kernel: audit: type=1334 audit(1768371979.288:299): prog-id=66 op=LOAD Jan 14 06:26:19.298782 kernel: audit: type=1334 audit(1768371979.288:300): prog-id=67 op=LOAD Jan 14 06:26:19.298828 kernel: audit: type=1334 audit(1768371979.288:301): prog-id=46 op=UNLOAD Jan 14 06:26:19.288000 audit: BPF prog-id=45 op=UNLOAD Jan 14 06:26:19.288000 audit: BPF prog-id=66 op=LOAD Jan 14 06:26:19.288000 audit: BPF prog-id=67 op=LOAD Jan 14 06:26:19.288000 audit: BPF prog-id=46 op=UNLOAD Jan 14 06:26:19.288000 audit: BPF prog-id=47 op=UNLOAD Jan 14 06:26:19.299951 kernel: audit: type=1334 audit(1768371979.288:302): prog-id=47 op=UNLOAD Jan 14 06:26:19.289000 audit: BPF prog-id=68 op=LOAD Jan 14 06:26:19.302605 kernel: audit: type=1334 audit(1768371979.289:303): prog-id=68 op=LOAD Jan 14 06:26:19.302679 kernel: audit: type=1334 audit(1768371979.289:304): prog-id=64 op=UNLOAD Jan 14 06:26:19.289000 audit: BPF prog-id=64 op=UNLOAD Jan 14 06:26:19.291000 audit: BPF prog-id=69 op=LOAD Jan 14 06:26:19.291000 audit: BPF prog-id=51 op=UNLOAD Jan 14 06:26:19.292000 audit: BPF prog-id=70 op=LOAD Jan 14 06:26:19.292000 audit: BPF prog-id=71 op=LOAD Jan 14 06:26:19.292000 audit: BPF prog-id=52 op=UNLOAD Jan 14 06:26:19.292000 audit: BPF prog-id=53 op=UNLOAD Jan 14 06:26:19.296000 audit: BPF prog-id=72 op=LOAD Jan 14 06:26:19.296000 audit: BPF prog-id=57 op=UNLOAD Jan 14 06:26:19.299000 audit: BPF prog-id=73 op=LOAD Jan 14 06:26:19.299000 audit: BPF prog-id=48 op=UNLOAD Jan 14 06:26:19.300000 audit: BPF prog-id=74 op=LOAD Jan 14 06:26:19.300000 audit: BPF prog-id=75 op=LOAD Jan 14 06:26:19.300000 audit: BPF prog-id=49 op=UNLOAD Jan 14 06:26:19.300000 audit: BPF prog-id=50 op=UNLOAD Jan 14 06:26:19.303000 audit: BPF prog-id=76 op=LOAD Jan 14 06:26:19.303000 audit: BPF prog-id=42 op=UNLOAD Jan 14 06:26:19.303000 audit: BPF prog-id=77 op=LOAD Jan 14 06:26:19.303000 audit: BPF prog-id=78 op=LOAD Jan 14 06:26:19.303000 audit: BPF prog-id=43 op=UNLOAD Jan 14 06:26:19.303000 audit: BPF prog-id=44 op=UNLOAD Jan 14 06:26:19.305000 audit: BPF prog-id=79 op=LOAD Jan 14 06:26:19.305000 audit: BPF prog-id=41 op=UNLOAD Jan 14 06:26:19.306000 audit: BPF prog-id=80 op=LOAD Jan 14 06:26:19.306000 audit: BPF prog-id=81 op=LOAD Jan 14 06:26:19.306000 audit: BPF prog-id=54 op=UNLOAD Jan 14 06:26:19.306000 audit: BPF prog-id=55 op=UNLOAD Jan 14 06:26:19.319000 audit: BPF prog-id=82 op=LOAD Jan 14 06:26:19.319000 audit: BPF prog-id=58 op=UNLOAD Jan 14 06:26:19.319000 audit: BPF prog-id=83 op=LOAD Jan 14 06:26:19.319000 audit: BPF prog-id=84 op=LOAD Jan 14 06:26:19.319000 audit: BPF prog-id=59 op=UNLOAD Jan 14 
06:26:19.320000 audit: BPF prog-id=60 op=UNLOAD Jan 14 06:26:19.320000 audit: BPF prog-id=85 op=LOAD Jan 14 06:26:19.320000 audit: BPF prog-id=56 op=UNLOAD Jan 14 06:26:19.341270 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 06:26:19.341391 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 06:26:19.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 06:26:19.341877 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:26:19.341956 systemd[1]: kubelet.service: Consumed 142ms CPU time, 98.7M memory peak. Jan 14 06:26:19.345147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:26:19.512965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:26:19.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:19.526277 (kubelet)[2566]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 06:26:19.644195 kubelet[2566]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 06:26:19.644195 kubelet[2566]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 06:26:19.644195 kubelet[2566]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 14 06:26:19.646500 kubelet[2566]: I0114 06:26:19.646403 2566 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 06:26:20.545593 kubelet[2566]: I0114 06:26:20.544540 2566 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 06:26:20.545593 kubelet[2566]: I0114 06:26:20.544607 2566 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 06:26:20.545593 kubelet[2566]: I0114 06:26:20.544904 2566 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 06:26:20.586373 kubelet[2566]: I0114 06:26:20.586025 2566 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 06:26:20.596824 kubelet[2566]: E0114 06:26:20.596710 2566 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.48.98:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.48.98:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 06:26:20.624742 kubelet[2566]: I0114 06:26:20.624703 2566 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 06:26:20.638153 kubelet[2566]: I0114 06:26:20.638123 2566 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 14 06:26:20.642533 kubelet[2566]: I0114 06:26:20.642454 2566 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 06:26:20.645637 kubelet[2566]: I0114 06:26:20.642505 2566 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-i1yja.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 06:26:20.645637 kubelet[2566]: I0114 06:26:20.645632 2566 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 06:26:20.646381 kubelet[2566]: I0114 
06:26:20.645655 2566 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 06:26:20.646776 kubelet[2566]: I0114 06:26:20.646734 2566 state_mem.go:36] "Initialized new in-memory state store" Jan 14 06:26:20.650371 kubelet[2566]: I0114 06:26:20.649782 2566 kubelet.go:480] "Attempting to sync node with API server" Jan 14 06:26:20.650371 kubelet[2566]: I0114 06:26:20.649813 2566 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 06:26:20.650371 kubelet[2566]: I0114 06:26:20.649870 2566 kubelet.go:386] "Adding apiserver pod source" Jan 14 06:26:20.650371 kubelet[2566]: I0114 06:26:20.649904 2566 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 06:26:20.653750 kubelet[2566]: E0114 06:26:20.653705 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.48.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-i1yja.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.48.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 06:26:20.658702 kubelet[2566]: E0114 06:26:20.658672 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.48.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.48.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 06:26:20.658944 kubelet[2566]: I0114 06:26:20.658920 2566 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 06:26:20.659683 kubelet[2566]: I0114 06:26:20.659659 2566 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 06:26:20.660506 kubelet[2566]: W0114 06:26:20.660484 2566 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
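The "Creating Container Manager object" line above packs the node config, including the hard-eviction thresholds, into one JSON blob. A small sketch that restates just those thresholds in readable form; the values below are copied from that log line and nothing beyond it is implied.

```python
# Hard-eviction thresholds as logged in the NodeConfig line above.
hard_eviction = [
    {"Signal": "imagefs.inodesFree", "Operator": "LessThan", "Value": {"Quantity": None, "Percentage": 0.05}},
    {"Signal": "memory.available",   "Operator": "LessThan", "Value": {"Quantity": "100Mi", "Percentage": 0}},
    {"Signal": "nodefs.available",   "Operator": "LessThan", "Value": {"Quantity": None, "Percentage": 0.1}},
    {"Signal": "nodefs.inodesFree",  "Operator": "LessThan", "Value": {"Quantity": None, "Percentage": 0.05}},
    {"Signal": "imagefs.available",  "Operator": "LessThan", "Value": {"Quantity": None, "Percentage": 0.15}},
]

for t in hard_eviction:
    value = t["Value"]
    threshold = value["Quantity"] if value["Quantity"] is not None else f"{value['Percentage']:.0%}"
    print(f"{t['Signal']:<22} {t['Operator']} {threshold}")
```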
Jan 14 06:26:20.671164 kubelet[2566]: I0114 06:26:20.671141 2566 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 06:26:20.671376 kubelet[2566]: I0114 06:26:20.671350 2566 server.go:1289] "Started kubelet" Jan 14 06:26:20.676753 kubelet[2566]: I0114 06:26:20.676733 2566 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 06:26:20.684145 kubelet[2566]: E0114 06:26:20.678917 2566 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.48.98:6443/api/v1/namespaces/default/events\": dial tcp 10.230.48.98:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-i1yja.gb1.brightbox.com.188a84f27aaa5d43 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-i1yja.gb1.brightbox.com,UID:srv-i1yja.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-i1yja.gb1.brightbox.com,},FirstTimestamp:2026-01-14 06:26:20.671278403 +0000 UTC m=+1.139291027,LastTimestamp:2026-01-14 06:26:20.671278403 +0000 UTC m=+1.139291027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-i1yja.gb1.brightbox.com,}" Jan 14 06:26:20.686253 kubelet[2566]: I0114 06:26:20.686229 2566 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 06:26:20.686782 kubelet[2566]: E0114 06:26:20.686669 2566 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-i1yja.gb1.brightbox.com\" not found" Jan 14 06:26:20.689295 kubelet[2566]: I0114 06:26:20.688987 2566 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 06:26:20.708289 kubelet[2566]: I0114 06:26:20.708208 2566 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 06:26:20.708483 kubelet[2566]: I0114 06:26:20.708348 2566 reconciler.go:26] "Reconciler: start to sync state" Jan 14 06:26:20.712498 kubelet[2566]: E0114 06:26:20.711618 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.48.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-i1yja.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.48.98:6443: connect: connection refused" interval="200ms" Jan 14 06:26:20.712498 kubelet[2566]: E0114 06:26:20.711760 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.48.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.48.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 06:26:20.712498 kubelet[2566]: I0114 06:26:20.712076 2566 factory.go:223] Registration of the systemd container factory successfully Jan 14 06:26:20.712498 kubelet[2566]: I0114 06:26:20.712166 2566 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 06:26:20.730771 kubelet[2566]: I0114 06:26:20.730708 2566 server.go:317] "Adding debug handlers to kubelet server" Jan 14 06:26:20.730000 audit[2579]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:20.730000 audit[2579]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc7f849800 a2=0 
a3=0 items=0 ppid=2566 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:20.730000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 06:26:20.732000 audit[2581]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:20.732000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd2f987d40 a2=0 a3=0 items=0 ppid=2566 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:20.732000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 06:26:20.738765 kubelet[2566]: I0114 06:26:20.738640 2566 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 06:26:20.739480 kubelet[2566]: I0114 06:26:20.739443 2566 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 06:26:20.737000 audit[2584]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:20.737000 audit[2584]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffe3b415ba0 a2=0 a3=0 items=0 ppid=2566 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:20.737000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 06:26:20.743464 kubelet[2566]: I0114 06:26:20.743298 2566 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 06:26:20.744669 kubelet[2566]: I0114 06:26:20.744640 2566 factory.go:223] Registration of the containerd container factory successfully Jan 14 06:26:20.747000 audit[2587]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:20.747000 audit[2587]: SYSCALL arch=c000003e syscall=46 success=yes exit=340 a0=3 a1=7ffdc3c5a130 a2=0 a3=0 items=0 ppid=2566 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:20.747000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 06:26:20.756923 kubelet[2566]: E0114 06:26:20.756886 2566 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 06:26:20.764000 audit[2591]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:20.764000 audit[2591]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff5f94ff90 a2=0 a3=0 items=0 ppid=2566 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:20.764000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 06:26:20.774903 kubelet[2566]: I0114 06:26:20.774859 2566 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 06:26:20.777000 audit[2593]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2593 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:20.777000 audit[2593]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcb1d6e630 a2=0 a3=0 items=0 ppid=2566 pid=2593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:20.777000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 06:26:20.778000 audit[2594]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:20.778000 audit[2594]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd5ca88ae0 a2=0 a3=0 items=0 ppid=2566 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:20.778000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 06:26:20.781674 kubelet[2566]: I0114 06:26:20.781651 2566 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 14 06:26:20.781819 kubelet[2566]: I0114 06:26:20.781801 2566 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 06:26:20.782976 kubelet[2566]: I0114 06:26:20.782685 2566 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
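The PROCTITLE fields in the NETFILTER_CFG audit records above are hex-encoded command lines with NUL-separated arguments. A decoding sketch, fed the first iptables record above; the decoder is illustrative and not part of the logged system.

```python
# Audit PROCTITLE values are hex-encoded argv entries separated by NUL bytes.
def decode_proctitle(hex_blob: str) -> str:
    raw = bytes.fromhex(hex_blob)
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

print(decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030002D4E00"
    "4B5542452D49505441424C45532D48494E54002D74006D616E676C65"
))
# -> iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle
```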
Jan 14 06:26:20.782976 kubelet[2566]: I0114 06:26:20.782719 2566 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 06:26:20.782976 kubelet[2566]: E0114 06:26:20.782817 2566 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 06:26:20.783000 audit[2596]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2596 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:20.783000 audit[2596]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc91156ef0 a2=0 a3=0 items=0 ppid=2566 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:20.783000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 06:26:20.785211 kubelet[2566]: E0114 06:26:20.785073 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.48.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.48.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 06:26:20.783000 audit[2598]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2598 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:20.783000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe6fd7b470 a2=0 a3=0 items=0 ppid=2566 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:20.783000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 06:26:20.787161 kubelet[2566]: E0114 06:26:20.787110 2566 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-i1yja.gb1.brightbox.com\" not found" Jan 14 06:26:20.787000 audit[2599]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2599 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:20.787000 audit[2599]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff7b6df60 a2=0 a3=0 items=0 ppid=2566 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:20.787000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 06:26:20.789000 audit[2600]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2600 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:20.789000 audit[2600]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff366a8570 a2=0 a3=0 items=0 ppid=2566 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:20.789000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 06:26:20.789000 audit[2602]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2602 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:20.789000 audit[2602]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc5e169270 a2=0 a3=0 items=0 ppid=2566 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:20.789000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 06:26:20.798434 kubelet[2566]: I0114 06:26:20.796224 2566 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 06:26:20.798434 kubelet[2566]: I0114 06:26:20.796358 2566 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 06:26:20.798434 kubelet[2566]: I0114 06:26:20.796414 2566 state_mem.go:36] "Initialized new in-memory state store" Jan 14 06:26:20.800190 kubelet[2566]: I0114 06:26:20.800170 2566 policy_none.go:49] "None policy: Start" Jan 14 06:26:20.800339 kubelet[2566]: I0114 06:26:20.800321 2566 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 06:26:20.800444 kubelet[2566]: I0114 06:26:20.800429 2566 state_mem.go:35] "Initializing new in-memory state store" Jan 14 06:26:20.811220 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 06:26:20.825370 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 06:26:20.845268 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 06:26:20.850605 kubelet[2566]: E0114 06:26:20.849741 2566 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 06:26:20.850605 kubelet[2566]: I0114 06:26:20.850073 2566 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 06:26:20.850605 kubelet[2566]: I0114 06:26:20.850100 2566 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 06:26:20.851430 kubelet[2566]: I0114 06:26:20.851410 2566 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 06:26:20.852367 kubelet[2566]: E0114 06:26:20.852342 2566 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 06:26:20.852521 kubelet[2566]: E0114 06:26:20.852497 2566 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-i1yja.gb1.brightbox.com\" not found" Jan 14 06:26:20.910548 kubelet[2566]: I0114 06:26:20.910477 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42447cfc0546aae68877a2e537db9c8d-ca-certs\") pod \"kube-apiserver-srv-i1yja.gb1.brightbox.com\" (UID: \"42447cfc0546aae68877a2e537db9c8d\") " pod="kube-system/kube-apiserver-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:20.910548 kubelet[2566]: I0114 06:26:20.910545 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42447cfc0546aae68877a2e537db9c8d-k8s-certs\") pod \"kube-apiserver-srv-i1yja.gb1.brightbox.com\" (UID: \"42447cfc0546aae68877a2e537db9c8d\") " pod="kube-system/kube-apiserver-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:20.910851 kubelet[2566]: I0114 06:26:20.910632 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42447cfc0546aae68877a2e537db9c8d-usr-share-ca-certificates\") pod \"kube-apiserver-srv-i1yja.gb1.brightbox.com\" (UID: \"42447cfc0546aae68877a2e537db9c8d\") " pod="kube-system/kube-apiserver-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:20.913194 kubelet[2566]: E0114 06:26:20.913141 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.48.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-i1yja.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.48.98:6443: connect: connection refused" interval="400ms" Jan 14 06:26:20.940117 systemd[1]: Created slice kubepods-burstable-pod42447cfc0546aae68877a2e537db9c8d.slice - libcontainer container kubepods-burstable-pod42447cfc0546aae68877a2e537db9c8d.slice. Jan 14 06:26:20.951581 kubelet[2566]: E0114 06:26:20.951067 2566 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i1yja.gb1.brightbox.com\" not found" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:20.953968 kubelet[2566]: I0114 06:26:20.953942 2566 kubelet_node_status.go:75] "Attempting to register node" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:20.955128 kubelet[2566]: E0114 06:26:20.955086 2566 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.48.98:6443/api/v1/nodes\": dial tcp 10.230.48.98:6443: connect: connection refused" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:20.956477 systemd[1]: Created slice kubepods-burstable-pod967c1e2e9f8a7ece6b847440ecb0c2ca.slice - libcontainer container kubepods-burstable-pod967c1e2e9f8a7ece6b847440ecb0c2ca.slice. Jan 14 06:26:20.960593 kubelet[2566]: E0114 06:26:20.960289 2566 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i1yja.gb1.brightbox.com\" not found" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:20.963546 systemd[1]: Created slice kubepods-burstable-pod55912d061cd714e18f6c6e544c0df762.slice - libcontainer container kubepods-burstable-pod55912d061cd714e18f6c6e544c0df762.slice. 
Jan 14 06:26:20.966383 kubelet[2566]: E0114 06:26:20.966360 2566 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i1yja.gb1.brightbox.com\" not found" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:21.012038 kubelet[2566]: I0114 06:26:21.011840 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55912d061cd714e18f6c6e544c0df762-k8s-certs\") pod \"kube-controller-manager-srv-i1yja.gb1.brightbox.com\" (UID: \"55912d061cd714e18f6c6e544c0df762\") " pod="kube-system/kube-controller-manager-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:21.012475 kubelet[2566]: I0114 06:26:21.012343 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55912d061cd714e18f6c6e544c0df762-kubeconfig\") pod \"kube-controller-manager-srv-i1yja.gb1.brightbox.com\" (UID: \"55912d061cd714e18f6c6e544c0df762\") " pod="kube-system/kube-controller-manager-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:21.012475 kubelet[2566]: I0114 06:26:21.012405 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/967c1e2e9f8a7ece6b847440ecb0c2ca-kubeconfig\") pod \"kube-scheduler-srv-i1yja.gb1.brightbox.com\" (UID: \"967c1e2e9f8a7ece6b847440ecb0c2ca\") " pod="kube-system/kube-scheduler-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:21.013615 kubelet[2566]: I0114 06:26:21.013436 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55912d061cd714e18f6c6e544c0df762-ca-certs\") pod \"kube-controller-manager-srv-i1yja.gb1.brightbox.com\" (UID: \"55912d061cd714e18f6c6e544c0df762\") " pod="kube-system/kube-controller-manager-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:21.013615 kubelet[2566]: I0114 06:26:21.013486 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55912d061cd714e18f6c6e544c0df762-flexvolume-dir\") pod \"kube-controller-manager-srv-i1yja.gb1.brightbox.com\" (UID: \"55912d061cd714e18f6c6e544c0df762\") " pod="kube-system/kube-controller-manager-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:21.013615 kubelet[2566]: I0114 06:26:21.013516 2566 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55912d061cd714e18f6c6e544c0df762-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-i1yja.gb1.brightbox.com\" (UID: \"55912d061cd714e18f6c6e544c0df762\") " pod="kube-system/kube-controller-manager-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:21.158477 kubelet[2566]: I0114 06:26:21.158335 2566 kubelet_node_status.go:75] "Attempting to register node" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:21.158917 kubelet[2566]: E0114 06:26:21.158881 2566 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.48.98:6443/api/v1/nodes\": dial tcp 10.230.48.98:6443: connect: connection refused" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:21.254726 containerd[1636]: time="2026-01-14T06:26:21.254636621Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-srv-i1yja.gb1.brightbox.com,Uid:42447cfc0546aae68877a2e537db9c8d,Namespace:kube-system,Attempt:0,}" Jan 14 06:26:21.262534 containerd[1636]: time="2026-01-14T06:26:21.262236930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-i1yja.gb1.brightbox.com,Uid:967c1e2e9f8a7ece6b847440ecb0c2ca,Namespace:kube-system,Attempt:0,}" Jan 14 06:26:21.268147 containerd[1636]: time="2026-01-14T06:26:21.268115435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-i1yja.gb1.brightbox.com,Uid:55912d061cd714e18f6c6e544c0df762,Namespace:kube-system,Attempt:0,}" Jan 14 06:26:21.314513 kubelet[2566]: E0114 06:26:21.314457 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.48.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-i1yja.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.48.98:6443: connect: connection refused" interval="800ms" Jan 14 06:26:21.390599 containerd[1636]: time="2026-01-14T06:26:21.389705497Z" level=info msg="connecting to shim 12070c93315f91e333a5e1084cfa8dcd7031d51f18390880115e8b159c9ffce1" address="unix:///run/containerd/s/e34ce939b471c3d574385d9b33bd47e6abe8d25601370fb779bf0e6328e3115d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:26:21.395121 containerd[1636]: time="2026-01-14T06:26:21.395086748Z" level=info msg="connecting to shim 68965a78e8a3455400a71818281091f310f43ed25f28ba5815eb8c5a1eeb9eb6" address="unix:///run/containerd/s/544e37a5a853f947ade0a23ad3b6530fc4e9a8bae4d8c6a2334d7ea3cb6ea9b5" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:26:21.396605 containerd[1636]: time="2026-01-14T06:26:21.396518470Z" level=info msg="connecting to shim c6ad243fdc5e4d8a2f2af833a41618c448ea538c63529950f8af29e8ef4e2ffc" address="unix:///run/containerd/s/8bca42cdba3f32018ad4f17f4d2b56512a1f0fd50ef8783359edcae3aa65202d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:26:21.516896 systemd[1]: Started cri-containerd-c6ad243fdc5e4d8a2f2af833a41618c448ea538c63529950f8af29e8ef4e2ffc.scope - libcontainer container c6ad243fdc5e4d8a2f2af833a41618c448ea538c63529950f8af29e8ef4e2ffc. Jan 14 06:26:21.532339 systemd[1]: Started cri-containerd-12070c93315f91e333a5e1084cfa8dcd7031d51f18390880115e8b159c9ffce1.scope - libcontainer container 12070c93315f91e333a5e1084cfa8dcd7031d51f18390880115e8b159c9ffce1. Jan 14 06:26:21.535680 systemd[1]: Started cri-containerd-68965a78e8a3455400a71818281091f310f43ed25f28ba5815eb8c5a1eeb9eb6.scope - libcontainer container 68965a78e8a3455400a71818281091f310f43ed25f28ba5815eb8c5a1eeb9eb6. 
Jan 14 06:26:21.562225 kubelet[2566]: I0114 06:26:21.562186 2566 kubelet_node_status.go:75] "Attempting to register node" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:21.562808 kubelet[2566]: E0114 06:26:21.562756 2566 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.48.98:6443/api/v1/nodes\": dial tcp 10.230.48.98:6443: connect: connection refused" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:21.565000 audit: BPF prog-id=86 op=LOAD Jan 14 06:26:21.567000 audit: BPF prog-id=87 op=LOAD Jan 14 06:26:21.567000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2629 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638393635613738653861333435353430306137313831383238313039 Jan 14 06:26:21.568000 audit: BPF prog-id=87 op=UNLOAD Jan 14 06:26:21.568000 audit[2664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2629 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638393635613738653861333435353430306137313831383238313039 Jan 14 06:26:21.569000 audit: BPF prog-id=88 op=LOAD Jan 14 06:26:21.570000 audit: BPF prog-id=89 op=LOAD Jan 14 06:26:21.572000 audit: BPF prog-id=90 op=LOAD Jan 14 06:26:21.572000 audit[2662]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128238 a2=98 a3=0 items=0 ppid=2628 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.572000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303730633933333135663931653333336135653130383463666138 Jan 14 06:26:21.573000 audit: BPF prog-id=90 op=UNLOAD Jan 14 06:26:21.573000 audit[2662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2628 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.574000 audit: BPF prog-id=91 op=LOAD Jan 14 06:26:21.574000 audit[2660]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=2631 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.574000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336616432343366646335653464386132663261663833336134313631 Jan 14 06:26:21.574000 audit: BPF prog-id=91 op=UNLOAD Jan 14 06:26:21.574000 audit[2660]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2631 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336616432343366646335653464386132663261663833336134313631 Jan 14 06:26:21.574000 audit: BPF prog-id=92 op=LOAD Jan 14 06:26:21.574000 audit[2660]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2631 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336616432343366646335653464386132663261663833336134313631 Jan 14 06:26:21.574000 audit: BPF prog-id=93 op=LOAD Jan 14 06:26:21.574000 audit[2660]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2631 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336616432343366646335653464386132663261663833336134313631 Jan 14 06:26:21.574000 audit: BPF prog-id=93 op=UNLOAD Jan 14 06:26:21.574000 audit[2660]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2631 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336616432343366646335653464386132663261663833336134313631 Jan 14 06:26:21.574000 audit: BPF prog-id=92 op=UNLOAD Jan 14 06:26:21.574000 audit[2660]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2631 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.574000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336616432343366646335653464386132663261663833336134313631 Jan 14 06:26:21.574000 audit: BPF prog-id=94 op=LOAD Jan 14 06:26:21.574000 audit[2660]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2631 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336616432343366646335653464386132663261663833336134313631 Jan 14 06:26:21.575000 audit: BPF prog-id=95 op=LOAD Jan 14 06:26:21.575000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=2629 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638393635613738653861333435353430306137313831383238313039 Jan 14 06:26:21.575000 audit: BPF prog-id=96 op=LOAD Jan 14 06:26:21.575000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=2629 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638393635613738653861333435353430306137313831383238313039 Jan 14 06:26:21.575000 audit: BPF prog-id=96 op=UNLOAD Jan 14 06:26:21.575000 audit[2664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2629 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638393635613738653861333435353430306137313831383238313039 Jan 14 06:26:21.575000 audit: BPF prog-id=95 op=UNLOAD Jan 14 06:26:21.575000 audit[2664]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2629 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.575000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638393635613738653861333435353430306137313831383238313039 Jan 14 06:26:21.575000 audit: BPF prog-id=97 op=LOAD Jan 14 06:26:21.575000 audit[2664]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=2629 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.575000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638393635613738653861333435353430306137313831383238313039 Jan 14 06:26:21.573000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303730633933333135663931653333336135653130383463666138 Jan 14 06:26:21.576000 audit: BPF prog-id=98 op=LOAD Jan 14 06:26:21.576000 audit[2662]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000128488 a2=98 a3=0 items=0 ppid=2628 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.576000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303730633933333135663931653333336135653130383463666138 Jan 14 06:26:21.577000 audit: BPF prog-id=99 op=LOAD Jan 14 06:26:21.577000 audit[2662]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000128218 a2=98 a3=0 items=0 ppid=2628 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.577000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303730633933333135663931653333336135653130383463666138 Jan 14 06:26:21.579000 audit: BPF prog-id=99 op=UNLOAD Jan 14 06:26:21.579000 audit[2662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2628 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303730633933333135663931653333336135653130383463666138 Jan 14 06:26:21.579000 audit: BPF prog-id=98 op=UNLOAD Jan 14 06:26:21.579000 audit[2662]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2628 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303730633933333135663931653333336135653130383463666138 Jan 14 06:26:21.579000 audit: BPF prog-id=100 op=LOAD Jan 14 06:26:21.579000 audit[2662]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001286e8 a2=98 a3=0 items=0 ppid=2628 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132303730633933333135663931653333336135653130383463666138 Jan 14 06:26:21.586872 kubelet[2566]: E0114 06:26:21.586820 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.48.98:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-i1yja.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.48.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 06:26:21.678725 containerd[1636]: time="2026-01-14T06:26:21.678535201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-i1yja.gb1.brightbox.com,Uid:42447cfc0546aae68877a2e537db9c8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"12070c93315f91e333a5e1084cfa8dcd7031d51f18390880115e8b159c9ffce1\"" Jan 14 06:26:21.681085 containerd[1636]: time="2026-01-14T06:26:21.681004127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-i1yja.gb1.brightbox.com,Uid:967c1e2e9f8a7ece6b847440ecb0c2ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"68965a78e8a3455400a71818281091f310f43ed25f28ba5815eb8c5a1eeb9eb6\"" Jan 14 06:26:21.682611 containerd[1636]: time="2026-01-14T06:26:21.682543741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-i1yja.gb1.brightbox.com,Uid:55912d061cd714e18f6c6e544c0df762,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6ad243fdc5e4d8a2f2af833a41618c448ea538c63529950f8af29e8ef4e2ffc\"" Jan 14 06:26:21.687263 containerd[1636]: time="2026-01-14T06:26:21.687222190Z" level=info msg="CreateContainer within sandbox \"68965a78e8a3455400a71818281091f310f43ed25f28ba5815eb8c5a1eeb9eb6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 06:26:21.687916 containerd[1636]: time="2026-01-14T06:26:21.687864099Z" level=info msg="CreateContainer within sandbox \"12070c93315f91e333a5e1084cfa8dcd7031d51f18390880115e8b159c9ffce1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 06:26:21.690592 containerd[1636]: time="2026-01-14T06:26:21.690403913Z" level=info msg="CreateContainer within sandbox \"c6ad243fdc5e4d8a2f2af833a41618c448ea538c63529950f8af29e8ef4e2ffc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 06:26:21.701160 containerd[1636]: time="2026-01-14T06:26:21.701129567Z" level=info msg="Container 5ce1174522e5ea57aefc8544d65bdc6a07e6885447fee0e5dc49b5ce98ef9845: CDI devices from CRI Config.CDIDevices: 
[]" Jan 14 06:26:21.703830 containerd[1636]: time="2026-01-14T06:26:21.703762073Z" level=info msg="Container 4b53cb26194b5f7bb1938aaf6b1262b3c01c9cdfe43486ad96faa9b84b4ed414: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:26:21.711635 containerd[1636]: time="2026-01-14T06:26:21.711598603Z" level=info msg="Container 615acbd5addd83498ec157514a4ab8afccf9958856df9e596a2a5373f4691a75: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:26:21.720306 containerd[1636]: time="2026-01-14T06:26:21.720089128Z" level=info msg="CreateContainer within sandbox \"68965a78e8a3455400a71818281091f310f43ed25f28ba5815eb8c5a1eeb9eb6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"615acbd5addd83498ec157514a4ab8afccf9958856df9e596a2a5373f4691a75\"" Jan 14 06:26:21.721416 containerd[1636]: time="2026-01-14T06:26:21.721386292Z" level=info msg="CreateContainer within sandbox \"12070c93315f91e333a5e1084cfa8dcd7031d51f18390880115e8b159c9ffce1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5ce1174522e5ea57aefc8544d65bdc6a07e6885447fee0e5dc49b5ce98ef9845\"" Jan 14 06:26:21.722100 containerd[1636]: time="2026-01-14T06:26:21.722048828Z" level=info msg="StartContainer for \"615acbd5addd83498ec157514a4ab8afccf9958856df9e596a2a5373f4691a75\"" Jan 14 06:26:21.722256 containerd[1636]: time="2026-01-14T06:26:21.722060742Z" level=info msg="StartContainer for \"5ce1174522e5ea57aefc8544d65bdc6a07e6885447fee0e5dc49b5ce98ef9845\"" Jan 14 06:26:21.724743 containerd[1636]: time="2026-01-14T06:26:21.724692905Z" level=info msg="CreateContainer within sandbox \"c6ad243fdc5e4d8a2f2af833a41618c448ea538c63529950f8af29e8ef4e2ffc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4b53cb26194b5f7bb1938aaf6b1262b3c01c9cdfe43486ad96faa9b84b4ed414\"" Jan 14 06:26:21.725006 containerd[1636]: time="2026-01-14T06:26:21.724902026Z" level=info msg="connecting to shim 5ce1174522e5ea57aefc8544d65bdc6a07e6885447fee0e5dc49b5ce98ef9845" address="unix:///run/containerd/s/e34ce939b471c3d574385d9b33bd47e6abe8d25601370fb779bf0e6328e3115d" protocol=ttrpc version=3 Jan 14 06:26:21.726595 containerd[1636]: time="2026-01-14T06:26:21.725864969Z" level=info msg="StartContainer for \"4b53cb26194b5f7bb1938aaf6b1262b3c01c9cdfe43486ad96faa9b84b4ed414\"" Jan 14 06:26:21.727220 containerd[1636]: time="2026-01-14T06:26:21.727188877Z" level=info msg="connecting to shim 4b53cb26194b5f7bb1938aaf6b1262b3c01c9cdfe43486ad96faa9b84b4ed414" address="unix:///run/containerd/s/8bca42cdba3f32018ad4f17f4d2b56512a1f0fd50ef8783359edcae3aa65202d" protocol=ttrpc version=3 Jan 14 06:26:21.729780 containerd[1636]: time="2026-01-14T06:26:21.729732752Z" level=info msg="connecting to shim 615acbd5addd83498ec157514a4ab8afccf9958856df9e596a2a5373f4691a75" address="unix:///run/containerd/s/544e37a5a853f947ade0a23ad3b6530fc4e9a8bae4d8c6a2334d7ea3cb6ea9b5" protocol=ttrpc version=3 Jan 14 06:26:21.753830 systemd[1]: Started cri-containerd-5ce1174522e5ea57aefc8544d65bdc6a07e6885447fee0e5dc49b5ce98ef9845.scope - libcontainer container 5ce1174522e5ea57aefc8544d65bdc6a07e6885447fee0e5dc49b5ce98ef9845. Jan 14 06:26:21.781797 systemd[1]: Started cri-containerd-4b53cb26194b5f7bb1938aaf6b1262b3c01c9cdfe43486ad96faa9b84b4ed414.scope - libcontainer container 4b53cb26194b5f7bb1938aaf6b1262b3c01c9cdfe43486ad96faa9b84b4ed414. 
Jan 14 06:26:21.795062 systemd[1]: Started cri-containerd-615acbd5addd83498ec157514a4ab8afccf9958856df9e596a2a5373f4691a75.scope - libcontainer container 615acbd5addd83498ec157514a4ab8afccf9958856df9e596a2a5373f4691a75. Jan 14 06:26:21.801000 audit: BPF prog-id=101 op=LOAD Jan 14 06:26:21.803000 audit: BPF prog-id=102 op=LOAD Jan 14 06:26:21.803000 audit[2744]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2628 pid=2744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563653131373435323265356561353761656663383534346436356264 Jan 14 06:26:21.803000 audit: BPF prog-id=102 op=UNLOAD Jan 14 06:26:21.803000 audit[2744]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2628 pid=2744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.803000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563653131373435323265356561353761656663383534346436356264 Jan 14 06:26:21.804000 audit: BPF prog-id=103 op=LOAD Jan 14 06:26:21.804000 audit[2744]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2628 pid=2744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563653131373435323265356561353761656663383534346436356264 Jan 14 06:26:21.804000 audit: BPF prog-id=104 op=LOAD Jan 14 06:26:21.804000 audit[2744]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2628 pid=2744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563653131373435323265356561353761656663383534346436356264 Jan 14 06:26:21.805000 audit: BPF prog-id=104 op=UNLOAD Jan 14 06:26:21.805000 audit[2744]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2628 pid=2744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.805000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563653131373435323265356561353761656663383534346436356264 Jan 14 06:26:21.806000 audit: BPF prog-id=103 op=UNLOAD Jan 14 06:26:21.806000 audit[2744]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2628 pid=2744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563653131373435323265356561353761656663383534346436356264 Jan 14 06:26:21.806000 audit: BPF prog-id=105 op=LOAD Jan 14 06:26:21.806000 audit[2744]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2628 pid=2744 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563653131373435323265356561353761656663383534346436356264 Jan 14 06:26:21.819808 kubelet[2566]: E0114 06:26:21.819744 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.48.98:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.48.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 06:26:21.825000 audit: BPF prog-id=106 op=LOAD Jan 14 06:26:21.826000 audit: BPF prog-id=107 op=LOAD Jan 14 06:26:21.826000 audit[2745]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=2631 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462353363623236313934623566376262313933386161663662313236 Jan 14 06:26:21.826000 audit: BPF prog-id=107 op=UNLOAD Jan 14 06:26:21.826000 audit[2745]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2631 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462353363623236313934623566376262313933386161663662313236 Jan 14 06:26:21.826000 audit: BPF prog-id=108 op=LOAD Jan 14 06:26:21.826000 audit[2745]: SYSCALL arch=c000003e syscall=321 
success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=2631 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.826000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462353363623236313934623566376262313933386161663662313236 Jan 14 06:26:21.827000 audit: BPF prog-id=109 op=LOAD Jan 14 06:26:21.827000 audit[2745]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=2631 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.827000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462353363623236313934623566376262313933386161663662313236 Jan 14 06:26:21.827000 audit: BPF prog-id=109 op=UNLOAD Jan 14 06:26:21.827000 audit[2745]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2631 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.827000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462353363623236313934623566376262313933386161663662313236 Jan 14 06:26:21.827000 audit: BPF prog-id=108 op=UNLOAD Jan 14 06:26:21.827000 audit[2745]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2631 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.827000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462353363623236313934623566376262313933386161663662313236 Jan 14 06:26:21.827000 audit: BPF prog-id=110 op=LOAD Jan 14 06:26:21.827000 audit[2745]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=2631 pid=2745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.827000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462353363623236313934623566376262313933386161663662313236 Jan 14 06:26:21.864000 audit: BPF prog-id=111 op=LOAD Jan 14 06:26:21.865000 audit: BPF prog-id=112 op=LOAD Jan 14 06:26:21.865000 audit[2750]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=2629 pid=2750 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356163626435616464643833343938656331353735313461346162 Jan 14 06:26:21.865000 audit: BPF prog-id=112 op=UNLOAD Jan 14 06:26:21.865000 audit[2750]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2629 pid=2750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356163626435616464643833343938656331353735313461346162 Jan 14 06:26:21.865000 audit: BPF prog-id=113 op=LOAD Jan 14 06:26:21.865000 audit[2750]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=2629 pid=2750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356163626435616464643833343938656331353735313461346162 Jan 14 06:26:21.865000 audit: BPF prog-id=114 op=LOAD Jan 14 06:26:21.865000 audit[2750]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=2629 pid=2750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356163626435616464643833343938656331353735313461346162 Jan 14 06:26:21.865000 audit: BPF prog-id=114 op=UNLOAD Jan 14 06:26:21.865000 audit[2750]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2629 pid=2750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356163626435616464643833343938656331353735313461346162 Jan 14 06:26:21.865000 audit: BPF prog-id=113 op=UNLOAD Jan 14 06:26:21.865000 audit[2750]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2629 pid=2750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
06:26:21.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356163626435616464643833343938656331353735313461346162 Jan 14 06:26:21.865000 audit: BPF prog-id=115 op=LOAD Jan 14 06:26:21.865000 audit[2750]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=2629 pid=2750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:21.865000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631356163626435616464643833343938656331353735313461346162 Jan 14 06:26:21.905252 containerd[1636]: time="2026-01-14T06:26:21.904922897Z" level=info msg="StartContainer for \"5ce1174522e5ea57aefc8544d65bdc6a07e6885447fee0e5dc49b5ce98ef9845\" returns successfully" Jan 14 06:26:21.924091 containerd[1636]: time="2026-01-14T06:26:21.923732492Z" level=info msg="StartContainer for \"4b53cb26194b5f7bb1938aaf6b1262b3c01c9cdfe43486ad96faa9b84b4ed414\" returns successfully" Jan 14 06:26:21.949436 containerd[1636]: time="2026-01-14T06:26:21.949391144Z" level=info msg="StartContainer for \"615acbd5addd83498ec157514a4ab8afccf9958856df9e596a2a5373f4691a75\" returns successfully" Jan 14 06:26:22.039792 kubelet[2566]: E0114 06:26:22.039627 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.48.98:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.48.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 06:26:22.116436 kubelet[2566]: E0114 06:26:22.116376 2566 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.48.98:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-i1yja.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.48.98:6443: connect: connection refused" interval="1.6s" Jan 14 06:26:22.240416 kubelet[2566]: E0114 06:26:22.240367 2566 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.48.98:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.48.98:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 06:26:22.366899 kubelet[2566]: I0114 06:26:22.366764 2566 kubelet_node_status.go:75] "Attempting to register node" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:22.828414 kubelet[2566]: E0114 06:26:22.828377 2566 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i1yja.gb1.brightbox.com\" not found" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:22.829027 kubelet[2566]: E0114 06:26:22.828708 2566 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i1yja.gb1.brightbox.com\" not found" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:22.837761 kubelet[2566]: E0114 06:26:22.837724 2566 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"srv-i1yja.gb1.brightbox.com\" not found" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:23.838388 kubelet[2566]: E0114 06:26:23.838344 2566 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i1yja.gb1.brightbox.com\" not found" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:23.840332 kubelet[2566]: E0114 06:26:23.840294 2566 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i1yja.gb1.brightbox.com\" not found" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:23.841071 kubelet[2566]: E0114 06:26:23.841045 2566 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-i1yja.gb1.brightbox.com\" not found" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:24.661356 kubelet[2566]: I0114 06:26:24.661018 2566 apiserver.go:52] "Watching apiserver" Jan 14 06:26:24.709239 kubelet[2566]: I0114 06:26:24.709189 2566 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 06:26:24.730290 kubelet[2566]: E0114 06:26:24.730220 2566 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-i1yja.gb1.brightbox.com\" not found" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:24.833528 kubelet[2566]: I0114 06:26:24.833342 2566 kubelet_node_status.go:78] "Successfully registered node" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:24.833528 kubelet[2566]: E0114 06:26:24.833386 2566 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-i1yja.gb1.brightbox.com\": node \"srv-i1yja.gb1.brightbox.com\" not found" Jan 14 06:26:24.841759 kubelet[2566]: I0114 06:26:24.840059 2566 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:24.842973 kubelet[2566]: I0114 06:26:24.842945 2566 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:24.857125 kubelet[2566]: E0114 06:26:24.856882 2566 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-i1yja.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:24.857284 kubelet[2566]: E0114 06:26:24.857261 2566 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-i1yja.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:24.908762 kubelet[2566]: I0114 06:26:24.908698 2566 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:24.916675 kubelet[2566]: E0114 06:26:24.916485 2566 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-i1yja.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:24.916675 kubelet[2566]: I0114 06:26:24.916519 2566 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:24.921806 kubelet[2566]: E0114 06:26:24.920853 2566 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-srv-i1yja.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:24.921806 kubelet[2566]: I0114 06:26:24.921598 2566 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:24.923743 kubelet[2566]: E0114 06:26:24.923714 2566 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-i1yja.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:25.843681 kubelet[2566]: I0114 06:26:25.843209 2566 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:25.864149 kubelet[2566]: I0114 06:26:25.864115 2566 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 06:26:28.513581 kubelet[2566]: I0114 06:26:28.513268 2566 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:28.529173 kubelet[2566]: I0114 06:26:28.528861 2566 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 06:26:29.090232 systemd[1]: Reload requested from client PID 2849 ('systemctl') (unit session-10.scope)... Jan 14 06:26:29.090845 systemd[1]: Reloading... Jan 14 06:26:29.239636 zram_generator::config[2893]: No configuration found. Jan 14 06:26:29.626795 systemd[1]: Reloading finished in 535 ms. Jan 14 06:26:29.662970 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 06:26:29.674216 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 06:26:29.674777 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:26:29.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:29.678029 kernel: kauditd_printk_skb: 204 callbacks suppressed Jan 14 06:26:29.678125 kernel: audit: type=1131 audit(1768371989.674:401): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:29.682643 systemd[1]: kubelet.service: Consumed 1.676s CPU time, 129M memory peak. Jan 14 06:26:29.686521 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 06:26:29.690512 kernel: audit: type=1334 audit(1768371989.686:402): prog-id=116 op=LOAD Jan 14 06:26:29.690634 kernel: audit: type=1334 audit(1768371989.686:403): prog-id=79 op=UNLOAD Jan 14 06:26:29.686000 audit: BPF prog-id=116 op=LOAD Jan 14 06:26:29.686000 audit: BPF prog-id=79 op=UNLOAD Jan 14 06:26:29.688000 audit: BPF prog-id=117 op=LOAD Jan 14 06:26:29.694619 kernel: audit: type=1334 audit(1768371989.688:404): prog-id=117 op=LOAD Jan 14 06:26:29.700833 kernel: audit: type=1334 audit(1768371989.688:405): prog-id=82 op=UNLOAD Jan 14 06:26:29.700925 kernel: audit: type=1334 audit(1768371989.688:406): prog-id=118 op=LOAD Jan 14 06:26:29.688000 audit: BPF prog-id=82 op=UNLOAD Jan 14 06:26:29.688000 audit: BPF prog-id=118 op=LOAD Jan 14 06:26:29.688000 audit: BPF prog-id=119 op=LOAD Jan 14 06:26:29.689000 audit: BPF prog-id=83 op=UNLOAD Jan 14 06:26:29.702986 kernel: audit: type=1334 audit(1768371989.688:407): prog-id=119 op=LOAD Jan 14 06:26:29.703059 kernel: audit: type=1334 audit(1768371989.689:408): prog-id=83 op=UNLOAD Jan 14 06:26:29.689000 audit: BPF prog-id=84 op=UNLOAD Jan 14 06:26:29.705276 kernel: audit: type=1334 audit(1768371989.689:409): prog-id=84 op=UNLOAD Jan 14 06:26:29.705386 kernel: audit: type=1334 audit(1768371989.690:410): prog-id=120 op=LOAD Jan 14 06:26:29.690000 audit: BPF prog-id=120 op=LOAD Jan 14 06:26:29.690000 audit: BPF prog-id=73 op=UNLOAD Jan 14 06:26:29.690000 audit: BPF prog-id=121 op=LOAD Jan 14 06:26:29.690000 audit: BPF prog-id=122 op=LOAD Jan 14 06:26:29.690000 audit: BPF prog-id=74 op=UNLOAD Jan 14 06:26:29.690000 audit: BPF prog-id=75 op=UNLOAD Jan 14 06:26:29.691000 audit: BPF prog-id=123 op=LOAD Jan 14 06:26:29.691000 audit: BPF prog-id=69 op=UNLOAD Jan 14 06:26:29.691000 audit: BPF prog-id=124 op=LOAD Jan 14 06:26:29.691000 audit: BPF prog-id=125 op=LOAD Jan 14 06:26:29.691000 audit: BPF prog-id=70 op=UNLOAD Jan 14 06:26:29.691000 audit: BPF prog-id=71 op=UNLOAD Jan 14 06:26:29.692000 audit: BPF prog-id=126 op=LOAD Jan 14 06:26:29.692000 audit: BPF prog-id=76 op=UNLOAD Jan 14 06:26:29.692000 audit: BPF prog-id=127 op=LOAD Jan 14 06:26:29.692000 audit: BPF prog-id=128 op=LOAD Jan 14 06:26:29.693000 audit: BPF prog-id=77 op=UNLOAD Jan 14 06:26:29.693000 audit: BPF prog-id=78 op=UNLOAD Jan 14 06:26:29.694000 audit: BPF prog-id=129 op=LOAD Jan 14 06:26:29.694000 audit: BPF prog-id=85 op=UNLOAD Jan 14 06:26:29.696000 audit: BPF prog-id=130 op=LOAD Jan 14 06:26:29.696000 audit: BPF prog-id=65 op=UNLOAD Jan 14 06:26:29.697000 audit: BPF prog-id=131 op=LOAD Jan 14 06:26:29.697000 audit: BPF prog-id=132 op=LOAD Jan 14 06:26:29.697000 audit: BPF prog-id=66 op=UNLOAD Jan 14 06:26:29.697000 audit: BPF prog-id=67 op=UNLOAD Jan 14 06:26:29.697000 audit: BPF prog-id=133 op=LOAD Jan 14 06:26:29.697000 audit: BPF prog-id=134 op=LOAD Jan 14 06:26:29.697000 audit: BPF prog-id=80 op=UNLOAD Jan 14 06:26:29.697000 audit: BPF prog-id=81 op=UNLOAD Jan 14 06:26:29.698000 audit: BPF prog-id=135 op=LOAD Jan 14 06:26:29.698000 audit: BPF prog-id=72 op=UNLOAD Jan 14 06:26:29.700000 audit: BPF prog-id=136 op=LOAD Jan 14 06:26:29.700000 audit: BPF prog-id=68 op=UNLOAD Jan 14 06:26:29.981867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 06:26:29.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:26:29.995950 (kubelet)[2961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 06:26:30.084239 kubelet[2961]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 06:26:30.084239 kubelet[2961]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 06:26:30.084239 kubelet[2961]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 06:26:30.085269 kubelet[2961]: I0114 06:26:30.085021 2961 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 06:26:30.094554 kubelet[2961]: I0114 06:26:30.094527 2961 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 06:26:30.094760 kubelet[2961]: I0114 06:26:30.094729 2961 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 06:26:30.095170 kubelet[2961]: I0114 06:26:30.095151 2961 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 06:26:30.097225 kubelet[2961]: I0114 06:26:30.097202 2961 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 06:26:30.106602 kubelet[2961]: I0114 06:26:30.106554 2961 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 06:26:30.123681 kubelet[2961]: I0114 06:26:30.123644 2961 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 06:26:30.139462 kubelet[2961]: I0114 06:26:30.139404 2961 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 06:26:30.139971 kubelet[2961]: I0114 06:26:30.139922 2961 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 06:26:30.140162 kubelet[2961]: I0114 06:26:30.139962 2961 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-i1yja.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 06:26:30.140399 kubelet[2961]: I0114 06:26:30.140174 2961 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 06:26:30.140399 kubelet[2961]: I0114 06:26:30.140194 2961 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 06:26:30.140399 kubelet[2961]: I0114 06:26:30.140271 2961 state_mem.go:36] "Initialized new in-memory state store" Jan 14 06:26:30.140990 kubelet[2961]: I0114 06:26:30.140968 2961 kubelet.go:480] "Attempting to sync node with API server" Jan 14 06:26:30.141072 kubelet[2961]: I0114 06:26:30.140999 2961 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 06:26:30.141072 kubelet[2961]: I0114 06:26:30.141041 2961 kubelet.go:386] "Adding apiserver pod source" Jan 14 06:26:30.141157 kubelet[2961]: I0114 06:26:30.141075 2961 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 06:26:30.152020 kubelet[2961]: I0114 06:26:30.150649 2961 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 06:26:30.152020 kubelet[2961]: I0114 06:26:30.151373 2961 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 06:26:30.161581 kubelet[2961]: I0114 06:26:30.161544 2961 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 06:26:30.162627 kubelet[2961]: I0114 06:26:30.162608 2961 server.go:1289] "Started kubelet" Jan 14 06:26:30.167663 kubelet[2961]: I0114 06:26:30.163421 2961 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 
14 06:26:30.168590 kubelet[2961]: I0114 06:26:30.168546 2961 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 06:26:30.170416 kubelet[2961]: I0114 06:26:30.170386 2961 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 06:26:30.175735 kubelet[2961]: I0114 06:26:30.175646 2961 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 06:26:30.190681 kubelet[2961]: I0114 06:26:30.190634 2961 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 06:26:30.198293 kubelet[2961]: I0114 06:26:30.196168 2961 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 06:26:30.198899 kubelet[2961]: I0114 06:26:30.198867 2961 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 06:26:30.199209 kubelet[2961]: I0114 06:26:30.199114 2961 reconciler.go:26] "Reconciler: start to sync state" Jan 14 06:26:30.203714 kubelet[2961]: E0114 06:26:30.203656 2961 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 06:26:30.205293 kubelet[2961]: I0114 06:26:30.205263 2961 factory.go:223] Registration of the systemd container factory successfully Jan 14 06:26:30.206666 kubelet[2961]: I0114 06:26:30.206614 2961 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 06:26:30.210409 kubelet[2961]: I0114 06:26:30.208929 2961 server.go:317] "Adding debug handlers to kubelet server" Jan 14 06:26:30.220534 kubelet[2961]: I0114 06:26:30.220505 2961 factory.go:223] Registration of the containerd container factory successfully Jan 14 06:26:30.248962 kubelet[2961]: I0114 06:26:30.247618 2961 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 14 06:26:30.282590 kubelet[2961]: I0114 06:26:30.281914 2961 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 06:26:30.282590 kubelet[2961]: I0114 06:26:30.281961 2961 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 06:26:30.282590 kubelet[2961]: I0114 06:26:30.282008 2961 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 14 06:26:30.282590 kubelet[2961]: I0114 06:26:30.282029 2961 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 06:26:30.282590 kubelet[2961]: E0114 06:26:30.282107 2961 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 06:26:30.338669 kubelet[2961]: I0114 06:26:30.338622 2961 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 06:26:30.338669 kubelet[2961]: I0114 06:26:30.338650 2961 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 06:26:30.338669 kubelet[2961]: I0114 06:26:30.338681 2961 state_mem.go:36] "Initialized new in-memory state store" Jan 14 06:26:30.339059 kubelet[2961]: I0114 06:26:30.338946 2961 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 06:26:30.339059 kubelet[2961]: I0114 06:26:30.338981 2961 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 06:26:30.339059 kubelet[2961]: I0114 06:26:30.339011 2961 policy_none.go:49] "None policy: Start" Jan 14 06:26:30.339059 kubelet[2961]: I0114 06:26:30.339037 2961 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 06:26:30.339270 kubelet[2961]: I0114 06:26:30.339065 2961 state_mem.go:35] "Initializing new in-memory state store" Jan 14 06:26:30.339270 kubelet[2961]: I0114 06:26:30.339252 2961 state_mem.go:75] "Updated machine memory state" Jan 14 06:26:30.351302 kubelet[2961]: E0114 06:26:30.350836 2961 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 06:26:30.351302 kubelet[2961]: I0114 06:26:30.351203 2961 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 06:26:30.351471 kubelet[2961]: I0114 06:26:30.351224 2961 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 06:26:30.354109 kubelet[2961]: I0114 06:26:30.353469 2961 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 06:26:30.358111 kubelet[2961]: E0114 06:26:30.356710 2961 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 06:26:30.387589 kubelet[2961]: I0114 06:26:30.387347 2961 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:30.390588 kubelet[2961]: I0114 06:26:30.389284 2961 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:30.392309 kubelet[2961]: I0114 06:26:30.391854 2961 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:30.402170 kubelet[2961]: I0114 06:26:30.401116 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/55912d061cd714e18f6c6e544c0df762-ca-certs\") pod \"kube-controller-manager-srv-i1yja.gb1.brightbox.com\" (UID: \"55912d061cd714e18f6c6e544c0df762\") " pod="kube-system/kube-controller-manager-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:30.402170 kubelet[2961]: I0114 06:26:30.401285 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/55912d061cd714e18f6c6e544c0df762-flexvolume-dir\") pod \"kube-controller-manager-srv-i1yja.gb1.brightbox.com\" (UID: \"55912d061cd714e18f6c6e544c0df762\") " pod="kube-system/kube-controller-manager-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:30.402170 kubelet[2961]: I0114 06:26:30.401403 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/967c1e2e9f8a7ece6b847440ecb0c2ca-kubeconfig\") pod \"kube-scheduler-srv-i1yja.gb1.brightbox.com\" (UID: \"967c1e2e9f8a7ece6b847440ecb0c2ca\") " pod="kube-system/kube-scheduler-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:30.402170 kubelet[2961]: I0114 06:26:30.401536 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42447cfc0546aae68877a2e537db9c8d-ca-certs\") pod \"kube-apiserver-srv-i1yja.gb1.brightbox.com\" (UID: \"42447cfc0546aae68877a2e537db9c8d\") " pod="kube-system/kube-apiserver-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:30.402170 kubelet[2961]: I0114 06:26:30.401660 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42447cfc0546aae68877a2e537db9c8d-k8s-certs\") pod \"kube-apiserver-srv-i1yja.gb1.brightbox.com\" (UID: \"42447cfc0546aae68877a2e537db9c8d\") " pod="kube-system/kube-apiserver-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:30.402652 kubelet[2961]: I0114 06:26:30.401695 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42447cfc0546aae68877a2e537db9c8d-usr-share-ca-certificates\") pod \"kube-apiserver-srv-i1yja.gb1.brightbox.com\" (UID: \"42447cfc0546aae68877a2e537db9c8d\") " pod="kube-system/kube-apiserver-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:30.402652 kubelet[2961]: I0114 06:26:30.401783 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/55912d061cd714e18f6c6e544c0df762-k8s-certs\") pod \"kube-controller-manager-srv-i1yja.gb1.brightbox.com\" (UID: \"55912d061cd714e18f6c6e544c0df762\") " 
pod="kube-system/kube-controller-manager-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:30.402652 kubelet[2961]: I0114 06:26:30.401922 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55912d061cd714e18f6c6e544c0df762-kubeconfig\") pod \"kube-controller-manager-srv-i1yja.gb1.brightbox.com\" (UID: \"55912d061cd714e18f6c6e544c0df762\") " pod="kube-system/kube-controller-manager-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:30.402652 kubelet[2961]: I0114 06:26:30.401991 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/55912d061cd714e18f6c6e544c0df762-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-i1yja.gb1.brightbox.com\" (UID: \"55912d061cd714e18f6c6e544c0df762\") " pod="kube-system/kube-controller-manager-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:30.416819 kubelet[2961]: I0114 06:26:30.415510 2961 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 06:26:30.423642 kubelet[2961]: I0114 06:26:30.423461 2961 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 06:26:30.423642 kubelet[2961]: E0114 06:26:30.423544 2961 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-i1yja.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:30.429829 kubelet[2961]: I0114 06:26:30.429804 2961 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 14 06:26:30.430011 kubelet[2961]: E0114 06:26:30.429979 2961 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-i1yja.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-i1yja.gb1.brightbox.com" Jan 14 06:26:30.481640 kubelet[2961]: I0114 06:26:30.480265 2961 kubelet_node_status.go:75] "Attempting to register node" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:30.493146 kubelet[2961]: I0114 06:26:30.493097 2961 kubelet_node_status.go:124] "Node was previously registered" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:30.493273 kubelet[2961]: I0114 06:26:30.493217 2961 kubelet_node_status.go:78] "Successfully registered node" node="srv-i1yja.gb1.brightbox.com" Jan 14 06:26:31.143842 kubelet[2961]: I0114 06:26:31.143750 2961 apiserver.go:52] "Watching apiserver" Jan 14 06:26:31.199952 kubelet[2961]: I0114 06:26:31.199891 2961 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 06:26:31.423887 kubelet[2961]: I0114 06:26:31.423411 2961 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-i1yja.gb1.brightbox.com" podStartSLOduration=3.423381203 podStartE2EDuration="3.423381203s" podCreationTimestamp="2026-01-14 06:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 06:26:31.421324179 +0000 UTC m=+1.407083723" watchObservedRunningTime="2026-01-14 06:26:31.423381203 +0000 UTC m=+1.409140736" Jan 14 06:26:31.424608 
kubelet[2961]: I0114 06:26:31.423540 2961 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-i1yja.gb1.brightbox.com" podStartSLOduration=6.423532149 podStartE2EDuration="6.423532149s" podCreationTimestamp="2026-01-14 06:26:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 06:26:31.380421089 +0000 UTC m=+1.366180633" watchObservedRunningTime="2026-01-14 06:26:31.423532149 +0000 UTC m=+1.409291699" Jan 14 06:26:31.496330 kubelet[2961]: I0114 06:26:31.496201 2961 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-i1yja.gb1.brightbox.com" podStartSLOduration=1.496173916 podStartE2EDuration="1.496173916s" podCreationTimestamp="2026-01-14 06:26:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 06:26:31.495943103 +0000 UTC m=+1.481702651" watchObservedRunningTime="2026-01-14 06:26:31.496173916 +0000 UTC m=+1.481933448" Jan 14 06:26:33.897588 kubelet[2961]: I0114 06:26:33.897464 2961 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 06:26:33.898688 containerd[1636]: time="2026-01-14T06:26:33.898642592Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 06:26:33.899466 kubelet[2961]: I0114 06:26:33.899333 2961 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 06:26:34.801880 systemd[1]: Created slice kubepods-besteffort-pode0dc062f_6ceb_4e32_8bb4_4cc1f1f5423a.slice - libcontainer container kubepods-besteffort-pode0dc062f_6ceb_4e32_8bb4_4cc1f1f5423a.slice. 
Jan 14 06:26:34.828592 kubelet[2961]: I0114 06:26:34.828427 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0dc062f-6ceb-4e32-8bb4-4cc1f1f5423a-xtables-lock\") pod \"kube-proxy-qmw2r\" (UID: \"e0dc062f-6ceb-4e32-8bb4-4cc1f1f5423a\") " pod="kube-system/kube-proxy-qmw2r" Jan 14 06:26:34.828592 kubelet[2961]: I0114 06:26:34.828493 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0dc062f-6ceb-4e32-8bb4-4cc1f1f5423a-lib-modules\") pod \"kube-proxy-qmw2r\" (UID: \"e0dc062f-6ceb-4e32-8bb4-4cc1f1f5423a\") " pod="kube-system/kube-proxy-qmw2r" Jan 14 06:26:34.828592 kubelet[2961]: I0114 06:26:34.828541 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wb2w\" (UniqueName: \"kubernetes.io/projected/e0dc062f-6ceb-4e32-8bb4-4cc1f1f5423a-kube-api-access-7wb2w\") pod \"kube-proxy-qmw2r\" (UID: \"e0dc062f-6ceb-4e32-8bb4-4cc1f1f5423a\") " pod="kube-system/kube-proxy-qmw2r" Jan 14 06:26:34.829249 kubelet[2961]: I0114 06:26:34.829152 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e0dc062f-6ceb-4e32-8bb4-4cc1f1f5423a-kube-proxy\") pod \"kube-proxy-qmw2r\" (UID: \"e0dc062f-6ceb-4e32-8bb4-4cc1f1f5423a\") " pod="kube-system/kube-proxy-qmw2r" Jan 14 06:26:35.091068 systemd[1]: Created slice kubepods-besteffort-pod4a4ad63e_3b1e_41fc_944d_0d5c09453f6d.slice - libcontainer container kubepods-besteffort-pod4a4ad63e_3b1e_41fc_944d_0d5c09453f6d.slice. Jan 14 06:26:35.114086 containerd[1636]: time="2026-01-14T06:26:35.114025116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qmw2r,Uid:e0dc062f-6ceb-4e32-8bb4-4cc1f1f5423a,Namespace:kube-system,Attempt:0,}" Jan 14 06:26:35.134524 kubelet[2961]: I0114 06:26:35.131839 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4a4ad63e-3b1e-41fc-944d-0d5c09453f6d-var-lib-calico\") pod \"tigera-operator-7dcd859c48-jtkkr\" (UID: \"4a4ad63e-3b1e-41fc-944d-0d5c09453f6d\") " pod="tigera-operator/tigera-operator-7dcd859c48-jtkkr" Jan 14 06:26:35.134524 kubelet[2961]: I0114 06:26:35.131932 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbffz\" (UniqueName: \"kubernetes.io/projected/4a4ad63e-3b1e-41fc-944d-0d5c09453f6d-kube-api-access-pbffz\") pod \"tigera-operator-7dcd859c48-jtkkr\" (UID: \"4a4ad63e-3b1e-41fc-944d-0d5c09453f6d\") " pod="tigera-operator/tigera-operator-7dcd859c48-jtkkr" Jan 14 06:26:35.158008 containerd[1636]: time="2026-01-14T06:26:35.157960631Z" level=info msg="connecting to shim f3b665d5b9d75595f20b0a42d4621c41b9684eaaa9fdf5e016b7a76954617f4d" address="unix:///run/containerd/s/7ad06d517a3327612e9763de15fcf512caf541592e66134e4816964b9376bec5" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:26:35.207889 systemd[1]: Started cri-containerd-f3b665d5b9d75595f20b0a42d4621c41b9684eaaa9fdf5e016b7a76954617f4d.scope - libcontainer container f3b665d5b9d75595f20b0a42d4621c41b9684eaaa9fdf5e016b7a76954617f4d. 
Jan 14 06:26:35.228000 audit: BPF prog-id=137 op=LOAD Jan 14 06:26:35.232506 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 14 06:26:35.232616 kernel: audit: type=1334 audit(1768371995.228:445): prog-id=137 op=LOAD Jan 14 06:26:35.234000 audit: BPF prog-id=138 op=LOAD Jan 14 06:26:35.234000 audit[3036]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=3023 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.239708 kernel: audit: type=1334 audit(1768371995.234:446): prog-id=138 op=LOAD Jan 14 06:26:35.239778 kernel: audit: type=1300 audit(1768371995.234:446): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c238 a2=98 a3=0 items=0 ppid=3023 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.234000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633623636356435623964373535393566323062306134326434363231 Jan 14 06:26:35.244765 kernel: audit: type=1327 audit(1768371995.234:446): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633623636356435623964373535393566323062306134326434363231 Jan 14 06:26:35.235000 audit: BPF prog-id=138 op=UNLOAD Jan 14 06:26:35.248817 kernel: audit: type=1334 audit(1768371995.235:447): prog-id=138 op=UNLOAD Jan 14 06:26:35.235000 audit[3036]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3023 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.251488 kernel: audit: type=1300 audit(1768371995.235:447): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3023 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633623636356435623964373535393566323062306134326434363231 Jan 14 06:26:35.256624 kernel: audit: type=1327 audit(1768371995.235:447): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633623636356435623964373535393566323062306134326434363231 Jan 14 06:26:35.235000 audit: BPF prog-id=139 op=LOAD Jan 14 06:26:35.261426 kernel: audit: type=1334 audit(1768371995.235:448): prog-id=139 op=LOAD Jan 14 06:26:35.235000 audit[3036]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3023 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.263733 kernel: audit: type=1300 audit(1768371995.235:448): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c488 a2=98 a3=0 items=0 ppid=3023 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633623636356435623964373535393566323062306134326434363231 Jan 14 06:26:35.268831 kernel: audit: type=1327 audit(1768371995.235:448): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633623636356435623964373535393566323062306134326434363231 Jan 14 06:26:35.235000 audit: BPF prog-id=140 op=LOAD Jan 14 06:26:35.235000 audit[3036]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00010c218 a2=98 a3=0 items=0 ppid=3023 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633623636356435623964373535393566323062306134326434363231 Jan 14 06:26:35.235000 audit: BPF prog-id=140 op=UNLOAD Jan 14 06:26:35.235000 audit[3036]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3023 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633623636356435623964373535393566323062306134326434363231 Jan 14 06:26:35.235000 audit: BPF prog-id=139 op=UNLOAD Jan 14 06:26:35.235000 audit[3036]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3023 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.235000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633623636356435623964373535393566323062306134326434363231 Jan 14 06:26:35.235000 audit: BPF prog-id=141 op=LOAD Jan 14 06:26:35.235000 audit[3036]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00010c6e8 a2=98 a3=0 items=0 ppid=3023 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.235000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633623636356435623964373535393566323062306134326434363231 Jan 14 06:26:35.303540 containerd[1636]: time="2026-01-14T06:26:35.303484283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qmw2r,Uid:e0dc062f-6ceb-4e32-8bb4-4cc1f1f5423a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3b665d5b9d75595f20b0a42d4621c41b9684eaaa9fdf5e016b7a76954617f4d\"" Jan 14 06:26:35.309601 containerd[1636]: time="2026-01-14T06:26:35.309429627Z" level=info msg="CreateContainer within sandbox \"f3b665d5b9d75595f20b0a42d4621c41b9684eaaa9fdf5e016b7a76954617f4d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 06:26:35.325409 containerd[1636]: time="2026-01-14T06:26:35.325266330Z" level=info msg="Container b1a2aa0ad2fd08abd87e53e6e99f1991080250857089f73684f300792409df53: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:26:35.333587 containerd[1636]: time="2026-01-14T06:26:35.333513091Z" level=info msg="CreateContainer within sandbox \"f3b665d5b9d75595f20b0a42d4621c41b9684eaaa9fdf5e016b7a76954617f4d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b1a2aa0ad2fd08abd87e53e6e99f1991080250857089f73684f300792409df53\"" Jan 14 06:26:35.334739 containerd[1636]: time="2026-01-14T06:26:35.334711259Z" level=info msg="StartContainer for \"b1a2aa0ad2fd08abd87e53e6e99f1991080250857089f73684f300792409df53\"" Jan 14 06:26:35.336971 containerd[1636]: time="2026-01-14T06:26:35.336887864Z" level=info msg="connecting to shim b1a2aa0ad2fd08abd87e53e6e99f1991080250857089f73684f300792409df53" address="unix:///run/containerd/s/7ad06d517a3327612e9763de15fcf512caf541592e66134e4816964b9376bec5" protocol=ttrpc version=3 Jan 14 06:26:35.366881 systemd[1]: Started cri-containerd-b1a2aa0ad2fd08abd87e53e6e99f1991080250857089f73684f300792409df53.scope - libcontainer container b1a2aa0ad2fd08abd87e53e6e99f1991080250857089f73684f300792409df53. 
Jan 14 06:26:35.396286 containerd[1636]: time="2026-01-14T06:26:35.396193663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jtkkr,Uid:4a4ad63e-3b1e-41fc-944d-0d5c09453f6d,Namespace:tigera-operator,Attempt:0,}" Jan 14 06:26:35.422920 containerd[1636]: time="2026-01-14T06:26:35.422748456Z" level=info msg="connecting to shim 47aa6709bcfb453fcb0c1a26748efb21cb08a56c9e4b3bc38eccceb337f06ebd" address="unix:///run/containerd/s/6184f134cba6ab7306b09a26144a8f1184f95af4862968afb81c8d5875bd636b" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:26:35.428000 audit: BPF prog-id=142 op=LOAD Jan 14 06:26:35.428000 audit[3064]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3023 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613261613061643266643038616264383765353365366539396631 Jan 14 06:26:35.428000 audit: BPF prog-id=143 op=LOAD Jan 14 06:26:35.428000 audit[3064]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3023 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613261613061643266643038616264383765353365366539396631 Jan 14 06:26:35.428000 audit: BPF prog-id=143 op=UNLOAD Jan 14 06:26:35.428000 audit[3064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3023 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613261613061643266643038616264383765353365366539396631 Jan 14 06:26:35.428000 audit: BPF prog-id=142 op=UNLOAD Jan 14 06:26:35.428000 audit[3064]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3023 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613261613061643266643038616264383765353365366539396631 Jan 14 06:26:35.428000 audit: BPF prog-id=144 op=LOAD Jan 14 06:26:35.428000 audit[3064]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3023 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231613261613061643266643038616264383765353365366539396631 Jan 14 06:26:35.469057 systemd[1]: Started cri-containerd-47aa6709bcfb453fcb0c1a26748efb21cb08a56c9e4b3bc38eccceb337f06ebd.scope - libcontainer container 47aa6709bcfb453fcb0c1a26748efb21cb08a56c9e4b3bc38eccceb337f06ebd. Jan 14 06:26:35.476235 containerd[1636]: time="2026-01-14T06:26:35.476188486Z" level=info msg="StartContainer for \"b1a2aa0ad2fd08abd87e53e6e99f1991080250857089f73684f300792409df53\" returns successfully" Jan 14 06:26:35.503000 audit: BPF prog-id=145 op=LOAD Jan 14 06:26:35.504000 audit: BPF prog-id=146 op=LOAD Jan 14 06:26:35.504000 audit[3104]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3091 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437616136373039626366623435336663623063316132363734386566 Jan 14 06:26:35.504000 audit: BPF prog-id=146 op=UNLOAD Jan 14 06:26:35.504000 audit[3104]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3091 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437616136373039626366623435336663623063316132363734386566 Jan 14 06:26:35.504000 audit: BPF prog-id=147 op=LOAD Jan 14 06:26:35.504000 audit[3104]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3091 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437616136373039626366623435336663623063316132363734386566 Jan 14 06:26:35.504000 audit: BPF prog-id=148 op=LOAD Jan 14 06:26:35.504000 audit[3104]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3091 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.504000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437616136373039626366623435336663623063316132363734386566 Jan 14 06:26:35.504000 audit: BPF prog-id=148 op=UNLOAD Jan 14 06:26:35.504000 audit[3104]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3091 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437616136373039626366623435336663623063316132363734386566 Jan 14 06:26:35.504000 audit: BPF prog-id=147 op=UNLOAD Jan 14 06:26:35.504000 audit[3104]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3091 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437616136373039626366623435336663623063316132363734386566 Jan 14 06:26:35.504000 audit: BPF prog-id=149 op=LOAD Jan 14 06:26:35.504000 audit[3104]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3091 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437616136373039626366623435336663623063316132363734386566 Jan 14 06:26:35.569668 containerd[1636]: time="2026-01-14T06:26:35.569539643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jtkkr,Uid:4a4ad63e-3b1e-41fc-944d-0d5c09453f6d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"47aa6709bcfb453fcb0c1a26748efb21cb08a56c9e4b3bc38eccceb337f06ebd\"" Jan 14 06:26:35.572821 containerd[1636]: time="2026-01-14T06:26:35.572745518Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 06:26:35.945000 audit[3177]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3177 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:35.945000 audit[3177]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd9ff717f0 a2=0 a3=7ffd9ff717dc items=0 ppid=3077 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.945000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 06:26:35.948000 audit[3179]: NETFILTER_CFG table=nat:55 family=2 entries=1 
op=nft_register_chain pid=3179 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:35.948000 audit[3179]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef47d7940 a2=0 a3=7ffef47d792c items=0 ppid=3077 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.948000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 06:26:35.953000 audit[3183]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:35.953000 audit[3183]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff7d198a50 a2=0 a3=7fff7d198a3c items=0 ppid=3077 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.953000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 06:26:35.953000 audit[3181]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3181 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:35.953000 audit[3181]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd121a9db0 a2=0 a3=7ffd121a9d9c items=0 ppid=3077 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 06:26:35.955000 audit[3184]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:35.955000 audit[3184]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff87d6ee60 a2=0 a3=7fff87d6ee4c items=0 ppid=3077 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.955000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 06:26:35.960000 audit[3185]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3185 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:35.960000 audit[3185]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe17911d00 a2=0 a3=7ffe17911cec items=0 ppid=3077 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:35.960000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 06:26:35.968813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount861379544.mount: Deactivated successfully. 
Jan 14 06:26:36.072000 audit[3186]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.072000 audit[3186]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcd98b8aa0 a2=0 a3=7ffcd98b8a8c items=0 ppid=3077 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.072000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 06:26:36.078000 audit[3188]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.078000 audit[3188]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffe1de594b0 a2=0 a3=7ffe1de5949c items=0 ppid=3077 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.078000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 06:26:36.084000 audit[3191]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.084000 audit[3191]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd4744e360 a2=0 a3=7ffd4744e34c items=0 ppid=3077 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.084000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 06:26:36.086000 audit[3192]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3192 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.086000 audit[3192]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff8ed812d0 a2=0 a3=7fff8ed812bc items=0 ppid=3077 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.086000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 06:26:36.090000 audit[3194]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3194 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.090000 audit[3194]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcc0d45a80 a2=0 a3=7ffcc0d45a6c items=0 ppid=3077 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
06:26:36.090000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 06:26:36.092000 audit[3195]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.092000 audit[3195]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe79b737b0 a2=0 a3=7ffe79b7379c items=0 ppid=3077 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.092000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 06:26:36.096000 audit[3197]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3197 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.096000 audit[3197]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe7e089640 a2=0 a3=7ffe7e08962c items=0 ppid=3077 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.096000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 06:26:36.102000 audit[3200]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.102000 audit[3200]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc1eb6a0a0 a2=0 a3=7ffc1eb6a08c items=0 ppid=3077 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.102000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 06:26:36.105000 audit[3201]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3201 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.105000 audit[3201]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce75bb7c0 a2=0 a3=7ffce75bb7ac items=0 ppid=3077 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.105000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 06:26:36.111000 audit[3203]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3203 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.111000 audit[3203]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffee50d9b10 
a2=0 a3=7ffee50d9afc items=0 ppid=3077 pid=3203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.111000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 06:26:36.114000 audit[3204]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3204 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.114000 audit[3204]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcf650e260 a2=0 a3=7ffcf650e24c items=0 ppid=3077 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.114000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 06:26:36.120000 audit[3206]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3206 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.120000 audit[3206]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff1863af20 a2=0 a3=7fff1863af0c items=0 ppid=3077 pid=3206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.120000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 06:26:36.125000 audit[3209]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3209 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.125000 audit[3209]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffd1113ce0 a2=0 a3=7fffd1113ccc items=0 ppid=3077 pid=3209 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.125000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 06:26:36.131000 audit[3212]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.131000 audit[3212]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe96763c70 a2=0 a3=7ffe96763c5c items=0 ppid=3077 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.131000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 06:26:36.133000 audit[3213]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3213 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.133000 audit[3213]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffeee277d80 a2=0 a3=7ffeee277d6c items=0 ppid=3077 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.133000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 06:26:36.138000 audit[3215]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3215 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.138000 audit[3215]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffeeb493150 a2=0 a3=7ffeeb49313c items=0 ppid=3077 pid=3215 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.138000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 06:26:36.144000 audit[3218]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.144000 audit[3218]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd030904b0 a2=0 a3=7ffd0309049c items=0 ppid=3077 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.144000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 06:26:36.145000 audit[3219]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.145000 audit[3219]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffece2da1d0 a2=0 a3=7ffece2da1bc items=0 ppid=3077 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.145000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 06:26:36.149000 audit[3221]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3221 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 06:26:36.149000 audit[3221]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff084f47a0 a2=0 a3=7fff084f478c items=0 ppid=3077 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.149000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 06:26:36.183000 audit[3227]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:36.183000 audit[3227]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff5fd849e0 a2=0 a3=7fff5fd849cc items=0 ppid=3077 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.183000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:36.190000 audit[3227]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3227 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:36.190000 audit[3227]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff5fd849e0 a2=0 a3=7fff5fd849cc items=0 ppid=3077 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.190000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:36.193000 audit[3232]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.193000 audit[3232]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffc08273d80 a2=0 a3=7ffc08273d6c items=0 ppid=3077 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.193000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 06:26:36.197000 audit[3234]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3234 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.197000 audit[3234]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc3d539910 a2=0 a3=7ffc3d5398fc items=0 ppid=3077 pid=3234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.197000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 06:26:36.205000 audit[3237]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3237 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.205000 audit[3237]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=752 a0=3 a1=7ffe26c919e0 a2=0 a3=7ffe26c919cc items=0 ppid=3077 pid=3237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.205000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 06:26:36.210000 audit[3238]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3238 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.210000 audit[3238]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7062edd0 a2=0 a3=7ffd7062edbc items=0 ppid=3077 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.210000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 06:26:36.214000 audit[3240]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3240 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.214000 audit[3240]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffccbd03ab0 a2=0 a3=7ffccbd03a9c items=0 ppid=3077 pid=3240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.214000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 06:26:36.216000 audit[3241]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3241 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.216000 audit[3241]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd45871840 a2=0 a3=7ffd4587182c items=0 ppid=3077 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.216000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 06:26:36.220000 audit[3243]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3243 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.220000 audit[3243]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffd5a2deb0 a2=0 a3=7fffd5a2de9c items=0 ppid=3077 pid=3243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.220000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 06:26:36.226000 audit[3246]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3246 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.226000 audit[3246]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffd906d34b0 a2=0 a3=7ffd906d349c items=0 ppid=3077 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.226000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 06:26:36.228000 audit[3247]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3247 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.228000 audit[3247]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc23cc75c0 a2=0 a3=7ffc23cc75ac items=0 ppid=3077 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.228000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 06:26:36.232000 audit[3249]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3249 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.232000 audit[3249]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffc080f100 a2=0 a3=7fffc080f0ec items=0 ppid=3077 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.232000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 06:26:36.234000 audit[3250]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.234000 audit[3250]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc846d1880 a2=0 a3=7ffc846d186c items=0 ppid=3077 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.234000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 06:26:36.238000 audit[3252]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3252 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.238000 audit[3252]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdc78b17c0 a2=0 a3=7ffdc78b17ac 
items=0 ppid=3077 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.238000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 06:26:36.246000 audit[3255]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3255 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.246000 audit[3255]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffd7b85cc0 a2=0 a3=7fffd7b85cac items=0 ppid=3077 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.246000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 06:26:36.253000 audit[3258]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3258 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.253000 audit[3258]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcb303c7f0 a2=0 a3=7ffcb303c7dc items=0 ppid=3077 pid=3258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.253000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 06:26:36.255000 audit[3259]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3259 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.255000 audit[3259]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe3333dcc0 a2=0 a3=7ffe3333dcac items=0 ppid=3077 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.255000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 06:26:36.258000 audit[3261]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3261 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.258000 audit[3261]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe1e1075d0 a2=0 a3=7ffe1e1075bc items=0 ppid=3077 pid=3261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.258000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 06:26:36.264000 audit[3264]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3264 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.264000 audit[3264]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe48059d50 a2=0 a3=7ffe48059d3c items=0 ppid=3077 pid=3264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.264000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 06:26:36.265000 audit[3265]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.265000 audit[3265]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff7b7034c0 a2=0 a3=7fff7b7034ac items=0 ppid=3077 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.265000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 06:26:36.269000 audit[3267]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3267 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.269000 audit[3267]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffed3791200 a2=0 a3=7ffed37911ec items=0 ppid=3077 pid=3267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.269000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 06:26:36.271000 audit[3268]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3268 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.271000 audit[3268]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf4d37e00 a2=0 a3=7ffcf4d37dec items=0 ppid=3077 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.271000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 06:26:36.276000 audit[3270]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3270 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.276000 audit[3270]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffde6f66600 a2=0 a3=7ffde6f665ec items=0 ppid=3077 pid=3270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.276000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 06:26:36.281000 audit[3273]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3273 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 06:26:36.281000 audit[3273]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdb92aac20 a2=0 a3=7ffdb92aac0c items=0 ppid=3077 pid=3273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.281000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 06:26:36.288000 audit[3275]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3275 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 06:26:36.288000 audit[3275]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe96a0c9e0 a2=0 a3=7ffe96a0c9cc items=0 ppid=3077 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.288000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:36.289000 audit[3275]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3275 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 06:26:36.289000 audit[3275]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe96a0c9e0 a2=0 a3=7ffe96a0c9cc items=0 ppid=3077 pid=3275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:36.289000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:36.381549 kubelet[2961]: I0114 06:26:36.381315 2961 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qmw2r" podStartSLOduration=2.381270609 podStartE2EDuration="2.381270609s" podCreationTimestamp="2026-01-14 06:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 06:26:36.381164429 +0000 UTC m=+6.366923968" watchObservedRunningTime="2026-01-14 06:26:36.381270609 +0000 UTC m=+6.367030148" Jan 14 06:26:37.907432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount633569750.mount: Deactivated successfully. 
Jan 14 06:26:40.097082 containerd[1636]: time="2026-01-14T06:26:40.096984705Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:26:40.098341 containerd[1636]: time="2026-01-14T06:26:40.098260547Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Jan 14 06:26:40.099603 containerd[1636]: time="2026-01-14T06:26:40.099423928Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:26:40.103016 containerd[1636]: time="2026-01-14T06:26:40.102774630Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:26:40.104378 containerd[1636]: time="2026-01-14T06:26:40.104345454Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.531537742s" Jan 14 06:26:40.104588 containerd[1636]: time="2026-01-14T06:26:40.104532631Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 14 06:26:40.109908 containerd[1636]: time="2026-01-14T06:26:40.109830855Z" level=info msg="CreateContainer within sandbox \"47aa6709bcfb453fcb0c1a26748efb21cb08a56c9e4b3bc38eccceb337f06ebd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 06:26:40.123004 containerd[1636]: time="2026-01-14T06:26:40.121949147Z" level=info msg="Container 11ea782e8aaf8fdad2d35cf88f23fba940790df4c32099e1c4d4bd614b0fc93b: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:26:40.138461 containerd[1636]: time="2026-01-14T06:26:40.138410360Z" level=info msg="CreateContainer within sandbox \"47aa6709bcfb453fcb0c1a26748efb21cb08a56c9e4b3bc38eccceb337f06ebd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"11ea782e8aaf8fdad2d35cf88f23fba940790df4c32099e1c4d4bd614b0fc93b\"" Jan 14 06:26:40.139345 containerd[1636]: time="2026-01-14T06:26:40.139317534Z" level=info msg="StartContainer for \"11ea782e8aaf8fdad2d35cf88f23fba940790df4c32099e1c4d4bd614b0fc93b\"" Jan 14 06:26:40.141370 containerd[1636]: time="2026-01-14T06:26:40.141340363Z" level=info msg="connecting to shim 11ea782e8aaf8fdad2d35cf88f23fba940790df4c32099e1c4d4bd614b0fc93b" address="unix:///run/containerd/s/6184f134cba6ab7306b09a26144a8f1184f95af4862968afb81c8d5875bd636b" protocol=ttrpc version=3 Jan 14 06:26:40.180846 systemd[1]: Started cri-containerd-11ea782e8aaf8fdad2d35cf88f23fba940790df4c32099e1c4d4bd614b0fc93b.scope - libcontainer container 11ea782e8aaf8fdad2d35cf88f23fba940790df4c32099e1c4d4bd614b0fc93b. 
Jan 14 06:26:40.203000 audit: BPF prog-id=150 op=LOAD Jan 14 06:26:40.204000 audit: BPF prog-id=151 op=LOAD Jan 14 06:26:40.204000 audit[3285]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=3091 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:40.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131656137383265386161663866646164326433356366383866323366 Jan 14 06:26:40.204000 audit: BPF prog-id=151 op=UNLOAD Jan 14 06:26:40.204000 audit[3285]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3091 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:40.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131656137383265386161663866646164326433356366383866323366 Jan 14 06:26:40.204000 audit: BPF prog-id=152 op=LOAD Jan 14 06:26:40.204000 audit[3285]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=3091 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:40.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131656137383265386161663866646164326433356366383866323366 Jan 14 06:26:40.204000 audit: BPF prog-id=153 op=LOAD Jan 14 06:26:40.204000 audit[3285]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=3091 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:40.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131656137383265386161663866646164326433356366383866323366 Jan 14 06:26:40.204000 audit: BPF prog-id=153 op=UNLOAD Jan 14 06:26:40.204000 audit[3285]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3091 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:40.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131656137383265386161663866646164326433356366383866323366 Jan 14 06:26:40.204000 audit: BPF prog-id=152 op=UNLOAD Jan 14 06:26:40.204000 audit[3285]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3091 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:40.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131656137383265386161663866646164326433356366383866323366 Jan 14 06:26:40.204000 audit: BPF prog-id=154 op=LOAD Jan 14 06:26:40.204000 audit[3285]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=3091 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:40.204000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131656137383265386161663866646164326433356366383866323366 Jan 14 06:26:40.232360 containerd[1636]: time="2026-01-14T06:26:40.232317618Z" level=info msg="StartContainer for \"11ea782e8aaf8fdad2d35cf88f23fba940790df4c32099e1c4d4bd614b0fc93b\" returns successfully" Jan 14 06:26:46.777724 systemd[1]: Started sshd@9-10.230.48.98:22-64.225.73.213:50998.service - OpenSSH per-connection server daemon (64.225.73.213:50998). Jan 14 06:26:46.790532 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 14 06:26:46.790898 kernel: audit: type=1130 audit(1768372006.778:525): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.48.98:22-64.225.73.213:50998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:46.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.48.98:22-64.225.73.213:50998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:47.074389 sshd[3346]: Invalid user weblogic from 64.225.73.213 port 50998 Jan 14 06:26:47.182035 sshd[3346]: Connection closed by invalid user weblogic 64.225.73.213 port 50998 [preauth] Jan 14 06:26:47.182000 audit[3346]: USER_ERR pid=3346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:26:47.196064 kernel: audit: type=1109 audit(1768372007.182:526): pid=3346 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:26:47.193608 systemd[1]: sshd@9-10.230.48.98:22-64.225.73.213:50998.service: Deactivated successfully. Jan 14 06:26:47.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.48.98:22-64.225.73.213:50998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:26:47.208662 kernel: audit: type=1131 audit(1768372007.196:527): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.48.98:22-64.225.73.213:50998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:48.006961 sudo[1926]: pam_unix(sudo:session): session closed for user root Jan 14 06:26:48.008000 audit[1926]: USER_END pid=1926 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:26:48.018702 kernel: audit: type=1106 audit(1768372008.008:528): pid=1926 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:26:48.027000 audit[1926]: CRED_DISP pid=1926 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:26:48.033597 kernel: audit: type=1104 audit(1768372008.027:529): pid=1926 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 06:26:48.136296 sshd[1925]: Connection closed by 20.161.92.111 port 52154 Jan 14 06:26:48.138667 sshd-session[1921]: pam_unix(sshd:session): session closed for user core Jan 14 06:26:48.147000 audit[1921]: USER_END pid=1921 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:26:48.156608 kernel: audit: type=1106 audit(1768372008.147:530): pid=1921 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:26:48.160456 systemd[1]: sshd@6-10.230.48.98:22-20.161.92.111:52154.service: Deactivated successfully. Jan 14 06:26:48.148000 audit[1921]: CRED_DISP pid=1921 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:26:48.168737 kernel: audit: type=1104 audit(1768372008.148:531): pid=1921 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:26:48.171522 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 06:26:48.172637 systemd[1]: session-10.scope: Consumed 8.031s CPU time, 151.3M memory peak. Jan 14 06:26:48.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.230.48.98:22-20.161.92.111:52154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:26:48.186614 kernel: audit: type=1131 audit(1768372008.160:532): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.230.48.98:22-20.161.92.111:52154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:26:48.189672 systemd-logind[1607]: Session 10 logged out. Waiting for processes to exit. Jan 14 06:26:48.193442 systemd-logind[1607]: Removed session 10. Jan 14 06:26:48.778000 audit[3371]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:48.786599 kernel: audit: type=1325 audit(1768372008.778:533): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:48.778000 audit[3371]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdab4e8ee0 a2=0 a3=7ffdab4e8ecc items=0 ppid=3077 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:48.798597 kernel: audit: type=1300 audit(1768372008.778:533): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffdab4e8ee0 a2=0 a3=7ffdab4e8ecc items=0 ppid=3077 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:48.778000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:48.792000 audit[3371]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3371 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:48.792000 audit[3371]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdab4e8ee0 a2=0 a3=0 items=0 ppid=3077 pid=3371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:48.792000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:49.104000 audit[3373]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:49.104000 audit[3373]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff8e00a4e0 a2=0 a3=7fff8e00a4cc items=0 ppid=3077 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:49.104000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:49.111000 audit[3373]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:49.111000 audit[3373]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff8e00a4e0 a2=0 a3=0 items=0 ppid=3077 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:49.111000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:52.356000 audit[3375]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:52.364025 kernel: kauditd_printk_skb: 10 callbacks suppressed Jan 14 06:26:52.364137 kernel: audit: type=1325 audit(1768372012.356:537): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:52.356000 audit[3375]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffde0834f30 a2=0 a3=7ffde0834f1c items=0 ppid=3077 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:52.377588 kernel: audit: type=1300 audit(1768372012.356:537): arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffde0834f30 a2=0 a3=7ffde0834f1c items=0 ppid=3077 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:52.356000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:52.370000 audit[3375]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:52.384676 kernel: audit: type=1327 audit(1768372012.356:537): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:52.384784 kernel: audit: type=1325 audit(1768372012.370:538): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:52.370000 audit[3375]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffde0834f30 a2=0 a3=0 items=0 ppid=3077 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:52.393608 kernel: audit: type=1300 audit(1768372012.370:538): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffde0834f30 a2=0 a3=0 items=0 ppid=3077 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:52.370000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:52.399594 kernel: audit: type=1327 audit(1768372012.370:538): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:52.460000 audit[3377]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:52.466587 kernel: audit: type=1325 audit(1768372012.460:539): table=filter:111 family=2 entries=19 
op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:52.460000 audit[3377]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdd6c05cf0 a2=0 a3=7ffdd6c05cdc items=0 ppid=3077 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:52.473591 kernel: audit: type=1300 audit(1768372012.460:539): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdd6c05cf0 a2=0 a3=7ffdd6c05cdc items=0 ppid=3077 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:52.460000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:52.473000 audit[3377]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:52.479779 kernel: audit: type=1327 audit(1768372012.460:539): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:52.481780 kernel: audit: type=1325 audit(1768372012.473:540): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:52.473000 audit[3377]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdd6c05cf0 a2=0 a3=0 items=0 ppid=3077 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:52.473000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:54.914000 audit[3379]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:54.914000 audit[3379]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffface64770 a2=0 a3=7ffface6475c items=0 ppid=3077 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:54.914000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:54.920000 audit[3379]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3379 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:54.920000 audit[3379]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffface64770 a2=0 a3=0 items=0 ppid=3077 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:54.920000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:54.949000 audit[3381]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3381 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:54.949000 audit[3381]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd6a0ebd40 a2=0 a3=7ffd6a0ebd2c items=0 ppid=3077 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:54.949000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:54.956000 audit[3381]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3381 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:54.956000 audit[3381]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd6a0ebd40 a2=0 a3=0 items=0 ppid=3077 pid=3381 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:54.956000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:55.149121 kubelet[2961]: I0114 06:26:55.148357 2961 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-jtkkr" podStartSLOduration=16.614072547 podStartE2EDuration="21.148273597s" podCreationTimestamp="2026-01-14 06:26:34 +0000 UTC" firstStartedPulling="2026-01-14 06:26:35.571864891 +0000 UTC m=+5.557624417" lastFinishedPulling="2026-01-14 06:26:40.106065935 +0000 UTC m=+10.091825467" observedRunningTime="2026-01-14 06:26:40.38917642 +0000 UTC m=+10.374935958" watchObservedRunningTime="2026-01-14 06:26:55.148273597 +0000 UTC m=+25.134033138" Jan 14 06:26:55.174975 systemd[1]: Created slice kubepods-besteffort-podf16c1f57_0f3e_43a7_8bf7_8253f767b115.slice - libcontainer container kubepods-besteffort-podf16c1f57_0f3e_43a7_8bf7_8253f767b115.slice. Jan 14 06:26:55.182676 kubelet[2961]: I0114 06:26:55.182512 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f16c1f57-0f3e-43a7-8bf7-8253f767b115-tigera-ca-bundle\") pod \"calico-typha-75cddc6d9-gt252\" (UID: \"f16c1f57-0f3e-43a7-8bf7-8253f767b115\") " pod="calico-system/calico-typha-75cddc6d9-gt252" Jan 14 06:26:55.183050 kubelet[2961]: I0114 06:26:55.182994 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdfrl\" (UniqueName: \"kubernetes.io/projected/f16c1f57-0f3e-43a7-8bf7-8253f767b115-kube-api-access-tdfrl\") pod \"calico-typha-75cddc6d9-gt252\" (UID: \"f16c1f57-0f3e-43a7-8bf7-8253f767b115\") " pod="calico-system/calico-typha-75cddc6d9-gt252" Jan 14 06:26:55.184580 kubelet[2961]: I0114 06:26:55.183278 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f16c1f57-0f3e-43a7-8bf7-8253f767b115-typha-certs\") pod \"calico-typha-75cddc6d9-gt252\" (UID: \"f16c1f57-0f3e-43a7-8bf7-8253f767b115\") " pod="calico-system/calico-typha-75cddc6d9-gt252" Jan 14 06:26:55.265240 systemd[1]: Created slice kubepods-besteffort-poddeabdd24_8ab3_4744_9c8a_d760eb1a7fd3.slice - libcontainer container kubepods-besteffort-poddeabdd24_8ab3_4744_9c8a_d760eb1a7fd3.slice. 
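[editor's aside, not part of the log] The repeated PROCTITLE values in the audit entries above are the process command line, hex-encoded by the kernel with NUL bytes separating argv entries. A minimal sketch (assuming nothing beyond the hex string shown in the log) to decode one of them:

```python
# Minimal sketch: decode an audit PROCTITLE value into a readable command line.
# The kernel logs the command line as hex, with argv entries separated by NULs.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

if __name__ == "__main__":
    # PROCTITLE value copied from the audit records above.
    sample = ("69707461626C65732D726573746F7265002D770035002D5700"
              "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
    # Prints: iptables-restore -w 5 -W 100000 --noflush --counters
    print(decode_proctitle(sample))
```

This confirms the audited process is kube-proxy's periodic `iptables-restore -w 5 -W 100000 --noflush --counters` invocation via xtables-nft-multi, which accounts for the recurring NETFILTER_CFG entries.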
Jan 14 06:26:55.284247 kubelet[2961]: I0114 06:26:55.284197 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/deabdd24-8ab3-4744-9c8a-d760eb1a7fd3-cni-net-dir\") pod \"calico-node-f7fv2\" (UID: \"deabdd24-8ab3-4744-9c8a-d760eb1a7fd3\") " pod="calico-system/calico-node-f7fv2" Jan 14 06:26:55.284247 kubelet[2961]: I0114 06:26:55.284249 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/deabdd24-8ab3-4744-9c8a-d760eb1a7fd3-node-certs\") pod \"calico-node-f7fv2\" (UID: \"deabdd24-8ab3-4744-9c8a-d760eb1a7fd3\") " pod="calico-system/calico-node-f7fv2" Jan 14 06:26:55.284445 kubelet[2961]: I0114 06:26:55.284277 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/deabdd24-8ab3-4744-9c8a-d760eb1a7fd3-tigera-ca-bundle\") pod \"calico-node-f7fv2\" (UID: \"deabdd24-8ab3-4744-9c8a-d760eb1a7fd3\") " pod="calico-system/calico-node-f7fv2" Jan 14 06:26:55.284445 kubelet[2961]: I0114 06:26:55.284302 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/deabdd24-8ab3-4744-9c8a-d760eb1a7fd3-var-run-calico\") pod \"calico-node-f7fv2\" (UID: \"deabdd24-8ab3-4744-9c8a-d760eb1a7fd3\") " pod="calico-system/calico-node-f7fv2" Jan 14 06:26:55.284445 kubelet[2961]: I0114 06:26:55.284353 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/deabdd24-8ab3-4744-9c8a-d760eb1a7fd3-xtables-lock\") pod \"calico-node-f7fv2\" (UID: \"deabdd24-8ab3-4744-9c8a-d760eb1a7fd3\") " pod="calico-system/calico-node-f7fv2" Jan 14 06:26:55.284445 kubelet[2961]: I0114 06:26:55.284414 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/deabdd24-8ab3-4744-9c8a-d760eb1a7fd3-cni-bin-dir\") pod \"calico-node-f7fv2\" (UID: \"deabdd24-8ab3-4744-9c8a-d760eb1a7fd3\") " pod="calico-system/calico-node-f7fv2" Jan 14 06:26:55.284445 kubelet[2961]: I0114 06:26:55.284440 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/deabdd24-8ab3-4744-9c8a-d760eb1a7fd3-policysync\") pod \"calico-node-f7fv2\" (UID: \"deabdd24-8ab3-4744-9c8a-d760eb1a7fd3\") " pod="calico-system/calico-node-f7fv2" Jan 14 06:26:55.285592 kubelet[2961]: I0114 06:26:55.284464 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/deabdd24-8ab3-4744-9c8a-d760eb1a7fd3-var-lib-calico\") pod \"calico-node-f7fv2\" (UID: \"deabdd24-8ab3-4744-9c8a-d760eb1a7fd3\") " pod="calico-system/calico-node-f7fv2" Jan 14 06:26:55.285592 kubelet[2961]: I0114 06:26:55.284491 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/deabdd24-8ab3-4744-9c8a-d760eb1a7fd3-cni-log-dir\") pod \"calico-node-f7fv2\" (UID: \"deabdd24-8ab3-4744-9c8a-d760eb1a7fd3\") " pod="calico-system/calico-node-f7fv2" Jan 14 06:26:55.285592 kubelet[2961]: I0114 06:26:55.284515 2961 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvkzn\" (UniqueName: \"kubernetes.io/projected/deabdd24-8ab3-4744-9c8a-d760eb1a7fd3-kube-api-access-fvkzn\") pod \"calico-node-f7fv2\" (UID: \"deabdd24-8ab3-4744-9c8a-d760eb1a7fd3\") " pod="calico-system/calico-node-f7fv2" Jan 14 06:26:55.288316 kubelet[2961]: I0114 06:26:55.284554 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/deabdd24-8ab3-4744-9c8a-d760eb1a7fd3-flexvol-driver-host\") pod \"calico-node-f7fv2\" (UID: \"deabdd24-8ab3-4744-9c8a-d760eb1a7fd3\") " pod="calico-system/calico-node-f7fv2" Jan 14 06:26:55.288316 kubelet[2961]: I0114 06:26:55.287546 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/deabdd24-8ab3-4744-9c8a-d760eb1a7fd3-lib-modules\") pod \"calico-node-f7fv2\" (UID: \"deabdd24-8ab3-4744-9c8a-d760eb1a7fd3\") " pod="calico-system/calico-node-f7fv2" Jan 14 06:26:55.390089 kubelet[2961]: E0114 06:26:55.390046 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.390089 kubelet[2961]: W0114 06:26:55.390081 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.390297 kubelet[2961]: E0114 06:26:55.390148 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.390992 kubelet[2961]: E0114 06:26:55.390958 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.390992 kubelet[2961]: W0114 06:26:55.390987 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.391153 kubelet[2961]: E0114 06:26:55.391005 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.391575 kubelet[2961]: E0114 06:26:55.391529 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.391703 kubelet[2961]: W0114 06:26:55.391552 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.391778 kubelet[2961]: E0114 06:26:55.391706 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.392126 kubelet[2961]: E0114 06:26:55.392089 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.392126 kubelet[2961]: W0114 06:26:55.392122 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.392252 kubelet[2961]: E0114 06:26:55.392138 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.392860 kubelet[2961]: E0114 06:26:55.392826 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.392860 kubelet[2961]: W0114 06:26:55.392845 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.392860 kubelet[2961]: E0114 06:26:55.392860 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.393751 kubelet[2961]: E0114 06:26:55.393705 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.393751 kubelet[2961]: W0114 06:26:55.393718 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.393751 kubelet[2961]: E0114 06:26:55.393733 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.394001 kubelet[2961]: E0114 06:26:55.393980 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.394001 kubelet[2961]: W0114 06:26:55.393998 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.394155 kubelet[2961]: E0114 06:26:55.394013 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.394340 kubelet[2961]: E0114 06:26:55.394318 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.394340 kubelet[2961]: W0114 06:26:55.394338 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.394436 kubelet[2961]: E0114 06:26:55.394353 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.398004 kubelet[2961]: E0114 06:26:55.397749 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.398004 kubelet[2961]: W0114 06:26:55.397769 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.398004 kubelet[2961]: E0114 06:26:55.397785 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.399469 kubelet[2961]: E0114 06:26:55.399432 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.399469 kubelet[2961]: W0114 06:26:55.399452 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.399469 kubelet[2961]: E0114 06:26:55.399467 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.399924 kubelet[2961]: E0114 06:26:55.399900 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.399924 kubelet[2961]: W0114 06:26:55.399922 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.400043 kubelet[2961]: E0114 06:26:55.399937 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.400451 kubelet[2961]: E0114 06:26:55.400331 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.400451 kubelet[2961]: W0114 06:26:55.400350 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.400451 kubelet[2961]: E0114 06:26:55.400365 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.401589 kubelet[2961]: E0114 06:26:55.401495 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.401589 kubelet[2961]: W0114 06:26:55.401517 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.401589 kubelet[2961]: E0114 06:26:55.401534 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.401589 kubelet[2961]: E0114 06:26:55.401526 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:26:55.402004 kubelet[2961]: E0114 06:26:55.401863 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.402004 kubelet[2961]: W0114 06:26:55.401877 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.402004 kubelet[2961]: E0114 06:26:55.401893 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.402157 kubelet[2961]: E0114 06:26:55.402132 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.402157 kubelet[2961]: W0114 06:26:55.402145 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.402246 kubelet[2961]: E0114 06:26:55.402160 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.402936 kubelet[2961]: E0114 06:26:55.402472 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.402936 kubelet[2961]: W0114 06:26:55.402492 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.402936 kubelet[2961]: E0114 06:26:55.402521 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.404648 kubelet[2961]: E0114 06:26:55.403344 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.404648 kubelet[2961]: W0114 06:26:55.403365 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.404648 kubelet[2961]: E0114 06:26:55.403381 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.405071 kubelet[2961]: E0114 06:26:55.405049 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.405141 kubelet[2961]: W0114 06:26:55.405069 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.405141 kubelet[2961]: E0114 06:26:55.405090 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.408516 kubelet[2961]: E0114 06:26:55.408476 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.408516 kubelet[2961]: W0114 06:26:55.408510 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.408690 kubelet[2961]: E0114 06:26:55.408527 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.409186 kubelet[2961]: E0114 06:26:55.409161 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.409186 kubelet[2961]: W0114 06:26:55.409182 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.409318 kubelet[2961]: E0114 06:26:55.409199 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.410798 kubelet[2961]: E0114 06:26:55.410771 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.410798 kubelet[2961]: W0114 06:26:55.410790 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.411037 kubelet[2961]: E0114 06:26:55.410805 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.411648 kubelet[2961]: E0114 06:26:55.411603 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.411648 kubelet[2961]: W0114 06:26:55.411635 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.411730 kubelet[2961]: E0114 06:26:55.411651 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.413421 kubelet[2961]: E0114 06:26:55.413387 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.413421 kubelet[2961]: W0114 06:26:55.413409 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.413721 kubelet[2961]: E0114 06:26:55.413426 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.416329 kubelet[2961]: E0114 06:26:55.416045 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.416329 kubelet[2961]: W0114 06:26:55.416067 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.416329 kubelet[2961]: E0114 06:26:55.416086 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.421835 kubelet[2961]: E0114 06:26:55.421752 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.421835 kubelet[2961]: W0114 06:26:55.421773 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.421835 kubelet[2961]: E0114 06:26:55.421802 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.443777 kubelet[2961]: E0114 06:26:55.442194 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.443777 kubelet[2961]: W0114 06:26:55.442525 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.443777 kubelet[2961]: E0114 06:26:55.442552 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.458505 kubelet[2961]: E0114 06:26:55.458459 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.458505 kubelet[2961]: W0114 06:26:55.458494 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.458780 kubelet[2961]: E0114 06:26:55.458538 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.458905 kubelet[2961]: E0114 06:26:55.458885 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.458905 kubelet[2961]: W0114 06:26:55.458903 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.459045 kubelet[2961]: E0114 06:26:55.458937 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.459288 kubelet[2961]: E0114 06:26:55.459256 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.459351 kubelet[2961]: W0114 06:26:55.459275 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.459351 kubelet[2961]: E0114 06:26:55.459318 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.459816 kubelet[2961]: E0114 06:26:55.459699 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.459816 kubelet[2961]: W0114 06:26:55.459724 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.459816 kubelet[2961]: E0114 06:26:55.459739 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.460245 kubelet[2961]: E0114 06:26:55.460212 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.460302 kubelet[2961]: W0114 06:26:55.460264 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.460302 kubelet[2961]: E0114 06:26:55.460285 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.460923 kubelet[2961]: E0114 06:26:55.460901 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.460923 kubelet[2961]: W0114 06:26:55.460919 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.461056 kubelet[2961]: E0114 06:26:55.460935 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.462628 kubelet[2961]: E0114 06:26:55.461190 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.462628 kubelet[2961]: W0114 06:26:55.461624 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.462628 kubelet[2961]: E0114 06:26:55.461647 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.462628 kubelet[2961]: E0114 06:26:55.461928 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.462628 kubelet[2961]: W0114 06:26:55.461941 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.462628 kubelet[2961]: E0114 06:26:55.461985 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.462628 kubelet[2961]: E0114 06:26:55.462314 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.462628 kubelet[2961]: W0114 06:26:55.462327 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.462628 kubelet[2961]: E0114 06:26:55.462341 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.463703 kubelet[2961]: E0114 06:26:55.463680 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.463703 kubelet[2961]: W0114 06:26:55.463701 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.463831 kubelet[2961]: E0114 06:26:55.463718 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.463986 kubelet[2961]: E0114 06:26:55.463968 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.463986 kubelet[2961]: W0114 06:26:55.463983 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.464116 kubelet[2961]: E0114 06:26:55.464005 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.464284 kubelet[2961]: E0114 06:26:55.464265 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.464284 kubelet[2961]: W0114 06:26:55.464282 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.464396 kubelet[2961]: E0114 06:26:55.464309 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.464571 kubelet[2961]: E0114 06:26:55.464537 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.464649 kubelet[2961]: W0114 06:26:55.464554 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.464649 kubelet[2961]: E0114 06:26:55.464616 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.464884 kubelet[2961]: E0114 06:26:55.464864 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.464884 kubelet[2961]: W0114 06:26:55.464881 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.465001 kubelet[2961]: E0114 06:26:55.464896 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.465179 kubelet[2961]: E0114 06:26:55.465158 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.465179 kubelet[2961]: W0114 06:26:55.465176 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.465286 kubelet[2961]: E0114 06:26:55.465191 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.465802 kubelet[2961]: E0114 06:26:55.465783 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.465802 kubelet[2961]: W0114 06:26:55.465800 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.466016 kubelet[2961]: E0114 06:26:55.465815 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.466753 kubelet[2961]: E0114 06:26:55.466732 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.466753 kubelet[2961]: W0114 06:26:55.466751 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.466886 kubelet[2961]: E0114 06:26:55.466766 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.467056 kubelet[2961]: E0114 06:26:55.467038 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.467056 kubelet[2961]: W0114 06:26:55.467055 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.467235 kubelet[2961]: E0114 06:26:55.467069 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.467321 kubelet[2961]: E0114 06:26:55.467300 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.467321 kubelet[2961]: W0114 06:26:55.467319 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.467549 kubelet[2961]: E0114 06:26:55.467333 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.467633 kubelet[2961]: E0114 06:26:55.467608 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.467633 kubelet[2961]: W0114 06:26:55.467622 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.467715 kubelet[2961]: E0114 06:26:55.467636 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.485847 containerd[1636]: time="2026-01-14T06:26:55.485761600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75cddc6d9-gt252,Uid:f16c1f57-0f3e-43a7-8bf7-8253f767b115,Namespace:calico-system,Attempt:0,}" Jan 14 06:26:55.496244 kubelet[2961]: E0114 06:26:55.496215 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.496244 kubelet[2961]: W0114 06:26:55.496239 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.496865 kubelet[2961]: E0114 06:26:55.496260 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.496865 kubelet[2961]: I0114 06:26:55.496297 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a5a91150-6e37-4bc7-abb4-c895c0d189ea-varrun\") pod \"csi-node-driver-7g86f\" (UID: \"a5a91150-6e37-4bc7-abb4-c895c0d189ea\") " pod="calico-system/csi-node-driver-7g86f" Jan 14 06:26:55.497521 kubelet[2961]: E0114 06:26:55.497478 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.497939 kubelet[2961]: W0114 06:26:55.497625 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.497939 kubelet[2961]: E0114 06:26:55.497654 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.497939 kubelet[2961]: I0114 06:26:55.497697 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmpb6\" (UniqueName: \"kubernetes.io/projected/a5a91150-6e37-4bc7-abb4-c895c0d189ea-kube-api-access-qmpb6\") pod \"csi-node-driver-7g86f\" (UID: \"a5a91150-6e37-4bc7-abb4-c895c0d189ea\") " pod="calico-system/csi-node-driver-7g86f" Jan 14 06:26:55.498888 kubelet[2961]: E0114 06:26:55.498868 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.499302 kubelet[2961]: W0114 06:26:55.499054 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.499782 kubelet[2961]: E0114 06:26:55.499508 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.500443 kubelet[2961]: E0114 06:26:55.500286 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.500443 kubelet[2961]: W0114 06:26:55.500343 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.500443 kubelet[2961]: E0114 06:26:55.500363 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.501218 kubelet[2961]: E0114 06:26:55.501114 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.501218 kubelet[2961]: W0114 06:26:55.501210 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.501588 kubelet[2961]: E0114 06:26:55.501229 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.501588 kubelet[2961]: I0114 06:26:55.501450 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a5a91150-6e37-4bc7-abb4-c895c0d189ea-registration-dir\") pod \"csi-node-driver-7g86f\" (UID: \"a5a91150-6e37-4bc7-abb4-c895c0d189ea\") " pod="calico-system/csi-node-driver-7g86f" Jan 14 06:26:55.502253 kubelet[2961]: E0114 06:26:55.501590 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.502253 kubelet[2961]: W0114 06:26:55.502250 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.502386 kubelet[2961]: E0114 06:26:55.502266 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.502837 kubelet[2961]: E0114 06:26:55.502807 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.502837 kubelet[2961]: W0114 06:26:55.502827 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.503342 kubelet[2961]: E0114 06:26:55.502901 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.504653 kubelet[2961]: E0114 06:26:55.504630 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.504653 kubelet[2961]: W0114 06:26:55.504650 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.504653 kubelet[2961]: E0114 06:26:55.504666 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.504827 kubelet[2961]: I0114 06:26:55.504698 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a5a91150-6e37-4bc7-abb4-c895c0d189ea-socket-dir\") pod \"csi-node-driver-7g86f\" (UID: \"a5a91150-6e37-4bc7-abb4-c895c0d189ea\") " pod="calico-system/csi-node-driver-7g86f" Jan 14 06:26:55.507906 kubelet[2961]: E0114 06:26:55.507843 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.507906 kubelet[2961]: W0114 06:26:55.507866 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.507906 kubelet[2961]: E0114 06:26:55.507882 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.510000 kubelet[2961]: I0114 06:26:55.509913 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a5a91150-6e37-4bc7-abb4-c895c0d189ea-kubelet-dir\") pod \"csi-node-driver-7g86f\" (UID: \"a5a91150-6e37-4bc7-abb4-c895c0d189ea\") " pod="calico-system/csi-node-driver-7g86f" Jan 14 06:26:55.510763 kubelet[2961]: E0114 06:26:55.510740 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.510763 kubelet[2961]: W0114 06:26:55.510759 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.510876 kubelet[2961]: E0114 06:26:55.510787 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.515171 kubelet[2961]: E0114 06:26:55.514818 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.515171 kubelet[2961]: W0114 06:26:55.514836 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.515171 kubelet[2961]: E0114 06:26:55.514850 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.519481 kubelet[2961]: E0114 06:26:55.519079 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.519481 kubelet[2961]: W0114 06:26:55.519097 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.519481 kubelet[2961]: E0114 06:26:55.519113 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.519481 kubelet[2961]: E0114 06:26:55.519391 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.519481 kubelet[2961]: W0114 06:26:55.519404 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.519481 kubelet[2961]: E0114 06:26:55.519430 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.520783 kubelet[2961]: E0114 06:26:55.519810 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.520783 kubelet[2961]: W0114 06:26:55.519826 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.520783 kubelet[2961]: E0114 06:26:55.519889 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.521760 kubelet[2961]: E0114 06:26:55.521727 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.521760 kubelet[2961]: W0114 06:26:55.521757 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.521890 kubelet[2961]: E0114 06:26:55.521772 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.560897 containerd[1636]: time="2026-01-14T06:26:55.560676839Z" level=info msg="connecting to shim 570fc9b5b62fe464d40bca2942a5581e2cda25b8e6f2c5b78cb449fa93d16da0" address="unix:///run/containerd/s/0a9ffdb49793d25cd4816a1d9ecdc71b2a7a6c59b9a35e203e84a2af253f23be" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:26:55.574794 containerd[1636]: time="2026-01-14T06:26:55.574588438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f7fv2,Uid:deabdd24-8ab3-4744-9c8a-d760eb1a7fd3,Namespace:calico-system,Attempt:0,}" Jan 14 06:26:55.611496 kubelet[2961]: E0114 06:26:55.611456 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.611496 kubelet[2961]: W0114 06:26:55.611488 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.612068 kubelet[2961]: E0114 06:26:55.611515 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.612799 kubelet[2961]: E0114 06:26:55.612586 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.612799 kubelet[2961]: W0114 06:26:55.612606 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.612799 kubelet[2961]: E0114 06:26:55.612622 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.614126 kubelet[2961]: E0114 06:26:55.613106 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.614126 kubelet[2961]: W0114 06:26:55.613128 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.614126 kubelet[2961]: E0114 06:26:55.613143 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.614126 kubelet[2961]: E0114 06:26:55.613595 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.614126 kubelet[2961]: W0114 06:26:55.613614 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.614126 kubelet[2961]: E0114 06:26:55.613629 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.614126 kubelet[2961]: E0114 06:26:55.614056 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.614126 kubelet[2961]: W0114 06:26:55.614070 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.614126 kubelet[2961]: E0114 06:26:55.614085 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.615272 kubelet[2961]: E0114 06:26:55.614508 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.615272 kubelet[2961]: W0114 06:26:55.614522 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.615272 kubelet[2961]: E0114 06:26:55.614584 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.615272 kubelet[2961]: E0114 06:26:55.614881 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.615272 kubelet[2961]: W0114 06:26:55.614895 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.615272 kubelet[2961]: E0114 06:26:55.614910 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.615272 kubelet[2961]: E0114 06:26:55.615188 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.615272 kubelet[2961]: W0114 06:26:55.615202 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.615272 kubelet[2961]: E0114 06:26:55.615216 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.616853 kubelet[2961]: E0114 06:26:55.615657 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.616853 kubelet[2961]: W0114 06:26:55.615671 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.616853 kubelet[2961]: E0114 06:26:55.615685 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.616853 kubelet[2961]: E0114 06:26:55.616070 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.616853 kubelet[2961]: W0114 06:26:55.616087 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.616853 kubelet[2961]: E0114 06:26:55.616102 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.616853 kubelet[2961]: E0114 06:26:55.616404 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.616853 kubelet[2961]: W0114 06:26:55.616417 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.616853 kubelet[2961]: E0114 06:26:55.616431 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.616853 kubelet[2961]: E0114 06:26:55.616821 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.617306 kubelet[2961]: W0114 06:26:55.616835 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.617306 kubelet[2961]: E0114 06:26:55.616849 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.617741 kubelet[2961]: E0114 06:26:55.617700 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.617741 kubelet[2961]: W0114 06:26:55.617719 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.617741 kubelet[2961]: E0114 06:26:55.617734 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.618481 kubelet[2961]: E0114 06:26:55.618458 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.618481 kubelet[2961]: W0114 06:26:55.618477 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.619607 kubelet[2961]: E0114 06:26:55.618493 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.619946 kubelet[2961]: E0114 06:26:55.619871 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.619946 kubelet[2961]: W0114 06:26:55.619891 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.619946 kubelet[2961]: E0114 06:26:55.619906 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.620769 kubelet[2961]: E0114 06:26:55.620370 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.620769 kubelet[2961]: W0114 06:26:55.620485 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.620769 kubelet[2961]: E0114 06:26:55.620508 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.621230 kubelet[2961]: E0114 06:26:55.621027 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.621230 kubelet[2961]: W0114 06:26:55.621041 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.621230 kubelet[2961]: E0114 06:26:55.621057 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.622484 kubelet[2961]: E0114 06:26:55.622085 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.622484 kubelet[2961]: W0114 06:26:55.622215 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.622484 kubelet[2961]: E0114 06:26:55.622235 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.624495 kubelet[2961]: E0114 06:26:55.622864 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.624495 kubelet[2961]: W0114 06:26:55.622878 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.624495 kubelet[2961]: E0114 06:26:55.622893 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.624890 kubelet[2961]: E0114 06:26:55.624817 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.624890 kubelet[2961]: W0114 06:26:55.624832 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.624890 kubelet[2961]: E0114 06:26:55.624846 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.625542 kubelet[2961]: E0114 06:26:55.625477 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.625542 kubelet[2961]: W0114 06:26:55.625496 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.625542 kubelet[2961]: E0114 06:26:55.625512 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.629254 kubelet[2961]: E0114 06:26:55.628874 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.629254 kubelet[2961]: W0114 06:26:55.628904 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.629254 kubelet[2961]: E0114 06:26:55.628922 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.630243 kubelet[2961]: E0114 06:26:55.630219 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.630243 kubelet[2961]: W0114 06:26:55.630239 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.630873 kubelet[2961]: E0114 06:26:55.630256 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.630873 kubelet[2961]: E0114 06:26:55.630711 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.630873 kubelet[2961]: W0114 06:26:55.630725 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.630873 kubelet[2961]: E0114 06:26:55.630740 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:26:55.631512 kubelet[2961]: E0114 06:26:55.631158 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.631512 kubelet[2961]: W0114 06:26:55.631172 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.631512 kubelet[2961]: E0114 06:26:55.631204 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.659867 systemd[1]: Started cri-containerd-570fc9b5b62fe464d40bca2942a5581e2cda25b8e6f2c5b78cb449fa93d16da0.scope - libcontainer container 570fc9b5b62fe464d40bca2942a5581e2cda25b8e6f2c5b78cb449fa93d16da0. Jan 14 06:26:55.674407 kubelet[2961]: E0114 06:26:55.674375 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:26:55.674838 kubelet[2961]: W0114 06:26:55.674578 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:26:55.674838 kubelet[2961]: E0114 06:26:55.674633 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:26:55.679758 containerd[1636]: time="2026-01-14T06:26:55.679698308Z" level=info msg="connecting to shim 11da1f6e6570d34ec30b77106134164e31df6986090f84ec0eb489be39927408" address="unix:///run/containerd/s/78a0b0205ccdd775d1fc7b7f412210c39e9428eae9f4511abe68e13a88291a43" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:26:55.705000 audit: BPF prog-id=155 op=LOAD Jan 14 06:26:55.711000 audit: BPF prog-id=156 op=LOAD Jan 14 06:26:55.711000 audit[3479]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186238 a2=98 a3=0 items=0 ppid=3468 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:55.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537306663396235623632666534363464343062636132393432613535 Jan 14 06:26:55.711000 audit: BPF prog-id=156 op=UNLOAD Jan 14 06:26:55.711000 audit[3479]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3468 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:55.711000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537306663396235623632666534363464343062636132393432613535 Jan 14 06:26:55.712000 audit: BPF prog-id=157 op=LOAD Jan 14 06:26:55.712000 audit[3479]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000186488 a2=98 a3=0 items=0 ppid=3468 pid=3479 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:55.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537306663396235623632666534363464343062636132393432613535 Jan 14 06:26:55.712000 audit: BPF prog-id=158 op=LOAD Jan 14 06:26:55.712000 audit[3479]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000186218 a2=98 a3=0 items=0 ppid=3468 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:55.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537306663396235623632666534363464343062636132393432613535 Jan 14 06:26:55.712000 audit: BPF prog-id=158 op=UNLOAD Jan 14 06:26:55.712000 audit[3479]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3468 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:55.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537306663396235623632666534363464343062636132393432613535 Jan 14 06:26:55.712000 audit: BPF prog-id=157 op=UNLOAD Jan 14 06:26:55.712000 audit[3479]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3468 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:55.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537306663396235623632666534363464343062636132393432613535 Jan 14 06:26:55.712000 audit: BPF prog-id=159 op=LOAD Jan 14 06:26:55.712000 audit[3479]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001866e8 a2=98 a3=0 items=0 ppid=3468 pid=3479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:55.712000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537306663396235623632666534363464343062636132393432613535 Jan 14 06:26:55.756329 systemd[1]: Started cri-containerd-11da1f6e6570d34ec30b77106134164e31df6986090f84ec0eb489be39927408.scope - libcontainer container 11da1f6e6570d34ec30b77106134164e31df6986090f84ec0eb489be39927408. 
Jan 14 06:26:55.817000 audit: BPF prog-id=160 op=LOAD Jan 14 06:26:55.818000 audit: BPF prog-id=161 op=LOAD Jan 14 06:26:55.818000 audit[3541]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3527 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:55.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131646131663665363537306433346563333062373731303631333431 Jan 14 06:26:55.819000 audit: BPF prog-id=161 op=UNLOAD Jan 14 06:26:55.819000 audit[3541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3527 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:55.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131646131663665363537306433346563333062373731303631333431 Jan 14 06:26:55.822000 audit: BPF prog-id=162 op=LOAD Jan 14 06:26:55.822000 audit[3541]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3527 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:55.822000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131646131663665363537306433346563333062373731303631333431 Jan 14 06:26:55.823000 audit: BPF prog-id=163 op=LOAD Jan 14 06:26:55.823000 audit[3541]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3527 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:55.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131646131663665363537306433346563333062373731303631333431 Jan 14 06:26:55.823000 audit: BPF prog-id=163 op=UNLOAD Jan 14 06:26:55.823000 audit[3541]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3527 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:55.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131646131663665363537306433346563333062373731303631333431 Jan 14 06:26:55.824000 audit: BPF prog-id=162 op=UNLOAD Jan 14 06:26:55.824000 audit[3541]: SYSCALL 
arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3527 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:55.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131646131663665363537306433346563333062373731303631333431 Jan 14 06:26:55.824000 audit: BPF prog-id=164 op=LOAD Jan 14 06:26:55.824000 audit[3541]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3527 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:55.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131646131663665363537306433346563333062373731303631333431 Jan 14 06:26:55.829104 containerd[1636]: time="2026-01-14T06:26:55.829028782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75cddc6d9-gt252,Uid:f16c1f57-0f3e-43a7-8bf7-8253f767b115,Namespace:calico-system,Attempt:0,} returns sandbox id \"570fc9b5b62fe464d40bca2942a5581e2cda25b8e6f2c5b78cb449fa93d16da0\"" Jan 14 06:26:55.835949 containerd[1636]: time="2026-01-14T06:26:55.835891134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 06:26:55.863605 containerd[1636]: time="2026-01-14T06:26:55.863513952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f7fv2,Uid:deabdd24-8ab3-4744-9c8a-d760eb1a7fd3,Namespace:calico-system,Attempt:0,} returns sandbox id \"11da1f6e6570d34ec30b77106134164e31df6986090f84ec0eb489be39927408\"" Jan 14 06:26:55.977000 audit[3578]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:55.977000 audit[3578]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fffba9fe040 a2=0 a3=7fffba9fe02c items=0 ppid=3077 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:55.977000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:55.980000 audit[3578]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:26:55.980000 audit[3578]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffba9fe040 a2=0 a3=0 items=0 ppid=3077 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:55.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:26:57.283373 kubelet[2961]: E0114 06:26:57.283213 2961 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:26:57.453334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount76948599.mount: Deactivated successfully. Jan 14 06:26:59.283368 kubelet[2961]: E0114 06:26:59.283293 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:26:59.342705 containerd[1636]: time="2026-01-14T06:26:59.342648638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:26:59.344055 containerd[1636]: time="2026-01-14T06:26:59.343825045Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Jan 14 06:26:59.346797 containerd[1636]: time="2026-01-14T06:26:59.346356637Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:26:59.366698 containerd[1636]: time="2026-01-14T06:26:59.366643701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:26:59.368875 containerd[1636]: time="2026-01-14T06:26:59.368842637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.532246194s" Jan 14 06:26:59.368945 containerd[1636]: time="2026-01-14T06:26:59.368881504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 14 06:26:59.370396 containerd[1636]: time="2026-01-14T06:26:59.370354734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 06:26:59.403385 containerd[1636]: time="2026-01-14T06:26:59.402930805Z" level=info msg="CreateContainer within sandbox \"570fc9b5b62fe464d40bca2942a5581e2cda25b8e6f2c5b78cb449fa93d16da0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 06:26:59.414595 containerd[1636]: time="2026-01-14T06:26:59.410770578Z" level=info msg="Container 834311aa41e2764fece6aef8d11a88b2c9070e5111f12b4392e40f10fe59a27e: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:26:59.421187 containerd[1636]: time="2026-01-14T06:26:59.421143451Z" level=info msg="CreateContainer within sandbox \"570fc9b5b62fe464d40bca2942a5581e2cda25b8e6f2c5b78cb449fa93d16da0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"834311aa41e2764fece6aef8d11a88b2c9070e5111f12b4392e40f10fe59a27e\"" Jan 14 06:26:59.422688 containerd[1636]: time="2026-01-14T06:26:59.422230430Z" level=info msg="StartContainer for 
\"834311aa41e2764fece6aef8d11a88b2c9070e5111f12b4392e40f10fe59a27e\"" Jan 14 06:26:59.423855 containerd[1636]: time="2026-01-14T06:26:59.423739307Z" level=info msg="connecting to shim 834311aa41e2764fece6aef8d11a88b2c9070e5111f12b4392e40f10fe59a27e" address="unix:///run/containerd/s/0a9ffdb49793d25cd4816a1d9ecdc71b2a7a6c59b9a35e203e84a2af253f23be" protocol=ttrpc version=3 Jan 14 06:26:59.462824 systemd[1]: Started cri-containerd-834311aa41e2764fece6aef8d11a88b2c9070e5111f12b4392e40f10fe59a27e.scope - libcontainer container 834311aa41e2764fece6aef8d11a88b2c9070e5111f12b4392e40f10fe59a27e. Jan 14 06:26:59.491000 audit: BPF prog-id=165 op=LOAD Jan 14 06:26:59.498903 kernel: kauditd_printk_skb: 64 callbacks suppressed Jan 14 06:26:59.499022 kernel: audit: type=1334 audit(1768372019.491:563): prog-id=165 op=LOAD Jan 14 06:26:59.499000 audit: BPF prog-id=166 op=LOAD Jan 14 06:26:59.501586 kernel: audit: type=1334 audit(1768372019.499:564): prog-id=166 op=LOAD Jan 14 06:26:59.499000 audit[3589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3468 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:59.507593 kernel: audit: type=1300 audit(1768372019.499:564): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=3468 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:59.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833343331316161343165323736346665636536616566386431316138 Jan 14 06:26:59.513792 kernel: audit: type=1327 audit(1768372019.499:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833343331316161343165323736346665636536616566386431316138 Jan 14 06:26:59.499000 audit: BPF prog-id=166 op=UNLOAD Jan 14 06:26:59.499000 audit[3589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3468 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:59.517970 kernel: audit: type=1334 audit(1768372019.499:565): prog-id=166 op=UNLOAD Jan 14 06:26:59.518072 kernel: audit: type=1300 audit(1768372019.499:565): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3468 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:59.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833343331316161343165323736346665636536616566386431316138 Jan 14 06:26:59.523046 kernel: audit: type=1327 audit(1768372019.499:565): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833343331316161343165323736346665636536616566386431316138 Jan 14 06:26:59.499000 audit: BPF prog-id=167 op=LOAD Jan 14 06:26:59.499000 audit[3589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3468 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:59.533855 kernel: audit: type=1334 audit(1768372019.499:566): prog-id=167 op=LOAD Jan 14 06:26:59.533928 kernel: audit: type=1300 audit(1768372019.499:566): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3468 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:59.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833343331316161343165323736346665636536616566386431316138 Jan 14 06:26:59.499000 audit: BPF prog-id=168 op=LOAD Jan 14 06:26:59.499000 audit[3589]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3468 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:59.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833343331316161343165323736346665636536616566386431316138 Jan 14 06:26:59.499000 audit: BPF prog-id=168 op=UNLOAD Jan 14 06:26:59.499000 audit[3589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3468 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:59.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833343331316161343165323736346665636536616566386431316138 Jan 14 06:26:59.499000 audit: BPF prog-id=167 op=UNLOAD Jan 14 06:26:59.499000 audit[3589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3468 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:59.544647 kernel: audit: type=1327 audit(1768372019.499:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833343331316161343165323736346665636536616566386431316138 Jan 14 06:26:59.499000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833343331316161343165323736346665636536616566386431316138 Jan 14 06:26:59.499000 audit: BPF prog-id=169 op=LOAD Jan 14 06:26:59.499000 audit[3589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3468 pid=3589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:26:59.499000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833343331316161343165323736346665636536616566386431316138 Jan 14 06:26:59.579902 containerd[1636]: time="2026-01-14T06:26:59.579781390Z" level=info msg="StartContainer for \"834311aa41e2764fece6aef8d11a88b2c9070e5111f12b4392e40f10fe59a27e\" returns successfully" Jan 14 06:27:00.506301 kubelet[2961]: E0114 06:27:00.506222 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.506301 kubelet[2961]: W0114 06:27:00.506251 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.508765 kubelet[2961]: E0114 06:27:00.507082 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:27:00.508765 kubelet[2961]: E0114 06:27:00.508687 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.508765 kubelet[2961]: W0114 06:27:00.508704 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.508765 kubelet[2961]: E0114 06:27:00.508721 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:27:00.508970 kubelet[2961]: E0114 06:27:00.508961 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.509028 kubelet[2961]: W0114 06:27:00.508974 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.509028 kubelet[2961]: E0114 06:27:00.508988 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:27:00.509258 kubelet[2961]: E0114 06:27:00.509232 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.509258 kubelet[2961]: W0114 06:27:00.509251 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.509719 kubelet[2961]: E0114 06:27:00.509266 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:27:00.509719 kubelet[2961]: E0114 06:27:00.509650 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.509719 kubelet[2961]: W0114 06:27:00.509663 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.509719 kubelet[2961]: E0114 06:27:00.509677 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:27:00.510269 kubelet[2961]: E0114 06:27:00.510248 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.510269 kubelet[2961]: W0114 06:27:00.510266 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.510380 kubelet[2961]: E0114 06:27:00.510285 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:27:00.510639 kubelet[2961]: E0114 06:27:00.510619 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.510639 kubelet[2961]: W0114 06:27:00.510637 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.510764 kubelet[2961]: E0114 06:27:00.510652 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:27:00.511635 kubelet[2961]: E0114 06:27:00.510868 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.511635 kubelet[2961]: W0114 06:27:00.510887 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.511635 kubelet[2961]: E0114 06:27:00.510902 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:27:00.511635 kubelet[2961]: E0114 06:27:00.511184 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.511635 kubelet[2961]: W0114 06:27:00.511202 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.511635 kubelet[2961]: E0114 06:27:00.511250 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:27:00.511635 kubelet[2961]: E0114 06:27:00.511541 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.511635 kubelet[2961]: W0114 06:27:00.511565 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.511635 kubelet[2961]: E0114 06:27:00.511580 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:27:00.512017 kubelet[2961]: E0114 06:27:00.511808 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.512017 kubelet[2961]: W0114 06:27:00.511820 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.512017 kubelet[2961]: E0114 06:27:00.511836 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:27:00.512155 kubelet[2961]: E0114 06:27:00.512035 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.512155 kubelet[2961]: W0114 06:27:00.512047 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.512155 kubelet[2961]: E0114 06:27:00.512060 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:27:00.512331 kubelet[2961]: E0114 06:27:00.512283 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.512331 kubelet[2961]: W0114 06:27:00.512294 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.512331 kubelet[2961]: E0114 06:27:00.512306 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:27:00.512781 kubelet[2961]: E0114 06:27:00.512539 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.512781 kubelet[2961]: W0114 06:27:00.512555 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.512781 kubelet[2961]: E0114 06:27:00.512652 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:27:00.513178 kubelet[2961]: E0114 06:27:00.512912 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.513178 kubelet[2961]: W0114 06:27:00.512924 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.513178 kubelet[2961]: E0114 06:27:00.512943 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:27:00.550454 kubelet[2961]: E0114 06:27:00.550405 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.550454 kubelet[2961]: W0114 06:27:00.550445 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.550745 kubelet[2961]: E0114 06:27:00.550470 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:27:00.551665 kubelet[2961]: E0114 06:27:00.551643 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.551665 kubelet[2961]: W0114 06:27:00.551662 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.551839 kubelet[2961]: E0114 06:27:00.551678 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:27:00.552112 kubelet[2961]: E0114 06:27:00.551965 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.552112 kubelet[2961]: W0114 06:27:00.551979 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.552112 kubelet[2961]: E0114 06:27:00.551993 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 06:27:00.552735 kubelet[2961]: E0114 06:27:00.552576 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:00.552735 kubelet[2961]: W0114 06:27:00.552626 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:00.552735 kubelet[2961]: E0114 06:27:00.552646 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" [the same three-message FlexVolume probe failure (driver-call.go:262, driver-call.go:149, plugins.go:703) repeats a further 14 times between Jan 14 06:27:00.553 and 06:27:00.562; the final occurrence ends] Jan 14 06:27:00.562682 kubelet[2961]: E0114 06:27:00.562643 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 14 06:27:01.283704 kubelet[2961]: E0114 06:27:01.283624 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:27:01.430110 containerd[1636]: time="2026-01-14T06:27:01.429119960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:27:01.430110 containerd[1636]: time="2026-01-14T06:27:01.430055441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4442579" Jan 14 06:27:01.430798 containerd[1636]: time="2026-01-14T06:27:01.430768066Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:27:01.433237 containerd[1636]: time="2026-01-14T06:27:01.433207008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:27:01.434384 containerd[1636]: time="2026-01-14T06:27:01.434329987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.063933484s" Jan 14 06:27:01.434483 containerd[1636]: time="2026-01-14T06:27:01.434386869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 14 06:27:01.440149 containerd[1636]: time="2026-01-14T06:27:01.440118731Z" level=info msg="CreateContainer within sandbox \"11da1f6e6570d34ec30b77106134164e31df6986090f84ec0eb489be39927408\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 06:27:01.448627 kubelet[2961]: I0114 06:27:01.448526 2961 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 06:27:01.452669 containerd[1636]: time="2026-01-14T06:27:01.452630567Z" level=info msg="Container e8de070d67832eb02c1e5e56124d4d2e6ea1a2f5d3d2478892023b41d603f71b: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:27:01.462225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3393191713.mount: Deactivated successfully. 
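The repeated kubelet errors above come from the FlexVolume dynamic-plugin prober: each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ is probed by running its driver binary with the single argument init, and the driver is expected to print a JSON status object to stdout. Here the nodeagent~uds/uds binary does not exist yet (installing it is the job of the Calico flexvol-driver init container created just above from the pod2daemon-flexvol image), so the probe produces no output and the JSON unmarshal fails with "unexpected end of JSON input". Purely as a hypothetical illustration of the driver-side contract (not the actual Calico driver, and not part of this system), a minimal init responder in Go might look like:

package main

// Minimal sketch of a FlexVolume driver entry point. Illustration only:
// the real nodeagent~uds driver referenced in the log is installed by the
// Calico pod2daemon-flexvol image, not by this sketch.

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the shape of the JSON object kubelet expects back
// from a driver call.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println(`{"status":"Failure","message":"no command given"}`)
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// An empty reply here is what produces the "unexpected end of
		// JSON input" unmarshal errors seen in the log.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		fmt.Println(`{"status":"Not supported"}`)
	}
}

Once a binary that answers init this way exists at the probed path, the prober can register the plugin instead of logging these failures.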
Jan 14 06:27:01.474382 containerd[1636]: time="2026-01-14T06:27:01.474113989Z" level=info msg="CreateContainer within sandbox \"11da1f6e6570d34ec30b77106134164e31df6986090f84ec0eb489be39927408\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e8de070d67832eb02c1e5e56124d4d2e6ea1a2f5d3d2478892023b41d603f71b\"" Jan 14 06:27:01.477032 containerd[1636]: time="2026-01-14T06:27:01.476770080Z" level=info msg="StartContainer for \"e8de070d67832eb02c1e5e56124d4d2e6ea1a2f5d3d2478892023b41d603f71b\"" Jan 14 06:27:01.479360 containerd[1636]: time="2026-01-14T06:27:01.479330046Z" level=info msg="connecting to shim e8de070d67832eb02c1e5e56124d4d2e6ea1a2f5d3d2478892023b41d603f71b" address="unix:///run/containerd/s/78a0b0205ccdd775d1fc7b7f412210c39e9428eae9f4511abe68e13a88291a43" protocol=ttrpc version=3 Jan 14 06:27:01.516807 systemd[1]: Started cri-containerd-e8de070d67832eb02c1e5e56124d4d2e6ea1a2f5d3d2478892023b41d603f71b.scope - libcontainer container e8de070d67832eb02c1e5e56124d4d2e6ea1a2f5d3d2478892023b41d603f71b. Jan 14 06:27:01.520490 kubelet[2961]: E0114 06:27:01.520241 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:01.520490 kubelet[2961]: W0114 06:27:01.520274 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:01.520490 kubelet[2961]: E0114 06:27:01.520331 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:27:01.521624 kubelet[2961]: E0114 06:27:01.521103 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:01.521624 kubelet[2961]: W0114 06:27:01.521141 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:01.521624 kubelet[2961]: E0114 06:27:01.521157 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 06:27:01.522176 kubelet[2961]: E0114 06:27:01.522112 2961 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 06:27:01.522176 kubelet[2961]: W0114 06:27:01.522131 2961 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 06:27:01.522707 kubelet[2961]: E0114 06:27:01.522146 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [the same three-message FlexVolume probe failure (driver-call.go:262, driver-call.go:149, plugins.go:703) repeats a further 30 times between Jan 14 06:27:01.523 and 06:27:01.574; the final occurrence ends] Jan 14 06:27:01.574341 kubelet[2961]: E0114 06:27:01.574152 2961 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 14 06:27:01.601000 audit: BPF prog-id=170 op=LOAD Jan 14 06:27:01.601000 audit[3665]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:01.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538646530373064363738333265623032633165356535363132346434 Jan 14 06:27:01.601000 audit: BPF prog-id=171 op=LOAD Jan 14 06:27:01.601000 audit[3665]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:01.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538646530373064363738333265623032633165356535363132346434 Jan 14 06:27:01.601000 audit: BPF prog-id=171 op=UNLOAD Jan 14 06:27:01.601000 audit[3665]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:01.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538646530373064363738333265623032633165356535363132346434 Jan 14 06:27:01.601000 audit: BPF prog-id=170 op=UNLOAD Jan 14 06:27:01.601000 audit[3665]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:01.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538646530373064363738333265623032633165356535363132346434 Jan 14 06:27:01.601000 audit: BPF prog-id=172 op=LOAD Jan 14 06:27:01.601000 audit[3665]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3527 pid=3665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:01.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6538646530373064363738333265623032633165356535363132346434 Jan 14 06:27:01.635002 containerd[1636]: time="2026-01-14T06:27:01.634940047Z" level=info msg="StartContainer for 
\"e8de070d67832eb02c1e5e56124d4d2e6ea1a2f5d3d2478892023b41d603f71b\" returns successfully" Jan 14 06:27:01.651923 systemd[1]: cri-containerd-e8de070d67832eb02c1e5e56124d4d2e6ea1a2f5d3d2478892023b41d603f71b.scope: Deactivated successfully. Jan 14 06:27:01.654000 audit: BPF prog-id=172 op=UNLOAD Jan 14 06:27:01.685258 containerd[1636]: time="2026-01-14T06:27:01.685069221Z" level=info msg="received container exit event container_id:\"e8de070d67832eb02c1e5e56124d4d2e6ea1a2f5d3d2478892023b41d603f71b\" id:\"e8de070d67832eb02c1e5e56124d4d2e6ea1a2f5d3d2478892023b41d603f71b\" pid:3681 exited_at:{seconds:1768372021 nanos:659159400}" Jan 14 06:27:01.725029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8de070d67832eb02c1e5e56124d4d2e6ea1a2f5d3d2478892023b41d603f71b-rootfs.mount: Deactivated successfully. Jan 14 06:27:02.455897 containerd[1636]: time="2026-01-14T06:27:02.455755036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 06:27:02.534324 kubelet[2961]: I0114 06:27:02.533519 2961 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75cddc6d9-gt252" podStartSLOduration=4.998363063 podStartE2EDuration="8.533501089s" podCreationTimestamp="2026-01-14 06:26:54 +0000 UTC" firstStartedPulling="2026-01-14 06:26:55.834725226 +0000 UTC m=+25.820484756" lastFinishedPulling="2026-01-14 06:26:59.369863247 +0000 UTC m=+29.355622782" observedRunningTime="2026-01-14 06:27:00.46534271 +0000 UTC m=+30.451102249" watchObservedRunningTime="2026-01-14 06:27:02.533501089 +0000 UTC m=+32.519260629" Jan 14 06:27:03.282371 kubelet[2961]: E0114 06:27:03.282312 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:27:05.284704 kubelet[2961]: E0114 06:27:05.283736 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:27:07.283632 kubelet[2961]: E0114 06:27:07.283507 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:27:09.283261 kubelet[2961]: E0114 06:27:09.283210 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:27:11.283098 kubelet[2961]: E0114 06:27:11.283036 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:27:12.398915 containerd[1636]: 
time="2026-01-14T06:27:12.398838214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:27:12.400752 containerd[1636]: time="2026-01-14T06:27:12.400700322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442323" Jan 14 06:27:12.401462 containerd[1636]: time="2026-01-14T06:27:12.401396804Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:27:12.403955 containerd[1636]: time="2026-01-14T06:27:12.403897663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:27:12.405092 containerd[1636]: time="2026-01-14T06:27:12.405054614Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 9.949228555s" Jan 14 06:27:12.405175 containerd[1636]: time="2026-01-14T06:27:12.405094087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 14 06:27:12.410595 containerd[1636]: time="2026-01-14T06:27:12.410442477Z" level=info msg="CreateContainer within sandbox \"11da1f6e6570d34ec30b77106134164e31df6986090f84ec0eb489be39927408\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 06:27:12.423998 containerd[1636]: time="2026-01-14T06:27:12.423248605Z" level=info msg="Container 31c063b3fca05e22942f0b9384883933ec180faab1b47d567016f00b0be5d186: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:27:12.436168 containerd[1636]: time="2026-01-14T06:27:12.436087648Z" level=info msg="CreateContainer within sandbox \"11da1f6e6570d34ec30b77106134164e31df6986090f84ec0eb489be39927408\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"31c063b3fca05e22942f0b9384883933ec180faab1b47d567016f00b0be5d186\"" Jan 14 06:27:12.437610 containerd[1636]: time="2026-01-14T06:27:12.436906595Z" level=info msg="StartContainer for \"31c063b3fca05e22942f0b9384883933ec180faab1b47d567016f00b0be5d186\"" Jan 14 06:27:12.438888 containerd[1636]: time="2026-01-14T06:27:12.438857615Z" level=info msg="connecting to shim 31c063b3fca05e22942f0b9384883933ec180faab1b47d567016f00b0be5d186" address="unix:///run/containerd/s/78a0b0205ccdd775d1fc7b7f412210c39e9428eae9f4511abe68e13a88291a43" protocol=ttrpc version=3 Jan 14 06:27:12.492839 systemd[1]: Started cri-containerd-31c063b3fca05e22942f0b9384883933ec180faab1b47d567016f00b0be5d186.scope - libcontainer container 31c063b3fca05e22942f0b9384883933ec180faab1b47d567016f00b0be5d186. 
Jan 14 06:27:12.581000 audit: BPF prog-id=173 op=LOAD Jan 14 06:27:12.584825 kernel: kauditd_printk_skb: 28 callbacks suppressed Jan 14 06:27:12.584937 kernel: audit: type=1334 audit(1768372032.581:577): prog-id=173 op=LOAD Jan 14 06:27:12.581000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3527 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:12.588605 kernel: audit: type=1300 audit(1768372032.581:577): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=3527 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:12.581000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331633036336233666361303565323239343266306239333834383833 Jan 14 06:27:12.594037 kernel: audit: type=1327 audit(1768372032.581:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331633036336233666361303565323239343266306239333834383833 Jan 14 06:27:12.581000 audit: BPF prog-id=174 op=LOAD Jan 14 06:27:12.597919 kernel: audit: type=1334 audit(1768372032.581:578): prog-id=174 op=LOAD Jan 14 06:27:12.581000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3527 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:12.600693 kernel: audit: type=1300 audit(1768372032.581:578): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=3527 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:12.581000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331633036336233666361303565323239343266306239333834383833 Jan 14 06:27:12.606168 kernel: audit: type=1327 audit(1768372032.581:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331633036336233666361303565323239343266306239333834383833 Jan 14 06:27:12.581000 audit: BPF prog-id=174 op=UNLOAD Jan 14 06:27:12.610064 kernel: audit: type=1334 audit(1768372032.581:579): prog-id=174 op=UNLOAD Jan 14 06:27:12.581000 audit[3760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3527 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:12.581000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331633036336233666361303565323239343266306239333834383833 Jan 14 06:27:12.618959 kernel: audit: type=1300 audit(1768372032.581:579): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3527 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:12.619043 kernel: audit: type=1327 audit(1768372032.581:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331633036336233666361303565323239343266306239333834383833 Jan 14 06:27:12.581000 audit: BPF prog-id=173 op=UNLOAD Jan 14 06:27:12.629590 kernel: audit: type=1334 audit(1768372032.581:580): prog-id=173 op=UNLOAD Jan 14 06:27:12.581000 audit[3760]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3527 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:12.581000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331633036336233666361303565323239343266306239333834383833 Jan 14 06:27:12.581000 audit: BPF prog-id=175 op=LOAD Jan 14 06:27:12.581000 audit[3760]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=3527 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:12.581000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331633036336233666361303565323239343266306239333834383833 Jan 14 06:27:12.656039 containerd[1636]: time="2026-01-14T06:27:12.655894030Z" level=info msg="StartContainer for \"31c063b3fca05e22942f0b9384883933ec180faab1b47d567016f00b0be5d186\" returns successfully" Jan 14 06:27:13.320755 kubelet[2961]: E0114 06:27:13.320646 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:27:13.628000 audit: BPF prog-id=175 op=UNLOAD Jan 14 06:27:13.624330 systemd[1]: cri-containerd-31c063b3fca05e22942f0b9384883933ec180faab1b47d567016f00b0be5d186.scope: Deactivated successfully. 
Jan 14 06:27:13.648140 containerd[1636]: time="2026-01-14T06:27:13.632182528Z" level=info msg="received container exit event container_id:\"31c063b3fca05e22942f0b9384883933ec180faab1b47d567016f00b0be5d186\" id:\"31c063b3fca05e22942f0b9384883933ec180faab1b47d567016f00b0be5d186\" pid:3773 exited_at:{seconds:1768372033 nanos:629063401}" Jan 14 06:27:13.626342 systemd[1]: cri-containerd-31c063b3fca05e22942f0b9384883933ec180faab1b47d567016f00b0be5d186.scope: Consumed 725ms CPU time, 158.6M memory peak, 5.3M read from disk, 171.3M written to disk. Jan 14 06:27:13.752208 kubelet[2961]: I0114 06:27:13.751972 2961 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 06:27:13.792520 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31c063b3fca05e22942f0b9384883933ec180faab1b47d567016f00b0be5d186-rootfs.mount: Deactivated successfully. Jan 14 06:27:13.865529 systemd[1]: Created slice kubepods-besteffort-pode59a89b9_4020_44eb_8f82_b847f03cedae.slice - libcontainer container kubepods-besteffort-pode59a89b9_4020_44eb_8f82_b847f03cedae.slice. Jan 14 06:27:13.884196 systemd[1]: Created slice kubepods-besteffort-podf384ec84_0cc5_4b1d_8775_b09d258a1347.slice - libcontainer container kubepods-besteffort-podf384ec84_0cc5_4b1d_8775_b09d258a1347.slice. Jan 14 06:27:13.897938 systemd[1]: Created slice kubepods-besteffort-pode3bb8bbd_f33f_49cb_94d5_84718a161600.slice - libcontainer container kubepods-besteffort-pode3bb8bbd_f33f_49cb_94d5_84718a161600.slice. Jan 14 06:27:13.911398 systemd[1]: Created slice kubepods-burstable-podad3b96aa_084d_4569_8a6a_059f7da03c00.slice - libcontainer container kubepods-burstable-podad3b96aa_084d_4569_8a6a_059f7da03c00.slice. Jan 14 06:27:13.927009 systemd[1]: Created slice kubepods-besteffort-podc58c893f_2e4d_4df6_aa40_06b84b7b6bbc.slice - libcontainer container kubepods-besteffort-podc58c893f_2e4d_4df6_aa40_06b84b7b6bbc.slice. Jan 14 06:27:13.938793 systemd[1]: Created slice kubepods-besteffort-podb2f6e747_eff5_4e8e_b242_bf44361cfc2b.slice - libcontainer container kubepods-besteffort-podb2f6e747_eff5_4e8e_b242_bf44361cfc2b.slice. Jan 14 06:27:13.947742 systemd[1]: Created slice kubepods-burstable-poda2db9329_e716_4806_a33b_bf27ebb68125.slice - libcontainer container kubepods-burstable-poda2db9329_e716_4806_a33b_bf27ebb68125.slice. 
Jan 14 06:27:13.963901 kubelet[2961]: I0114 06:27:13.963775 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c58c893f-2e4d-4df6-aa40-06b84b7b6bbc-goldmane-ca-bundle\") pod \"goldmane-666569f655-brmbk\" (UID: \"c58c893f-2e4d-4df6-aa40-06b84b7b6bbc\") " pod="calico-system/goldmane-666569f655-brmbk" Jan 14 06:27:13.964101 kubelet[2961]: I0114 06:27:13.963938 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f384ec84-0cc5-4b1d-8775-b09d258a1347-whisker-ca-bundle\") pod \"whisker-6c7d44c5fd-djch8\" (UID: \"f384ec84-0cc5-4b1d-8775-b09d258a1347\") " pod="calico-system/whisker-6c7d44c5fd-djch8" Jan 14 06:27:13.964101 kubelet[2961]: I0114 06:27:13.963972 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdvh7\" (UniqueName: \"kubernetes.io/projected/f384ec84-0cc5-4b1d-8775-b09d258a1347-kube-api-access-zdvh7\") pod \"whisker-6c7d44c5fd-djch8\" (UID: \"f384ec84-0cc5-4b1d-8775-b09d258a1347\") " pod="calico-system/whisker-6c7d44c5fd-djch8" Jan 14 06:27:13.964101 kubelet[2961]: I0114 06:27:13.964013 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e3bb8bbd-f33f-49cb-94d5-84718a161600-calico-apiserver-certs\") pod \"calico-apiserver-545d979dcd-spmtb\" (UID: \"e3bb8bbd-f33f-49cb-94d5-84718a161600\") " pod="calico-apiserver/calico-apiserver-545d979dcd-spmtb" Jan 14 06:27:13.964101 kubelet[2961]: I0114 06:27:13.964051 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b2f6e747-eff5-4e8e-b242-bf44361cfc2b-calico-apiserver-certs\") pod \"calico-apiserver-545d979dcd-jdf9d\" (UID: \"b2f6e747-eff5-4e8e-b242-bf44361cfc2b\") " pod="calico-apiserver/calico-apiserver-545d979dcd-jdf9d" Jan 14 06:27:13.964101 kubelet[2961]: I0114 06:27:13.964080 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad3b96aa-084d-4569-8a6a-059f7da03c00-config-volume\") pod \"coredns-674b8bbfcf-dj98n\" (UID: \"ad3b96aa-084d-4569-8a6a-059f7da03c00\") " pod="kube-system/coredns-674b8bbfcf-dj98n" Jan 14 06:27:13.964338 kubelet[2961]: I0114 06:27:13.964105 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4s59\" (UniqueName: \"kubernetes.io/projected/ad3b96aa-084d-4569-8a6a-059f7da03c00-kube-api-access-x4s59\") pod \"coredns-674b8bbfcf-dj98n\" (UID: \"ad3b96aa-084d-4569-8a6a-059f7da03c00\") " pod="kube-system/coredns-674b8bbfcf-dj98n" Jan 14 06:27:13.964338 kubelet[2961]: I0114 06:27:13.964135 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bt9h\" (UniqueName: \"kubernetes.io/projected/c58c893f-2e4d-4df6-aa40-06b84b7b6bbc-kube-api-access-6bt9h\") pod \"goldmane-666569f655-brmbk\" (UID: \"c58c893f-2e4d-4df6-aa40-06b84b7b6bbc\") " pod="calico-system/goldmane-666569f655-brmbk" Jan 14 06:27:13.964338 kubelet[2961]: I0114 06:27:13.964160 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv9nq\" (UniqueName: 
\"kubernetes.io/projected/b2f6e747-eff5-4e8e-b242-bf44361cfc2b-kube-api-access-lv9nq\") pod \"calico-apiserver-545d979dcd-jdf9d\" (UID: \"b2f6e747-eff5-4e8e-b242-bf44361cfc2b\") " pod="calico-apiserver/calico-apiserver-545d979dcd-jdf9d" Jan 14 06:27:13.964338 kubelet[2961]: I0114 06:27:13.964194 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2db9329-e716-4806-a33b-bf27ebb68125-config-volume\") pod \"coredns-674b8bbfcf-r5qc2\" (UID: \"a2db9329-e716-4806-a33b-bf27ebb68125\") " pod="kube-system/coredns-674b8bbfcf-r5qc2" Jan 14 06:27:13.964338 kubelet[2961]: I0114 06:27:13.964227 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e59a89b9-4020-44eb-8f82-b847f03cedae-tigera-ca-bundle\") pod \"calico-kube-controllers-549d4b77bd-jwpts\" (UID: \"e59a89b9-4020-44eb-8f82-b847f03cedae\") " pod="calico-system/calico-kube-controllers-549d4b77bd-jwpts" Jan 14 06:27:13.965091 kubelet[2961]: I0114 06:27:13.964278 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f384ec84-0cc5-4b1d-8775-b09d258a1347-whisker-backend-key-pair\") pod \"whisker-6c7d44c5fd-djch8\" (UID: \"f384ec84-0cc5-4b1d-8775-b09d258a1347\") " pod="calico-system/whisker-6c7d44c5fd-djch8" Jan 14 06:27:13.965091 kubelet[2961]: I0114 06:27:13.964325 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c58c893f-2e4d-4df6-aa40-06b84b7b6bbc-config\") pod \"goldmane-666569f655-brmbk\" (UID: \"c58c893f-2e4d-4df6-aa40-06b84b7b6bbc\") " pod="calico-system/goldmane-666569f655-brmbk" Jan 14 06:27:13.965091 kubelet[2961]: I0114 06:27:13.964364 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c58c893f-2e4d-4df6-aa40-06b84b7b6bbc-goldmane-key-pair\") pod \"goldmane-666569f655-brmbk\" (UID: \"c58c893f-2e4d-4df6-aa40-06b84b7b6bbc\") " pod="calico-system/goldmane-666569f655-brmbk" Jan 14 06:27:13.965091 kubelet[2961]: I0114 06:27:13.964407 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t62h\" (UniqueName: \"kubernetes.io/projected/e59a89b9-4020-44eb-8f82-b847f03cedae-kube-api-access-2t62h\") pod \"calico-kube-controllers-549d4b77bd-jwpts\" (UID: \"e59a89b9-4020-44eb-8f82-b847f03cedae\") " pod="calico-system/calico-kube-controllers-549d4b77bd-jwpts" Jan 14 06:27:13.965091 kubelet[2961]: I0114 06:27:13.964436 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkclb\" (UniqueName: \"kubernetes.io/projected/e3bb8bbd-f33f-49cb-94d5-84718a161600-kube-api-access-bkclb\") pod \"calico-apiserver-545d979dcd-spmtb\" (UID: \"e3bb8bbd-f33f-49cb-94d5-84718a161600\") " pod="calico-apiserver/calico-apiserver-545d979dcd-spmtb" Jan 14 06:27:13.965304 kubelet[2961]: I0114 06:27:13.964462 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js7xq\" (UniqueName: \"kubernetes.io/projected/a2db9329-e716-4806-a33b-bf27ebb68125-kube-api-access-js7xq\") pod \"coredns-674b8bbfcf-r5qc2\" (UID: \"a2db9329-e716-4806-a33b-bf27ebb68125\") " 
pod="kube-system/coredns-674b8bbfcf-r5qc2" Jan 14 06:27:14.181186 containerd[1636]: time="2026-01-14T06:27:14.181049475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549d4b77bd-jwpts,Uid:e59a89b9-4020-44eb-8f82-b847f03cedae,Namespace:calico-system,Attempt:0,}" Jan 14 06:27:14.193254 containerd[1636]: time="2026-01-14T06:27:14.193041641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c7d44c5fd-djch8,Uid:f384ec84-0cc5-4b1d-8775-b09d258a1347,Namespace:calico-system,Attempt:0,}" Jan 14 06:27:14.210934 containerd[1636]: time="2026-01-14T06:27:14.210877290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545d979dcd-spmtb,Uid:e3bb8bbd-f33f-49cb-94d5-84718a161600,Namespace:calico-apiserver,Attempt:0,}" Jan 14 06:27:14.221252 containerd[1636]: time="2026-01-14T06:27:14.221171236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dj98n,Uid:ad3b96aa-084d-4569-8a6a-059f7da03c00,Namespace:kube-system,Attempt:0,}" Jan 14 06:27:14.242178 containerd[1636]: time="2026-01-14T06:27:14.240835910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-brmbk,Uid:c58c893f-2e4d-4df6-aa40-06b84b7b6bbc,Namespace:calico-system,Attempt:0,}" Jan 14 06:27:14.247226 containerd[1636]: time="2026-01-14T06:27:14.247181088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545d979dcd-jdf9d,Uid:b2f6e747-eff5-4e8e-b242-bf44361cfc2b,Namespace:calico-apiserver,Attempt:0,}" Jan 14 06:27:14.257071 containerd[1636]: time="2026-01-14T06:27:14.257020443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r5qc2,Uid:a2db9329-e716-4806-a33b-bf27ebb68125,Namespace:kube-system,Attempt:0,}" Jan 14 06:27:14.547027 containerd[1636]: time="2026-01-14T06:27:14.546717736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 06:27:14.619698 containerd[1636]: time="2026-01-14T06:27:14.619614852Z" level=error msg="Failed to destroy network for sandbox \"7c164dcb7baef997e0dc799120fb87ed63895a5f870286d72fac81348c17c7ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.620049 containerd[1636]: time="2026-01-14T06:27:14.619819842Z" level=error msg="Failed to destroy network for sandbox \"6a4460546fc50d2e440beacfaadc67ae884f48511cc947ba8c5819df12900abc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.632511 containerd[1636]: time="2026-01-14T06:27:14.622705653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545d979dcd-jdf9d,Uid:b2f6e747-eff5-4e8e-b242-bf44361cfc2b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c164dcb7baef997e0dc799120fb87ed63895a5f870286d72fac81348c17c7ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.632939 containerd[1636]: time="2026-01-14T06:27:14.631556872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-brmbk,Uid:c58c893f-2e4d-4df6-aa40-06b84b7b6bbc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"6a4460546fc50d2e440beacfaadc67ae884f48511cc947ba8c5819df12900abc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.647806 kubelet[2961]: E0114 06:27:14.647728 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c164dcb7baef997e0dc799120fb87ed63895a5f870286d72fac81348c17c7ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.650875 kubelet[2961]: E0114 06:27:14.647727 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a4460546fc50d2e440beacfaadc67ae884f48511cc947ba8c5819df12900abc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.650875 kubelet[2961]: E0114 06:27:14.647882 2961 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a4460546fc50d2e440beacfaadc67ae884f48511cc947ba8c5819df12900abc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-brmbk" Jan 14 06:27:14.650875 kubelet[2961]: E0114 06:27:14.647841 2961 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c164dcb7baef997e0dc799120fb87ed63895a5f870286d72fac81348c17c7ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-545d979dcd-jdf9d" Jan 14 06:27:14.650875 kubelet[2961]: E0114 06:27:14.647944 2961 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a4460546fc50d2e440beacfaadc67ae884f48511cc947ba8c5819df12900abc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-brmbk" Jan 14 06:27:14.651665 containerd[1636]: time="2026-01-14T06:27:14.648116631Z" level=error msg="Failed to destroy network for sandbox \"91f92a6104176474f983eec119799d5b1f8041012a21dd1c754bed66a2076f43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.651665 containerd[1636]: time="2026-01-14T06:27:14.651102372Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c7d44c5fd-djch8,Uid:f384ec84-0cc5-4b1d-8775-b09d258a1347,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91f92a6104176474f983eec119799d5b1f8041012a21dd1c754bed66a2076f43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.652124 kubelet[2961]: E0114 06:27:14.647952 2961 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c164dcb7baef997e0dc799120fb87ed63895a5f870286d72fac81348c17c7ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-545d979dcd-jdf9d" Jan 14 06:27:14.652124 kubelet[2961]: E0114 06:27:14.648032 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-545d979dcd-jdf9d_calico-apiserver(b2f6e747-eff5-4e8e-b242-bf44361cfc2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-545d979dcd-jdf9d_calico-apiserver(b2f6e747-eff5-4e8e-b242-bf44361cfc2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c164dcb7baef997e0dc799120fb87ed63895a5f870286d72fac81348c17c7ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-545d979dcd-jdf9d" podUID="b2f6e747-eff5-4e8e-b242-bf44361cfc2b" Jan 14 06:27:14.652124 kubelet[2961]: E0114 06:27:14.648043 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-brmbk_calico-system(c58c893f-2e4d-4df6-aa40-06b84b7b6bbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-brmbk_calico-system(c58c893f-2e4d-4df6-aa40-06b84b7b6bbc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a4460546fc50d2e440beacfaadc67ae884f48511cc947ba8c5819df12900abc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-brmbk" podUID="c58c893f-2e4d-4df6-aa40-06b84b7b6bbc" Jan 14 06:27:14.652341 kubelet[2961]: E0114 06:27:14.651420 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91f92a6104176474f983eec119799d5b1f8041012a21dd1c754bed66a2076f43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.652341 kubelet[2961]: E0114 06:27:14.651459 2961 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91f92a6104176474f983eec119799d5b1f8041012a21dd1c754bed66a2076f43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c7d44c5fd-djch8" Jan 14 06:27:14.652341 kubelet[2961]: E0114 06:27:14.651482 2961 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91f92a6104176474f983eec119799d5b1f8041012a21dd1c754bed66a2076f43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c7d44c5fd-djch8" Jan 14 06:27:14.653605 kubelet[2961]: E0114 06:27:14.651541 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c7d44c5fd-djch8_calico-system(f384ec84-0cc5-4b1d-8775-b09d258a1347)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c7d44c5fd-djch8_calico-system(f384ec84-0cc5-4b1d-8775-b09d258a1347)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91f92a6104176474f983eec119799d5b1f8041012a21dd1c754bed66a2076f43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c7d44c5fd-djch8" podUID="f384ec84-0cc5-4b1d-8775-b09d258a1347" Jan 14 06:27:14.678820 containerd[1636]: time="2026-01-14T06:27:14.678749008Z" level=error msg="Failed to destroy network for sandbox \"ef6b3f048f1cac1a80d1d60af84b9decb189efe7b59667c814c41cdda165fdfe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.679328 containerd[1636]: time="2026-01-14T06:27:14.678749008Z" level=error msg="Failed to destroy network for sandbox \"f072738e1a95d1b33615b4b8b882c17c572fc30712fcb49c67b090c3974fa6f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.681314 containerd[1636]: time="2026-01-14T06:27:14.681260818Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r5qc2,Uid:a2db9329-e716-4806-a33b-bf27ebb68125,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef6b3f048f1cac1a80d1d60af84b9decb189efe7b59667c814c41cdda165fdfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.682096 kubelet[2961]: E0114 06:27:14.681801 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef6b3f048f1cac1a80d1d60af84b9decb189efe7b59667c814c41cdda165fdfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.682096 kubelet[2961]: E0114 06:27:14.681862 2961 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef6b3f048f1cac1a80d1d60af84b9decb189efe7b59667c814c41cdda165fdfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r5qc2" Jan 14 06:27:14.682096 kubelet[2961]: E0114 06:27:14.681892 2961 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef6b3f048f1cac1a80d1d60af84b9decb189efe7b59667c814c41cdda165fdfe\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-r5qc2" Jan 14 06:27:14.682322 kubelet[2961]: E0114 06:27:14.681964 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-r5qc2_kube-system(a2db9329-e716-4806-a33b-bf27ebb68125)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-r5qc2_kube-system(a2db9329-e716-4806-a33b-bf27ebb68125)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef6b3f048f1cac1a80d1d60af84b9decb189efe7b59667c814c41cdda165fdfe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-r5qc2" podUID="a2db9329-e716-4806-a33b-bf27ebb68125" Jan 14 06:27:14.682785 containerd[1636]: time="2026-01-14T06:27:14.682678667Z" level=error msg="Failed to destroy network for sandbox \"f58e9ef502ad6ac89cf9c9e1079bd442f643f62cba75ced65e42b5c087aefd2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.683080 containerd[1636]: time="2026-01-14T06:27:14.683039580Z" level=error msg="Failed to destroy network for sandbox \"5b1ed2899d8504842808a5c7f599682fddcbd95b6485a0bf0ae90e421f0c5c5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.683424 containerd[1636]: time="2026-01-14T06:27:14.683373531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545d979dcd-spmtb,Uid:e3bb8bbd-f33f-49cb-94d5-84718a161600,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f072738e1a95d1b33615b4b8b882c17c572fc30712fcb49c67b090c3974fa6f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.684321 kubelet[2961]: E0114 06:27:14.684250 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f072738e1a95d1b33615b4b8b882c17c572fc30712fcb49c67b090c3974fa6f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.684321 kubelet[2961]: E0114 06:27:14.684302 2961 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f072738e1a95d1b33615b4b8b882c17c572fc30712fcb49c67b090c3974fa6f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-545d979dcd-spmtb" Jan 14 06:27:14.685051 kubelet[2961]: E0114 06:27:14.684330 2961 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f072738e1a95d1b33615b4b8b882c17c572fc30712fcb49c67b090c3974fa6f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-545d979dcd-spmtb" Jan 14 06:27:14.685051 kubelet[2961]: E0114 06:27:14.684394 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-545d979dcd-spmtb_calico-apiserver(e3bb8bbd-f33f-49cb-94d5-84718a161600)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-545d979dcd-spmtb_calico-apiserver(e3bb8bbd-f33f-49cb-94d5-84718a161600)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f072738e1a95d1b33615b4b8b882c17c572fc30712fcb49c67b090c3974fa6f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-545d979dcd-spmtb" podUID="e3bb8bbd-f33f-49cb-94d5-84718a161600" Jan 14 06:27:14.686082 containerd[1636]: time="2026-01-14T06:27:14.686045802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549d4b77bd-jwpts,Uid:e59a89b9-4020-44eb-8f82-b847f03cedae,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f58e9ef502ad6ac89cf9c9e1079bd442f643f62cba75ced65e42b5c087aefd2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.688240 kubelet[2961]: E0114 06:27:14.687958 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f58e9ef502ad6ac89cf9c9e1079bd442f643f62cba75ced65e42b5c087aefd2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.688409 kubelet[2961]: E0114 06:27:14.688329 2961 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f58e9ef502ad6ac89cf9c9e1079bd442f643f62cba75ced65e42b5c087aefd2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-549d4b77bd-jwpts" Jan 14 06:27:14.688604 kubelet[2961]: E0114 06:27:14.688502 2961 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f58e9ef502ad6ac89cf9c9e1079bd442f643f62cba75ced65e42b5c087aefd2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-549d4b77bd-jwpts" Jan 14 06:27:14.688859 kubelet[2961]: E0114 06:27:14.688738 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-549d4b77bd-jwpts_calico-system(e59a89b9-4020-44eb-8f82-b847f03cedae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-549d4b77bd-jwpts_calico-system(e59a89b9-4020-44eb-8f82-b847f03cedae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f58e9ef502ad6ac89cf9c9e1079bd442f643f62cba75ced65e42b5c087aefd2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-549d4b77bd-jwpts" podUID="e59a89b9-4020-44eb-8f82-b847f03cedae" Jan 14 06:27:14.689032 containerd[1636]: time="2026-01-14T06:27:14.688911854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dj98n,Uid:ad3b96aa-084d-4569-8a6a-059f7da03c00,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b1ed2899d8504842808a5c7f599682fddcbd95b6485a0bf0ae90e421f0c5c5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.689327 kubelet[2961]: E0114 06:27:14.689294 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b1ed2899d8504842808a5c7f599682fddcbd95b6485a0bf0ae90e421f0c5c5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:14.689396 kubelet[2961]: E0114 06:27:14.689338 2961 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b1ed2899d8504842808a5c7f599682fddcbd95b6485a0bf0ae90e421f0c5c5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dj98n" Jan 14 06:27:14.689396 kubelet[2961]: E0114 06:27:14.689361 2961 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b1ed2899d8504842808a5c7f599682fddcbd95b6485a0bf0ae90e421f0c5c5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dj98n" Jan 14 06:27:14.689514 kubelet[2961]: E0114 06:27:14.689402 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dj98n_kube-system(ad3b96aa-084d-4569-8a6a-059f7da03c00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dj98n_kube-system(ad3b96aa-084d-4569-8a6a-059f7da03c00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b1ed2899d8504842808a5c7f599682fddcbd95b6485a0bf0ae90e421f0c5c5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dj98n" podUID="ad3b96aa-084d-4569-8a6a-059f7da03c00" Jan 14 06:27:15.290717 systemd[1]: Created slice kubepods-besteffort-poda5a91150_6e37_4bc7_abb4_c895c0d189ea.slice - libcontainer container kubepods-besteffort-poda5a91150_6e37_4bc7_abb4_c895c0d189ea.slice. 
Jan 14 06:27:15.295756 containerd[1636]: time="2026-01-14T06:27:15.295530520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7g86f,Uid:a5a91150-6e37-4bc7-abb4-c895c0d189ea,Namespace:calico-system,Attempt:0,}" Jan 14 06:27:15.385206 containerd[1636]: time="2026-01-14T06:27:15.385076834Z" level=error msg="Failed to destroy network for sandbox \"6a6a307e14d95c8811559adb87d32b2517fbaa3f5e24e756de32eb6da8b75dbd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:15.387963 systemd[1]: run-netns-cni\x2d470bab40\x2d2967\x2db5b9\x2d74db\x2d03ba8cf4901f.mount: Deactivated successfully. Jan 14 06:27:15.389756 containerd[1636]: time="2026-01-14T06:27:15.389613589Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7g86f,Uid:a5a91150-6e37-4bc7-abb4-c895c0d189ea,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a6a307e14d95c8811559adb87d32b2517fbaa3f5e24e756de32eb6da8b75dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:15.390340 kubelet[2961]: E0114 06:27:15.390289 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a6a307e14d95c8811559adb87d32b2517fbaa3f5e24e756de32eb6da8b75dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:15.390527 kubelet[2961]: E0114 06:27:15.390471 2961 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a6a307e14d95c8811559adb87d32b2517fbaa3f5e24e756de32eb6da8b75dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7g86f" Jan 14 06:27:15.390770 kubelet[2961]: E0114 06:27:15.390647 2961 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a6a307e14d95c8811559adb87d32b2517fbaa3f5e24e756de32eb6da8b75dbd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7g86f" Jan 14 06:27:15.391013 kubelet[2961]: E0114 06:27:15.390904 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7g86f_calico-system(a5a91150-6e37-4bc7-abb4-c895c0d189ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7g86f_calico-system(a5a91150-6e37-4bc7-abb4-c895c0d189ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a6a307e14d95c8811559adb87d32b2517fbaa3f5e24e756de32eb6da8b75dbd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" 
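[Editorial note] Every sandbox failure above shares one precondition: the Calico CNI plugin resolves the node name from /var/lib/calico/nodename, a file the calico/node container writes once it is running, and it aborts both "add" and "delete" if that file is absent. A minimal sketch of that check (an assumption for illustration, not Calico's actual source) looks like:

```go
// Sketch of the nodename lookup the logged CNI errors refer to.
// Hypothetical helper; paths and wording mirror the log, not Calico's code.
package main

import (
	"fmt"
	"os"
	"strings"
)

func nodenameFromFile(path string) (string, error) {
	if _, err := os.Stat(path); err != nil {
		// Mirrors the condition reported repeatedly above:
		// "stat /var/lib/calico/nodename: no such file or directory".
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodenameFromFile("/var/lib/calico/nodename")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("node name:", name)
}
```

Once the calico-node container started (later in this log), the file appears and the retries stop failing for this reason.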
Jan 14 06:27:18.296053 kubelet[2961]: I0114 06:27:18.295968 2961 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 06:27:18.411000 audit[4020]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4020 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:18.415750 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 06:27:18.415855 kernel: audit: type=1325 audit(1768372038.411:583): table=filter:119 family=2 entries=21 op=nft_register_rule pid=4020 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:18.411000 audit[4020]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc52e30bc0 a2=0 a3=7ffc52e30bac items=0 ppid=3077 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:18.428652 kernel: audit: type=1300 audit(1768372038.411:583): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffc52e30bc0 a2=0 a3=7ffc52e30bac items=0 ppid=3077 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:18.411000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:18.433651 kernel: audit: type=1327 audit(1768372038.411:583): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:18.423000 audit[4020]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4020 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:18.439600 kernel: audit: type=1325 audit(1768372038.423:584): table=nat:120 family=2 entries=19 op=nft_register_chain pid=4020 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:18.423000 audit[4020]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc52e30bc0 a2=0 a3=7ffc52e30bac items=0 ppid=3077 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:18.446611 kernel: audit: type=1300 audit(1768372038.423:584): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffc52e30bc0 a2=0 a3=7ffc52e30bac items=0 ppid=3077 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:18.423000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:18.452662 kernel: audit: type=1327 audit(1768372038.423:584): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:20.508040 systemd[1]: Started sshd@10-10.230.48.98:22-64.225.73.213:58692.service - OpenSSH per-connection server daemon (64.225.73.213:58692). Jan 14 06:27:20.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.48.98:22-64.225.73.213:58692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 06:27:20.517673 kernel: audit: type=1130 audit(1768372040.507:585): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.48.98:22-64.225.73.213:58692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:27:20.760292 sshd[4026]: Invalid user mysql from 64.225.73.213 port 58692 Jan 14 06:27:20.788852 sshd[4026]: Connection closed by invalid user mysql 64.225.73.213 port 58692 [preauth] Jan 14 06:27:20.789000 audit[4026]: USER_ERR pid=4026 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:27:20.799670 kernel: audit: type=1109 audit(1768372040.789:586): pid=4026 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:27:20.797778 systemd[1]: sshd@10-10.230.48.98:22-64.225.73.213:58692.service: Deactivated successfully. Jan 14 06:27:20.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.48.98:22-64.225.73.213:58692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:27:20.809610 kernel: audit: type=1131 audit(1768372040.797:587): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.48.98:22-64.225.73.213:58692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:27:25.284859 containerd[1636]: time="2026-01-14T06:27:25.284768871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545d979dcd-jdf9d,Uid:b2f6e747-eff5-4e8e-b242-bf44361cfc2b,Namespace:calico-apiserver,Attempt:0,}" Jan 14 06:27:25.469066 containerd[1636]: time="2026-01-14T06:27:25.468967841Z" level=error msg="Failed to destroy network for sandbox \"55b7e1226f3f0347d15079e2b8cbddbeb17f631f26106eadeeec2b5cd930e6b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:25.474018 systemd[1]: run-netns-cni\x2dbdb8f84b\x2dd715\x2d2571\x2dafe7\x2d707d010e01b8.mount: Deactivated successfully. 
Jan 14 06:27:25.476273 containerd[1636]: time="2026-01-14T06:27:25.474040606Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545d979dcd-jdf9d,Uid:b2f6e747-eff5-4e8e-b242-bf44361cfc2b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"55b7e1226f3f0347d15079e2b8cbddbeb17f631f26106eadeeec2b5cd930e6b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:25.476632 kubelet[2961]: E0114 06:27:25.476236 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55b7e1226f3f0347d15079e2b8cbddbeb17f631f26106eadeeec2b5cd930e6b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:25.476632 kubelet[2961]: E0114 06:27:25.476375 2961 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55b7e1226f3f0347d15079e2b8cbddbeb17f631f26106eadeeec2b5cd930e6b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-545d979dcd-jdf9d" Jan 14 06:27:25.476632 kubelet[2961]: E0114 06:27:25.476427 2961 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55b7e1226f3f0347d15079e2b8cbddbeb17f631f26106eadeeec2b5cd930e6b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-545d979dcd-jdf9d" Jan 14 06:27:25.479262 kubelet[2961]: E0114 06:27:25.478252 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-545d979dcd-jdf9d_calico-apiserver(b2f6e747-eff5-4e8e-b242-bf44361cfc2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-545d979dcd-jdf9d_calico-apiserver(b2f6e747-eff5-4e8e-b242-bf44361cfc2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55b7e1226f3f0347d15079e2b8cbddbeb17f631f26106eadeeec2b5cd930e6b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-545d979dcd-jdf9d" podUID="b2f6e747-eff5-4e8e-b242-bf44361cfc2b" Jan 14 06:27:26.286594 containerd[1636]: time="2026-01-14T06:27:26.285785511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dj98n,Uid:ad3b96aa-084d-4569-8a6a-059f7da03c00,Namespace:kube-system,Attempt:0,}" Jan 14 06:27:26.455051 containerd[1636]: time="2026-01-14T06:27:26.454930101Z" level=error msg="Failed to destroy network for sandbox \"b0cfaaf9a45c2085a38d263f1b578598a920f2caaf3348e2abfde610f7235a60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:26.459541 systemd[1]: 
run-netns-cni\x2d39a7a3a3\x2d57cb\x2d5e29\x2d6ec1\x2df6cd3c5353cf.mount: Deactivated successfully. Jan 14 06:27:26.462580 containerd[1636]: time="2026-01-14T06:27:26.461710133Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dj98n,Uid:ad3b96aa-084d-4569-8a6a-059f7da03c00,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0cfaaf9a45c2085a38d263f1b578598a920f2caaf3348e2abfde610f7235a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:26.463450 kubelet[2961]: E0114 06:27:26.463382 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0cfaaf9a45c2085a38d263f1b578598a920f2caaf3348e2abfde610f7235a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:26.463971 kubelet[2961]: E0114 06:27:26.463932 2961 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0cfaaf9a45c2085a38d263f1b578598a920f2caaf3348e2abfde610f7235a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dj98n" Jan 14 06:27:26.464149 kubelet[2961]: E0114 06:27:26.464112 2961 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0cfaaf9a45c2085a38d263f1b578598a920f2caaf3348e2abfde610f7235a60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dj98n" Jan 14 06:27:26.464358 kubelet[2961]: E0114 06:27:26.464314 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dj98n_kube-system(ad3b96aa-084d-4569-8a6a-059f7da03c00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dj98n_kube-system(ad3b96aa-084d-4569-8a6a-059f7da03c00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0cfaaf9a45c2085a38d263f1b578598a920f2caaf3348e2abfde610f7235a60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dj98n" podUID="ad3b96aa-084d-4569-8a6a-059f7da03c00" Jan 14 06:27:27.284161 containerd[1636]: time="2026-01-14T06:27:27.284110101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-brmbk,Uid:c58c893f-2e4d-4df6-aa40-06b84b7b6bbc,Namespace:calico-system,Attempt:0,}" Jan 14 06:27:27.413684 containerd[1636]: time="2026-01-14T06:27:27.413622126Z" level=error msg="Failed to destroy network for sandbox \"d3f3cae4b744c909275a543082411b24abe9aab7f34b0a38c914c08be8574e2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:27.417539 
containerd[1636]: time="2026-01-14T06:27:27.417368256Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-brmbk,Uid:c58c893f-2e4d-4df6-aa40-06b84b7b6bbc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f3cae4b744c909275a543082411b24abe9aab7f34b0a38c914c08be8574e2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:27.418308 systemd[1]: run-netns-cni\x2dd1561927\x2db888\x2db158\x2d3f2a\x2d178617d2184d.mount: Deactivated successfully. Jan 14 06:27:27.419717 kubelet[2961]: E0114 06:27:27.418525 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f3cae4b744c909275a543082411b24abe9aab7f34b0a38c914c08be8574e2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:27.419717 kubelet[2961]: E0114 06:27:27.418626 2961 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f3cae4b744c909275a543082411b24abe9aab7f34b0a38c914c08be8574e2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-brmbk" Jan 14 06:27:27.419717 kubelet[2961]: E0114 06:27:27.418660 2961 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f3cae4b744c909275a543082411b24abe9aab7f34b0a38c914c08be8574e2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-brmbk" Jan 14 06:27:27.420169 kubelet[2961]: E0114 06:27:27.419183 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-brmbk_calico-system(c58c893f-2e4d-4df6-aa40-06b84b7b6bbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-brmbk_calico-system(c58c893f-2e4d-4df6-aa40-06b84b7b6bbc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d3f3cae4b744c909275a543082411b24abe9aab7f34b0a38c914c08be8574e2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-brmbk" podUID="c58c893f-2e4d-4df6-aa40-06b84b7b6bbc" Jan 14 06:27:27.719655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2215888237.mount: Deactivated successfully. 
Jan 14 06:27:27.780662 containerd[1636]: time="2026-01-14T06:27:27.779600482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:27:27.797420 containerd[1636]: time="2026-01-14T06:27:27.797127462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Jan 14 06:27:27.821354 containerd[1636]: time="2026-01-14T06:27:27.821283056Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:27:27.824312 containerd[1636]: time="2026-01-14T06:27:27.824255987Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 06:27:27.825593 containerd[1636]: time="2026-01-14T06:27:27.825170828Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 13.278389316s" Jan 14 06:27:27.838687 containerd[1636]: time="2026-01-14T06:27:27.838604608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 14 06:27:27.902732 containerd[1636]: time="2026-01-14T06:27:27.902635805Z" level=info msg="CreateContainer within sandbox \"11da1f6e6570d34ec30b77106134164e31df6986090f84ec0eb489be39927408\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 06:27:28.017614 containerd[1636]: time="2026-01-14T06:27:28.017011079Z" level=info msg="Container d32ea65f6f3569e20e816a5f72b09e5c233dae1a1d63cc96c2859daf28783c39: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:27:28.091462 containerd[1636]: time="2026-01-14T06:27:28.091069576Z" level=info msg="CreateContainer within sandbox \"11da1f6e6570d34ec30b77106134164e31df6986090f84ec0eb489be39927408\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d32ea65f6f3569e20e816a5f72b09e5c233dae1a1d63cc96c2859daf28783c39\"" Jan 14 06:27:28.094720 containerd[1636]: time="2026-01-14T06:27:28.094434513Z" level=info msg="StartContainer for \"d32ea65f6f3569e20e816a5f72b09e5c233dae1a1d63cc96c2859daf28783c39\"" Jan 14 06:27:28.106162 containerd[1636]: time="2026-01-14T06:27:28.106060657Z" level=info msg="connecting to shim d32ea65f6f3569e20e816a5f72b09e5c233dae1a1d63cc96c2859daf28783c39" address="unix:///run/containerd/s/78a0b0205ccdd775d1fc7b7f412210c39e9428eae9f4511abe68e13a88291a43" protocol=ttrpc version=3 Jan 14 06:27:28.287432 containerd[1636]: time="2026-01-14T06:27:28.286886109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c7d44c5fd-djch8,Uid:f384ec84-0cc5-4b1d-8775-b09d258a1347,Namespace:calico-system,Attempt:0,}" Jan 14 06:27:28.308075 systemd[1]: Started cri-containerd-d32ea65f6f3569e20e816a5f72b09e5c233dae1a1d63cc96c2859daf28783c39.scope - libcontainer container d32ea65f6f3569e20e816a5f72b09e5c233dae1a1d63cc96c2859daf28783c39. Jan 14 06:27:28.423893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2660906830.mount: Deactivated successfully. 
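[Editorial note] The "Pulled image" entry above reports both the byte count and the overall pull duration, so the effective transfer rate can be read straight off the record; a quick back-of-the-envelope check (not part of the log):

```go
// Editorial arithmetic from the containerd "Pulled image" entry above.
package main

import "fmt"

func main() {
	const bytes = 156883537.0    // size "156883537" reported for calico/node:v3.30.4
	const seconds = 13.278389316 // "in 13.278389316s" reported by containerd
	// Roughly 11.8 MB/s end to end for this pull, as reported by containerd.
	fmt.Printf("effective pull rate: %.1f MB/s\n", bytes/seconds/1e6)
}
```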
Jan 14 06:27:28.441195 containerd[1636]: time="2026-01-14T06:27:28.441107720Z" level=error msg="Failed to destroy network for sandbox \"cbbe003767129868340c286889d6da612c266267e3f328d0cc8f221bdd7cdb72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:28.446509 systemd[1]: run-netns-cni\x2dfa217257\x2d1d09\x2d4c86\x2d7bc0\x2ddcd8a3d0cbea.mount: Deactivated successfully. Jan 14 06:27:28.447225 containerd[1636]: time="2026-01-14T06:27:28.447166681Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c7d44c5fd-djch8,Uid:f384ec84-0cc5-4b1d-8775-b09d258a1347,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbbe003767129868340c286889d6da612c266267e3f328d0cc8f221bdd7cdb72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:28.449016 kubelet[2961]: E0114 06:27:28.448836 2961 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbbe003767129868340c286889d6da612c266267e3f328d0cc8f221bdd7cdb72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 06:27:28.450425 kubelet[2961]: E0114 06:27:28.449450 2961 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbbe003767129868340c286889d6da612c266267e3f328d0cc8f221bdd7cdb72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c7d44c5fd-djch8" Jan 14 06:27:28.450425 kubelet[2961]: E0114 06:27:28.449502 2961 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbbe003767129868340c286889d6da612c266267e3f328d0cc8f221bdd7cdb72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c7d44c5fd-djch8" Jan 14 06:27:28.450425 kubelet[2961]: E0114 06:27:28.449659 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c7d44c5fd-djch8_calico-system(f384ec84-0cc5-4b1d-8775-b09d258a1347)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c7d44c5fd-djch8_calico-system(f384ec84-0cc5-4b1d-8775-b09d258a1347)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbbe003767129868340c286889d6da612c266267e3f328d0cc8f221bdd7cdb72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c7d44c5fd-djch8" podUID="f384ec84-0cc5-4b1d-8775-b09d258a1347" Jan 14 06:27:28.463000 audit: BPF prog-id=176 op=LOAD Jan 14 06:27:28.463000 audit[4111]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3527 pid=4111 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:28.473254 kernel: audit: type=1334 audit(1768372048.463:588): prog-id=176 op=LOAD Jan 14 06:27:28.473641 kernel: audit: type=1300 audit(1768372048.463:588): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=3527 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:28.463000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433326561363566366633353639653230653831366135663732623039 Jan 14 06:27:28.478655 kernel: audit: type=1327 audit(1768372048.463:588): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433326561363566366633353639653230653831366135663732623039 Jan 14 06:27:28.468000 audit: BPF prog-id=177 op=LOAD Jan 14 06:27:28.483396 kernel: audit: type=1334 audit(1768372048.468:589): prog-id=177 op=LOAD Jan 14 06:27:28.483463 kernel: audit: type=1300 audit(1768372048.468:589): arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3527 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:28.468000 audit[4111]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=3527 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:28.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433326561363566366633353639653230653831366135663732623039 Jan 14 06:27:28.490562 kernel: audit: type=1327 audit(1768372048.468:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433326561363566366633353639653230653831366135663732623039 Jan 14 06:27:28.468000 audit: BPF prog-id=177 op=UNLOAD Jan 14 06:27:28.495405 kernel: audit: type=1334 audit(1768372048.468:590): prog-id=177 op=UNLOAD Jan 14 06:27:28.495654 kernel: audit: type=1300 audit(1768372048.468:590): arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3527 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:28.468000 audit[4111]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3527 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
06:27:28.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433326561363566366633353639653230653831366135663732623039 Jan 14 06:27:28.513299 kernel: audit: type=1327 audit(1768372048.468:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433326561363566366633353639653230653831366135663732623039 Jan 14 06:27:28.513811 kernel: audit: type=1334 audit(1768372048.468:591): prog-id=176 op=UNLOAD Jan 14 06:27:28.468000 audit: BPF prog-id=176 op=UNLOAD Jan 14 06:27:28.468000 audit[4111]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3527 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:28.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433326561363566366633353639653230653831366135663732623039 Jan 14 06:27:28.468000 audit: BPF prog-id=178 op=LOAD Jan 14 06:27:28.468000 audit[4111]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=3527 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:28.468000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433326561363566366633353639653230653831366135663732623039 Jan 14 06:27:28.570703 containerd[1636]: time="2026-01-14T06:27:28.570489901Z" level=info msg="StartContainer for \"d32ea65f6f3569e20e816a5f72b09e5c233dae1a1d63cc96c2859daf28783c39\" returns successfully" Jan 14 06:27:28.885471 kubelet[2961]: I0114 06:27:28.883686 2961 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f7fv2" podStartSLOduration=1.9033057599999998 podStartE2EDuration="33.876230536s" podCreationTimestamp="2026-01-14 06:26:55 +0000 UTC" firstStartedPulling="2026-01-14 06:26:55.867135483 +0000 UTC m=+25.852895009" lastFinishedPulling="2026-01-14 06:27:27.840060247 +0000 UTC m=+57.825819785" observedRunningTime="2026-01-14 06:27:28.81907504 +0000 UTC m=+58.804834595" watchObservedRunningTime="2026-01-14 06:27:28.876230536 +0000 UTC m=+58.861990075" Jan 14 06:27:29.089332 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 06:27:29.089620 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
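[Editorial note] The pod_startup_latency_tracker entry above prints two derived figures that are consistent with its own timestamps: the end-to-end duration is the observed running time minus podCreationTimestamp, and the SLO duration appears to exclude the image-pull window (lastFinishedPulling minus firstStartedPulling). A small cross-check using only values from that entry (an editorial reading, not the kubelet's exact formula):

```go
// Cross-check of the calico-node-f7fv2 startup figures; all inputs are
// seconds after podCreationTimestamp (06:26:55 UTC), read from the log entry.
package main

import "fmt"

func main() {
	const (
		firstStartedPulling  = 0.867135483  // 06:26:55.867135483
		lastFinishedPulling  = 32.840060247 // 06:27:27.840060247
		watchObservedRunning = 33.876230536 // 06:27:28.876230536
	)
	pullWindow := lastFinishedPulling - firstStartedPulling
	fmt.Printf("podStartE2EDuration ≈ %.3fs (log: 33.876230536s)\n", watchObservedRunning)
	fmt.Printf("podStartSLOduration ≈ %.3fs (log: 1.90330576s)\n", watchObservedRunning-pullWindow)
}
```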
Jan 14 06:27:29.289321 containerd[1636]: time="2026-01-14T06:27:29.288999444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549d4b77bd-jwpts,Uid:e59a89b9-4020-44eb-8f82-b847f03cedae,Namespace:calico-system,Attempt:0,}" Jan 14 06:27:29.290589 containerd[1636]: time="2026-01-14T06:27:29.288997887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545d979dcd-spmtb,Uid:e3bb8bbd-f33f-49cb-94d5-84718a161600,Namespace:calico-apiserver,Attempt:0,}" Jan 14 06:27:29.290589 containerd[1636]: time="2026-01-14T06:27:29.289062152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r5qc2,Uid:a2db9329-e716-4806-a33b-bf27ebb68125,Namespace:kube-system,Attempt:0,}" Jan 14 06:27:29.631625 kubelet[2961]: I0114 06:27:29.630397 2961 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f384ec84-0cc5-4b1d-8775-b09d258a1347-whisker-backend-key-pair\") pod \"f384ec84-0cc5-4b1d-8775-b09d258a1347\" (UID: \"f384ec84-0cc5-4b1d-8775-b09d258a1347\") " Jan 14 06:27:29.631625 kubelet[2961]: I0114 06:27:29.630495 2961 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdvh7\" (UniqueName: \"kubernetes.io/projected/f384ec84-0cc5-4b1d-8775-b09d258a1347-kube-api-access-zdvh7\") pod \"f384ec84-0cc5-4b1d-8775-b09d258a1347\" (UID: \"f384ec84-0cc5-4b1d-8775-b09d258a1347\") " Jan 14 06:27:29.631625 kubelet[2961]: I0114 06:27:29.630546 2961 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f384ec84-0cc5-4b1d-8775-b09d258a1347-whisker-ca-bundle\") pod \"f384ec84-0cc5-4b1d-8775-b09d258a1347\" (UID: \"f384ec84-0cc5-4b1d-8775-b09d258a1347\") " Jan 14 06:27:29.669604 kubelet[2961]: I0114 06:27:29.669179 2961 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f384ec84-0cc5-4b1d-8775-b09d258a1347-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f384ec84-0cc5-4b1d-8775-b09d258a1347" (UID: "f384ec84-0cc5-4b1d-8775-b09d258a1347"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 06:27:29.679272 kubelet[2961]: I0114 06:27:29.677015 2961 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f384ec84-0cc5-4b1d-8775-b09d258a1347-kube-api-access-zdvh7" (OuterVolumeSpecName: "kube-api-access-zdvh7") pod "f384ec84-0cc5-4b1d-8775-b09d258a1347" (UID: "f384ec84-0cc5-4b1d-8775-b09d258a1347"). InnerVolumeSpecName "kube-api-access-zdvh7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 06:27:29.677216 systemd[1]: var-lib-kubelet-pods-f384ec84\x2d0cc5\x2d4b1d\x2d8775\x2db09d258a1347-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzdvh7.mount: Deactivated successfully. Jan 14 06:27:29.695144 kubelet[2961]: I0114 06:27:29.695002 2961 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f384ec84-0cc5-4b1d-8775-b09d258a1347-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f384ec84-0cc5-4b1d-8775-b09d258a1347" (UID: "f384ec84-0cc5-4b1d-8775-b09d258a1347"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 06:27:29.696492 systemd[1]: var-lib-kubelet-pods-f384ec84\x2d0cc5\x2d4b1d\x2d8775\x2db09d258a1347-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 06:27:29.738400 kubelet[2961]: I0114 06:27:29.738087 2961 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f384ec84-0cc5-4b1d-8775-b09d258a1347-whisker-backend-key-pair\") on node \"srv-i1yja.gb1.brightbox.com\" DevicePath \"\"" Jan 14 06:27:29.739857 kubelet[2961]: I0114 06:27:29.739127 2961 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zdvh7\" (UniqueName: \"kubernetes.io/projected/f384ec84-0cc5-4b1d-8775-b09d258a1347-kube-api-access-zdvh7\") on node \"srv-i1yja.gb1.brightbox.com\" DevicePath \"\"" Jan 14 06:27:29.739857 kubelet[2961]: I0114 06:27:29.739181 2961 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f384ec84-0cc5-4b1d-8775-b09d258a1347-whisker-ca-bundle\") on node \"srv-i1yja.gb1.brightbox.com\" DevicePath \"\"" Jan 14 06:27:29.764548 systemd[1]: Removed slice kubepods-besteffort-podf384ec84_0cc5_4b1d_8775_b09d258a1347.slice - libcontainer container kubepods-besteffort-podf384ec84_0cc5_4b1d_8775_b09d258a1347.slice. Jan 14 06:27:30.334421 kubelet[2961]: I0114 06:27:30.334365 2961 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f384ec84-0cc5-4b1d-8775-b09d258a1347" path="/var/lib/kubelet/pods/f384ec84-0cc5-4b1d-8775-b09d258a1347/volumes" Jan 14 06:27:30.362236 systemd[1]: Created slice kubepods-besteffort-poddd1b4bf9_58c2_4dd7_a38f_7c8b7d976bbd.slice - libcontainer container kubepods-besteffort-poddd1b4bf9_58c2_4dd7_a38f_7c8b7d976bbd.slice. 
Jan 14 06:27:30.371431 containerd[1636]: time="2026-01-14T06:27:30.371301978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7g86f,Uid:a5a91150-6e37-4bc7-abb4-c895c0d189ea,Namespace:calico-system,Attempt:0,}" Jan 14 06:27:30.444546 kubelet[2961]: I0114 06:27:30.444389 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl67l\" (UniqueName: \"kubernetes.io/projected/dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd-kube-api-access-zl67l\") pod \"whisker-78797b75b4-rh22t\" (UID: \"dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd\") " pod="calico-system/whisker-78797b75b4-rh22t" Jan 14 06:27:30.446581 kubelet[2961]: I0114 06:27:30.445663 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd-whisker-backend-key-pair\") pod \"whisker-78797b75b4-rh22t\" (UID: \"dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd\") " pod="calico-system/whisker-78797b75b4-rh22t" Jan 14 06:27:30.446581 kubelet[2961]: I0114 06:27:30.445859 2961 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd-whisker-ca-bundle\") pod \"whisker-78797b75b4-rh22t\" (UID: \"dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd\") " pod="calico-system/whisker-78797b75b4-rh22t" Jan 14 06:27:30.492899 systemd-networkd[1557]: caliac7c9405179: Link UP Jan 14 06:27:30.501793 systemd-networkd[1557]: caliac7c9405179: Gained carrier Jan 14 06:27:30.627827 containerd[1636]: 2026-01-14 06:27:29.520 [INFO][4215] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 06:27:30.627827 containerd[1636]: 2026-01-14 06:27:29.569 [INFO][4215] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--r5qc2-eth0 coredns-674b8bbfcf- kube-system a2db9329-e716-4806-a33b-bf27ebb68125 879 0 2026-01-14 06:26:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-i1yja.gb1.brightbox.com coredns-674b8bbfcf-r5qc2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliac7c9405179 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" Namespace="kube-system" Pod="coredns-674b8bbfcf-r5qc2" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--r5qc2-" Jan 14 06:27:30.627827 containerd[1636]: 2026-01-14 06:27:29.570 [INFO][4215] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" Namespace="kube-system" Pod="coredns-674b8bbfcf-r5qc2" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--r5qc2-eth0" Jan 14 06:27:30.627827 containerd[1636]: 2026-01-14 06:27:29.938 [INFO][4254] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" HandleID="k8s-pod-network.896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" Workload="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--r5qc2-eth0" Jan 14 06:27:30.628229 containerd[1636]: 2026-01-14 06:27:29.941 [INFO][4254] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" HandleID="k8s-pod-network.896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" Workload="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--r5qc2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f0b0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-i1yja.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-r5qc2", "timestamp":"2026-01-14 06:27:29.938695463 +0000 UTC"}, Hostname:"srv-i1yja.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:27:30.628229 containerd[1636]: 2026-01-14 06:27:29.942 [INFO][4254] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:27:30.628229 containerd[1636]: 2026-01-14 06:27:29.942 [INFO][4254] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 06:27:30.628229 containerd[1636]: 2026-01-14 06:27:29.944 [INFO][4254] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i1yja.gb1.brightbox.com' Jan 14 06:27:30.628229 containerd[1636]: 2026-01-14 06:27:30.018 [INFO][4254] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.628229 containerd[1636]: 2026-01-14 06:27:30.123 [INFO][4254] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.628229 containerd[1636]: 2026-01-14 06:27:30.179 [INFO][4254] ipam/ipam.go 543: Ran out of existing affine blocks for host host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.628229 containerd[1636]: 2026-01-14 06:27:30.184 [INFO][4254] ipam/ipam.go 560: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.628229 containerd[1636]: 2026-01-14 06:27:30.190 [INFO][4254] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.12.192/26 Jan 14 06:27:30.634265 containerd[1636]: 2026-01-14 06:27:30.191 [INFO][4254] ipam/ipam.go 572: Found unclaimed block host="srv-i1yja.gb1.brightbox.com" subnet=192.168.12.192/26 Jan 14 06:27:30.634265 containerd[1636]: 2026-01-14 06:27:30.191 [INFO][4254] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="srv-i1yja.gb1.brightbox.com" subnet=192.168.12.192/26 Jan 14 06:27:30.634265 containerd[1636]: 2026-01-14 06:27:30.204 [INFO][4254] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="srv-i1yja.gb1.brightbox.com" subnet=192.168.12.192/26 Jan 14 06:27:30.634265 containerd[1636]: 2026-01-14 06:27:30.204 [INFO][4254] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.634265 containerd[1636]: 2026-01-14 06:27:30.211 [INFO][4254] ipam/ipam.go 163: The referenced block doesn't exist, trying to create it cidr=192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.634265 containerd[1636]: 2026-01-14 06:27:30.242 [INFO][4254] ipam/ipam.go 170: Wrote affinity as pending cidr=192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.634265 containerd[1636]: 2026-01-14 06:27:30.259 [INFO][4254] ipam/ipam.go 179: Attempting to claim the block cidr=192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.634265 containerd[1636]: 2026-01-14 06:27:30.259 [INFO][4254] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="srv-i1yja.gb1.brightbox.com" subnet=192.168.12.192/26 Jan 14 06:27:30.634265 containerd[1636]: 2026-01-14 06:27:30.306 [INFO][4254] ipam/ipam_block_reader_writer.go 267: Successfully created block Jan 14 06:27:30.634265 containerd[1636]: 2026-01-14 06:27:30.306 [INFO][4254] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="srv-i1yja.gb1.brightbox.com" subnet=192.168.12.192/26 Jan 14 06:27:30.634265 containerd[1636]: 2026-01-14 06:27:30.337 [INFO][4254] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="srv-i1yja.gb1.brightbox.com" subnet=192.168.12.192/26 Jan 14 06:27:30.634265 containerd[1636]: 2026-01-14 06:27:30.338 [INFO][4254] ipam/ipam.go 607: Block '192.168.12.192/26' has 64 free ips which is more than 1 ips required. 
host="srv-i1yja.gb1.brightbox.com" subnet=192.168.12.192/26 Jan 14 06:27:30.642841 containerd[1636]: 2026-01-14 06:27:30.339 [INFO][4254] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.642841 containerd[1636]: 2026-01-14 06:27:30.345 [INFO][4254] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b Jan 14 06:27:30.642841 containerd[1636]: 2026-01-14 06:27:30.361 [INFO][4254] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.642841 containerd[1636]: 2026-01-14 06:27:30.395 [INFO][4254] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.192/26] block=192.168.12.192/26 handle="k8s-pod-network.896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.642841 containerd[1636]: 2026-01-14 06:27:30.398 [INFO][4254] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.192/26] handle="k8s-pod-network.896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.642841 containerd[1636]: 2026-01-14 06:27:30.398 [INFO][4254] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 06:27:30.642841 containerd[1636]: 2026-01-14 06:27:30.399 [INFO][4254] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.192/26] IPv6=[] ContainerID="896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" HandleID="k8s-pod-network.896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" Workload="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--r5qc2-eth0" Jan 14 06:27:30.643166 containerd[1636]: 2026-01-14 06:27:30.431 [INFO][4215] cni-plugin/k8s.go 418: Populated endpoint ContainerID="896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" Namespace="kube-system" Pod="coredns-674b8bbfcf-r5qc2" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--r5qc2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--r5qc2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a2db9329-e716-4806-a33b-bf27ebb68125", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i1yja.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-r5qc2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac7c9405179", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:27:30.643166 containerd[1636]: 2026-01-14 06:27:30.437 [INFO][4215] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.192/32] ContainerID="896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" Namespace="kube-system" Pod="coredns-674b8bbfcf-r5qc2" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--r5qc2-eth0" Jan 14 06:27:30.643166 containerd[1636]: 2026-01-14 06:27:30.439 [INFO][4215] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac7c9405179 ContainerID="896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" Namespace="kube-system" Pod="coredns-674b8bbfcf-r5qc2" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--r5qc2-eth0" Jan 14 06:27:30.643166 containerd[1636]: 2026-01-14 06:27:30.511 [INFO][4215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" Namespace="kube-system" Pod="coredns-674b8bbfcf-r5qc2" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--r5qc2-eth0" Jan 14 06:27:30.643166 containerd[1636]: 2026-01-14 06:27:30.559 [INFO][4215] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" Namespace="kube-system" Pod="coredns-674b8bbfcf-r5qc2" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--r5qc2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--r5qc2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a2db9329-e716-4806-a33b-bf27ebb68125", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 26, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i1yja.gb1.brightbox.com", ContainerID:"896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b", Pod:"coredns-674b8bbfcf-r5qc2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.192/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac7c9405179", MAC:"76:78:86:f8:8e:83", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:27:30.643166 containerd[1636]: 2026-01-14 06:27:30.598 [INFO][4215] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" Namespace="kube-system" Pod="coredns-674b8bbfcf-r5qc2" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--r5qc2-eth0" Jan 14 06:27:30.748336 containerd[1636]: time="2026-01-14T06:27:30.748051479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78797b75b4-rh22t,Uid:dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd,Namespace:calico-system,Attempt:0,}" Jan 14 06:27:30.877629 systemd-networkd[1557]: cali24ee27dbaa0: Link UP Jan 14 06:27:30.880863 systemd-networkd[1557]: cali24ee27dbaa0: Gained carrier Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:29.430 [INFO][4209] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:29.551 [INFO][4209] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i1yja.gb1.brightbox.com-k8s-calico--kube--controllers--549d4b77bd--jwpts-eth0 calico-kube-controllers-549d4b77bd- calico-system e59a89b9-4020-44eb-8f82-b847f03cedae 870 0 2026-01-14 06:26:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:549d4b77bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-i1yja.gb1.brightbox.com calico-kube-controllers-549d4b77bd-jwpts eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali24ee27dbaa0 [] [] }} ContainerID="b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" Namespace="calico-system" Pod="calico-kube-controllers-549d4b77bd-jwpts" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--kube--controllers--549d4b77bd--jwpts-" Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:29.554 [INFO][4209] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" Namespace="calico-system" Pod="calico-kube-controllers-549d4b77bd-jwpts" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--kube--controllers--549d4b77bd--jwpts-eth0" Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:29.937 [INFO][4248] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" HandleID="k8s-pod-network.b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" Workload="srv--i1yja.gb1.brightbox.com-k8s-calico--kube--controllers--549d4b77bd--jwpts-eth0" Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:29.942 [INFO][4248] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" HandleID="k8s-pod-network.b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" Workload="srv--i1yja.gb1.brightbox.com-k8s-calico--kube--controllers--549d4b77bd--jwpts-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf6f0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-i1yja.gb1.brightbox.com", "pod":"calico-kube-controllers-549d4b77bd-jwpts", "timestamp":"2026-01-14 06:27:29.937862758 +0000 UTC"}, Hostname:"srv-i1yja.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:29.942 [INFO][4248] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:30.400 [INFO][4248] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:30.401 [INFO][4248] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i1yja.gb1.brightbox.com' Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:30.514 [INFO][4248] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:30.605 [INFO][4248] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:30.657 [INFO][4248] ipam/ipam.go 511: Trying affinity for 192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:30.675 [INFO][4248] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:30.700 [INFO][4248] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:30.710 [INFO][4248] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:30.721 [INFO][4248] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:30.737 [INFO][4248] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:30.755 [INFO][4248] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.194/26] block=192.168.12.192/26 handle="k8s-pod-network.b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:30.758 [INFO][4248] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.194/26] handle="k8s-pod-network.b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:30.758 [INFO][4248] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 06:27:30.979627 containerd[1636]: 2026-01-14 06:27:30.759 [INFO][4248] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.194/26] IPv6=[] ContainerID="b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" HandleID="k8s-pod-network.b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" Workload="srv--i1yja.gb1.brightbox.com-k8s-calico--kube--controllers--549d4b77bd--jwpts-eth0" Jan 14 06:27:30.981480 containerd[1636]: 2026-01-14 06:27:30.820 [INFO][4209] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" Namespace="calico-system" Pod="calico-kube-controllers-549d4b77bd-jwpts" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--kube--controllers--549d4b77bd--jwpts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i1yja.gb1.brightbox.com-k8s-calico--kube--controllers--549d4b77bd--jwpts-eth0", GenerateName:"calico-kube-controllers-549d4b77bd-", Namespace:"calico-system", SelfLink:"", UID:"e59a89b9-4020-44eb-8f82-b847f03cedae", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 26, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"549d4b77bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i1yja.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-549d4b77bd-jwpts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali24ee27dbaa0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:27:30.981480 containerd[1636]: 2026-01-14 06:27:30.823 [INFO][4209] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.194/32] ContainerID="b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" Namespace="calico-system" Pod="calico-kube-controllers-549d4b77bd-jwpts" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--kube--controllers--549d4b77bd--jwpts-eth0" Jan 14 06:27:30.981480 containerd[1636]: 2026-01-14 06:27:30.824 [INFO][4209] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24ee27dbaa0 ContainerID="b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" Namespace="calico-system" Pod="calico-kube-controllers-549d4b77bd-jwpts" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--kube--controllers--549d4b77bd--jwpts-eth0" Jan 14 06:27:30.981480 containerd[1636]: 2026-01-14 06:27:30.885 [INFO][4209] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" Namespace="calico-system" Pod="calico-kube-controllers-549d4b77bd-jwpts" 
WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--kube--controllers--549d4b77bd--jwpts-eth0" Jan 14 06:27:30.981480 containerd[1636]: 2026-01-14 06:27:30.903 [INFO][4209] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" Namespace="calico-system" Pod="calico-kube-controllers-549d4b77bd-jwpts" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--kube--controllers--549d4b77bd--jwpts-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i1yja.gb1.brightbox.com-k8s-calico--kube--controllers--549d4b77bd--jwpts-eth0", GenerateName:"calico-kube-controllers-549d4b77bd-", Namespace:"calico-system", SelfLink:"", UID:"e59a89b9-4020-44eb-8f82-b847f03cedae", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 26, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"549d4b77bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i1yja.gb1.brightbox.com", ContainerID:"b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f", Pod:"calico-kube-controllers-549d4b77bd-jwpts", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali24ee27dbaa0", MAC:"36:97:7d:44:1b:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:27:30.981480 containerd[1636]: 2026-01-14 06:27:30.960 [INFO][4209] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" Namespace="calico-system" Pod="calico-kube-controllers-549d4b77bd-jwpts" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--kube--controllers--549d4b77bd--jwpts-eth0" Jan 14 06:27:31.074112 systemd-networkd[1557]: cali7552d5b920c: Link UP Jan 14 06:27:31.081542 systemd-networkd[1557]: cali7552d5b920c: Gained carrier Jan 14 06:27:31.137358 containerd[1636]: time="2026-01-14T06:27:31.137110628Z" level=info msg="connecting to shim b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f" address="unix:///run/containerd/s/0e00efdd9c33e4311d5f2c529a4400bf23276feaf5fd51638167c782eb8d1ea2" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:27:31.144758 containerd[1636]: time="2026-01-14T06:27:31.143760183Z" level=info msg="connecting to shim 896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b" address="unix:///run/containerd/s/3eb8fced99c51563da906be40e524cdc86bef14cc206406dc621d6cbb21d4bb4" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:29.525 [INFO][4204] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:29.572 
[INFO][4204] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--spmtb-eth0 calico-apiserver-545d979dcd- calico-apiserver e3bb8bbd-f33f-49cb-94d5-84718a161600 875 0 2026-01-14 06:26:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:545d979dcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-i1yja.gb1.brightbox.com calico-apiserver-545d979dcd-spmtb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7552d5b920c [] [] }} ContainerID="888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" Namespace="calico-apiserver" Pod="calico-apiserver-545d979dcd-spmtb" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--spmtb-" Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:29.572 [INFO][4204] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" Namespace="calico-apiserver" Pod="calico-apiserver-545d979dcd-spmtb" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--spmtb-eth0" Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:29.944 [INFO][4252] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" HandleID="k8s-pod-network.888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" Workload="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--spmtb-eth0" Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:29.946 [INFO][4252] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" HandleID="k8s-pod-network.888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" Workload="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--spmtb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000354700), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-i1yja.gb1.brightbox.com", "pod":"calico-apiserver-545d979dcd-spmtb", "timestamp":"2026-01-14 06:27:29.944493618 +0000 UTC"}, Hostname:"srv-i1yja.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:29.946 [INFO][4252] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:30.759 [INFO][4252] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:30.759 [INFO][4252] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i1yja.gb1.brightbox.com' Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:30.833 [INFO][4252] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:30.950 [INFO][4252] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:30.973 [INFO][4252] ipam/ipam.go 511: Trying affinity for 192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:30.982 [INFO][4252] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:30.987 [INFO][4252] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:30.988 [INFO][4252] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:30.995 [INFO][4252] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560 Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:31.027 [INFO][4252] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:31.049 [INFO][4252] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.195/26] block=192.168.12.192/26 handle="k8s-pod-network.888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:31.049 [INFO][4252] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.195/26] handle="k8s-pod-network.888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:31.050 [INFO][4252] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 06:27:31.148841 containerd[1636]: 2026-01-14 06:27:31.050 [INFO][4252] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.195/26] IPv6=[] ContainerID="888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" HandleID="k8s-pod-network.888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" Workload="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--spmtb-eth0" Jan 14 06:27:31.150633 containerd[1636]: 2026-01-14 06:27:31.063 [INFO][4204] cni-plugin/k8s.go 418: Populated endpoint ContainerID="888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" Namespace="calico-apiserver" Pod="calico-apiserver-545d979dcd-spmtb" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--spmtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--spmtb-eth0", GenerateName:"calico-apiserver-545d979dcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3bb8bbd-f33f-49cb-94d5-84718a161600", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 26, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"545d979dcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i1yja.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-545d979dcd-spmtb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7552d5b920c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:27:31.150633 containerd[1636]: 2026-01-14 06:27:31.063 [INFO][4204] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.195/32] ContainerID="888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" Namespace="calico-apiserver" Pod="calico-apiserver-545d979dcd-spmtb" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--spmtb-eth0" Jan 14 06:27:31.150633 containerd[1636]: 2026-01-14 06:27:31.063 [INFO][4204] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7552d5b920c ContainerID="888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" Namespace="calico-apiserver" Pod="calico-apiserver-545d979dcd-spmtb" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--spmtb-eth0" Jan 14 06:27:31.150633 containerd[1636]: 2026-01-14 06:27:31.082 [INFO][4204] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" Namespace="calico-apiserver" Pod="calico-apiserver-545d979dcd-spmtb" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--spmtb-eth0" Jan 14 06:27:31.150633 containerd[1636]: 2026-01-14 
06:27:31.084 [INFO][4204] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" Namespace="calico-apiserver" Pod="calico-apiserver-545d979dcd-spmtb" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--spmtb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--spmtb-eth0", GenerateName:"calico-apiserver-545d979dcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3bb8bbd-f33f-49cb-94d5-84718a161600", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 26, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"545d979dcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i1yja.gb1.brightbox.com", ContainerID:"888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560", Pod:"calico-apiserver-545d979dcd-spmtb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7552d5b920c", MAC:"32:06:3b:29:27:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:27:31.150633 containerd[1636]: 2026-01-14 06:27:31.133 [INFO][4204] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" Namespace="calico-apiserver" Pod="calico-apiserver-545d979dcd-spmtb" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--spmtb-eth0" Jan 14 06:27:31.297900 systemd[1]: Started cri-containerd-896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b.scope - libcontainer container 896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b. Jan 14 06:27:31.329954 systemd[1]: Started cri-containerd-b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f.scope - libcontainer container b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f. 
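Note: the IPAM exchanges above show the node claiming the affine block 192.168.12.192/26 and then handing out per-pod /32s from it (.192 for coredns-674b8bbfcf-r5qc2, .194 for calico-kube-controllers-549d4b77bd-jwpts, .195 for calico-apiserver-545d979dcd-spmtb; the csi-node-driver allocation in the records below continues from the same block). A small Python check of the "64 free ips" figure and of those assignments belonging to the block; this is illustrative only and not part of the CNI plugin.

import ipaddress

# The affine block Calico claimed for this node in the records above.
block = ipaddress.ip_network("192.168.12.192/26")
print(block.num_addresses)   # 64 -> the "64 free ips" reported by ipam
print(block[0], block[-1])   # 192.168.12.192 192.168.12.255

# Per-pod /32 assignments taken from the same block.
assigned = ["192.168.12.192", "192.168.12.194", "192.168.12.195"]
print(all(ipaddress.ip_address(a) in block for a in assigned))   # True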
Jan 14 06:27:31.361718 systemd-networkd[1557]: cali13af0846417: Link UP Jan 14 06:27:31.363707 systemd-networkd[1557]: cali13af0846417: Gained carrier Jan 14 06:27:31.368000 audit: BPF prog-id=179 op=LOAD Jan 14 06:27:31.371000 audit: BPF prog-id=180 op=LOAD Jan 14 06:27:31.371000 audit[4429]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001d2238 a2=98 a3=0 items=0 ppid=4400 pid=4429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.371000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839366333396133613035613663613661653830343739346363313230 Jan 14 06:27:31.374000 audit: BPF prog-id=180 op=UNLOAD Jan 14 06:27:31.374000 audit[4429]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4400 pid=4429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839366333396133613035613663613661653830343739346363313230 Jan 14 06:27:31.376000 audit: BPF prog-id=181 op=LOAD Jan 14 06:27:31.376000 audit[4429]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001d2488 a2=98 a3=0 items=0 ppid=4400 pid=4429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839366333396133613035613663613661653830343739346363313230 Jan 14 06:27:31.376000 audit: BPF prog-id=182 op=LOAD Jan 14 06:27:31.376000 audit[4429]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001d2218 a2=98 a3=0 items=0 ppid=4400 pid=4429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.376000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839366333396133613035613663613661653830343739346363313230 Jan 14 06:27:31.378000 audit: BPF prog-id=182 op=UNLOAD Jan 14 06:27:31.378000 audit[4429]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4400 pid=4429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.378000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839366333396133613035613663613661653830343739346363313230 Jan 14 06:27:31.379000 audit: BPF prog-id=181 op=UNLOAD Jan 14 06:27:31.379000 audit[4429]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4400 pid=4429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.379000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839366333396133613035613663613661653830343739346363313230 Jan 14 06:27:31.381000 audit: BPF prog-id=183 op=LOAD Jan 14 06:27:31.381000 audit[4429]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001d26e8 a2=98 a3=0 items=0 ppid=4400 pid=4429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.381000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839366333396133613035613663613661653830343739346363313230 Jan 14 06:27:31.390720 containerd[1636]: time="2026-01-14T06:27:31.390082476Z" level=info msg="connecting to shim 888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560" address="unix:///run/containerd/s/4b5d86dd2ffffc59244327f62311405e9658728e59f93e3e45abf2f9b3ec98ea" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:30.588 [INFO][4304] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:30.744 [INFO][4304] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i1yja.gb1.brightbox.com-k8s-csi--node--driver--7g86f-eth0 csi-node-driver- calico-system a5a91150-6e37-4bc7-abb4-c895c0d189ea 767 0 2026-01-14 06:26:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-i1yja.gb1.brightbox.com csi-node-driver-7g86f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali13af0846417 [] [] }} ContainerID="310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" Namespace="calico-system" Pod="csi-node-driver-7g86f" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-csi--node--driver--7g86f-" Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:30.744 [INFO][4304] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" Namespace="calico-system" Pod="csi-node-driver-7g86f" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-csi--node--driver--7g86f-eth0" Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:31.026 [INFO][4353] ipam/ipam_plugin.go 
227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" HandleID="k8s-pod-network.310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" Workload="srv--i1yja.gb1.brightbox.com-k8s-csi--node--driver--7g86f-eth0" Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:31.026 [INFO][4353] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" HandleID="k8s-pod-network.310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" Workload="srv--i1yja.gb1.brightbox.com-k8s-csi--node--driver--7g86f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000222770), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-i1yja.gb1.brightbox.com", "pod":"csi-node-driver-7g86f", "timestamp":"2026-01-14 06:27:31.026009875 +0000 UTC"}, Hostname:"srv-i1yja.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:31.031 [INFO][4353] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:31.050 [INFO][4353] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:31.051 [INFO][4353] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i1yja.gb1.brightbox.com' Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:31.140 [INFO][4353] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:31.208 [INFO][4353] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:31.228 [INFO][4353] ipam/ipam.go 511: Trying affinity for 192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:31.236 [INFO][4353] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:31.246 [INFO][4353] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:31.246 [INFO][4353] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:31.258 [INFO][4353] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592 Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:31.272 [INFO][4353] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:31.296 [INFO][4353] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.196/26] block=192.168.12.192/26 
handle="k8s-pod-network.310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:31.296 [INFO][4353] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.196/26] handle="k8s-pod-network.310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:31.296 [INFO][4353] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 06:27:31.416254 containerd[1636]: 2026-01-14 06:27:31.296 [INFO][4353] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.196/26] IPv6=[] ContainerID="310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" HandleID="k8s-pod-network.310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" Workload="srv--i1yja.gb1.brightbox.com-k8s-csi--node--driver--7g86f-eth0" Jan 14 06:27:31.417460 containerd[1636]: 2026-01-14 06:27:31.308 [INFO][4304] cni-plugin/k8s.go 418: Populated endpoint ContainerID="310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" Namespace="calico-system" Pod="csi-node-driver-7g86f" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-csi--node--driver--7g86f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i1yja.gb1.brightbox.com-k8s-csi--node--driver--7g86f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5a91150-6e37-4bc7-abb4-c895c0d189ea", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 26, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i1yja.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-7g86f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali13af0846417", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:27:31.417460 containerd[1636]: 2026-01-14 06:27:31.313 [INFO][4304] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.196/32] ContainerID="310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" Namespace="calico-system" Pod="csi-node-driver-7g86f" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-csi--node--driver--7g86f-eth0" Jan 14 06:27:31.417460 containerd[1636]: 2026-01-14 06:27:31.313 [INFO][4304] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13af0846417 ContainerID="310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" Namespace="calico-system" Pod="csi-node-driver-7g86f" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-csi--node--driver--7g86f-eth0" Jan 14 
06:27:31.417460 containerd[1636]: 2026-01-14 06:27:31.365 [INFO][4304] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" Namespace="calico-system" Pod="csi-node-driver-7g86f" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-csi--node--driver--7g86f-eth0" Jan 14 06:27:31.417460 containerd[1636]: 2026-01-14 06:27:31.369 [INFO][4304] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" Namespace="calico-system" Pod="csi-node-driver-7g86f" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-csi--node--driver--7g86f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i1yja.gb1.brightbox.com-k8s-csi--node--driver--7g86f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a5a91150-6e37-4bc7-abb4-c895c0d189ea", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 26, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i1yja.gb1.brightbox.com", ContainerID:"310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592", Pod:"csi-node-driver-7g86f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali13af0846417", MAC:"b2:1a:b1:97:06:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:27:31.417460 containerd[1636]: 2026-01-14 06:27:31.409 [INFO][4304] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" Namespace="calico-system" Pod="csi-node-driver-7g86f" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-csi--node--driver--7g86f-eth0" Jan 14 06:27:31.437000 audit: BPF prog-id=184 op=LOAD Jan 14 06:27:31.439000 audit: BPF prog-id=185 op=LOAD Jan 14 06:27:31.439000 audit[4441]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001f4238 a2=98 a3=0 items=0 ppid=4402 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.439000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236313162306537366539393763613262316437636430363163356333 Jan 14 06:27:31.440000 audit: BPF prog-id=185 op=UNLOAD Jan 14 06:27:31.440000 audit[4441]: SYSCALL arch=c000003e syscall=3 success=yes 
exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4402 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.440000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236313162306537366539393763613262316437636430363163356333 Jan 14 06:27:31.442000 audit: BPF prog-id=186 op=LOAD Jan 14 06:27:31.442000 audit[4441]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001f4488 a2=98 a3=0 items=0 ppid=4402 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236313162306537366539393763613262316437636430363163356333 Jan 14 06:27:31.443000 audit: BPF prog-id=187 op=LOAD Jan 14 06:27:31.443000 audit[4441]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001f4218 a2=98 a3=0 items=0 ppid=4402 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236313162306537366539393763613262316437636430363163356333 Jan 14 06:27:31.443000 audit: BPF prog-id=187 op=UNLOAD Jan 14 06:27:31.443000 audit[4441]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4402 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236313162306537366539393763613262316437636430363163356333 Jan 14 06:27:31.443000 audit: BPF prog-id=186 op=UNLOAD Jan 14 06:27:31.443000 audit[4441]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4402 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236313162306537366539393763613262316437636430363163356333 Jan 14 06:27:31.443000 audit: BPF prog-id=188 op=LOAD Jan 14 06:27:31.443000 audit[4441]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001f46e8 a2=98 a3=0 items=0 ppid=4402 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236313162306537366539393763613262316437636430363163356333 Jan 14 06:27:31.491108 systemd[1]: Started cri-containerd-888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560.scope - libcontainer container 888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560. Jan 14 06:27:31.580961 containerd[1636]: time="2026-01-14T06:27:31.580828951Z" level=info msg="connecting to shim 310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592" address="unix:///run/containerd/s/d44a439cce185f9bf107072cfa55089dae6cb0c799500752dc1f8fa2acb00535" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:27:31.595000 audit: BPF prog-id=189 op=LOAD Jan 14 06:27:31.598000 audit: BPF prog-id=190 op=LOAD Jan 14 06:27:31.598000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0238 a2=98 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383139336633653665363833656333333164383865363534356631 Jan 14 06:27:31.598000 audit: BPF prog-id=190 op=UNLOAD Jan 14 06:27:31.598000 audit[4497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383139336633653665363833656333333164383865363534356631 Jan 14 06:27:31.599000 audit: BPF prog-id=191 op=LOAD Jan 14 06:27:31.599000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a0488 a2=98 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383139336633653665363833656333333164383865363534356631 Jan 14 06:27:31.599000 audit: BPF prog-id=192 op=LOAD Jan 14 06:27:31.599000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a0218 a2=98 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.599000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383139336633653665363833656333333164383865363534356631 Jan 14 06:27:31.599000 audit: BPF prog-id=192 op=UNLOAD Jan 14 06:27:31.599000 audit[4497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383139336633653665363833656333333164383865363534356631 Jan 14 06:27:31.599000 audit: BPF prog-id=191 op=UNLOAD Jan 14 06:27:31.599000 audit[4497]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383139336633653665363833656333333164383865363534356631 Jan 14 06:27:31.599000 audit: BPF prog-id=193 op=LOAD Jan 14 06:27:31.599000 audit[4497]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a06e8 a2=98 a3=0 items=0 ppid=4486 pid=4497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.599000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383139336633653665363833656333333164383865363534356631 Jan 14 06:27:31.621321 containerd[1636]: time="2026-01-14T06:27:31.620962066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r5qc2,Uid:a2db9329-e716-4806-a33b-bf27ebb68125,Namespace:kube-system,Attempt:0,} returns sandbox id \"896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b\"" Jan 14 06:27:31.644554 containerd[1636]: time="2026-01-14T06:27:31.643089130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-549d4b77bd-jwpts,Uid:e59a89b9-4020-44eb-8f82-b847f03cedae,Namespace:calico-system,Attempt:0,} returns sandbox id \"b611b0e76e997ca2b1d7cd061c5c37150f807209f5c408177c66ca1714d0db4f\"" Jan 14 06:27:31.649547 containerd[1636]: time="2026-01-14T06:27:31.649485593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 06:27:31.662720 containerd[1636]: time="2026-01-14T06:27:31.662657631Z" level=info msg="CreateContainer within sandbox \"896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 06:27:31.689970 systemd-networkd[1557]: cali391cbfdce84: Link UP Jan 14 06:27:31.693094 systemd-networkd[1557]: cali391cbfdce84: Gained carrier Jan 14 
06:27:31.701031 systemd[1]: Started cri-containerd-310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592.scope - libcontainer container 310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592. Jan 14 06:27:31.747069 containerd[1636]: time="2026-01-14T06:27:31.747012034Z" level=info msg="Container be0b8502758fbc62f5ffd8f5c6f203c49af1ca76c1fd7f093040a3b6c6e65a7b: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:27:31.752000 audit: BPF prog-id=194 op=LOAD Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.025 [INFO][4354] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.130 [INFO][4354] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i1yja.gb1.brightbox.com-k8s-whisker--78797b75b4--rh22t-eth0 whisker-78797b75b4- calico-system dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd 970 0 2026-01-14 06:27:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78797b75b4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-i1yja.gb1.brightbox.com whisker-78797b75b4-rh22t eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali391cbfdce84 [] [] }} ContainerID="3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" Namespace="calico-system" Pod="whisker-78797b75b4-rh22t" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-whisker--78797b75b4--rh22t-" Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.130 [INFO][4354] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" Namespace="calico-system" Pod="whisker-78797b75b4-rh22t" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-whisker--78797b75b4--rh22t-eth0" Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.366 [INFO][4422] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" HandleID="k8s-pod-network.3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" Workload="srv--i1yja.gb1.brightbox.com-k8s-whisker--78797b75b4--rh22t-eth0" Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.366 [INFO][4422] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" HandleID="k8s-pod-network.3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" Workload="srv--i1yja.gb1.brightbox.com-k8s-whisker--78797b75b4--rh22t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a090), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-i1yja.gb1.brightbox.com", "pod":"whisker-78797b75b4-rh22t", "timestamp":"2026-01-14 06:27:31.366268487 +0000 UTC"}, Hostname:"srv-i1yja.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.366 [INFO][4422] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.366 [INFO][4422] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
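Most of the audit records above carry the audited command line in the hex-encoded proctitle= field: the kernel stores argv with NUL separators between arguments, and the audit subsystem prints the whole buffer as hex. Below is a minimal decoding sketch (not part of the boot image; the built-in example value is a truncated copy of the runc proctitle recorded above, and any value can be passed as the first argument):

```go
// decode_proctitle.go - turn an audit "proctitle=..." hex value back into a
// readable command line by splitting on the NUL bytes that separate argv entries.
package main

import (
	"encoding/hex"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Truncated copy of a runc proctitle from the audit records above.
	raw := "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	if len(os.Args) > 1 {
		raw = os.Args[1]
	}
	b, err := hex.DecodeString(raw)
	if err != nil {
		fmt.Fprintln(os.Stderr, "not a hex proctitle:", err)
		os.Exit(1)
	}
	// Join the NUL-separated argv entries with spaces for display.
	fmt.Println(strings.Join(strings.Split(string(b), "\x00"), " "))
	// Prints: runc --root /run/containerd/runc/k8s.io
}
```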
Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.366 [INFO][4422] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i1yja.gb1.brightbox.com' Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.428 [INFO][4422] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.498 [INFO][4422] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.522 [INFO][4422] ipam/ipam.go 511: Trying affinity for 192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.531 [INFO][4422] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.541 [INFO][4422] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.546 [INFO][4422] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.564 [INFO][4422] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.597 [INFO][4422] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.618 [INFO][4422] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.197/26] block=192.168.12.192/26 handle="k8s-pod-network.3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.619 [INFO][4422] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.197/26] handle="k8s-pod-network.3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.619 [INFO][4422] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
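The IPAM records above show Calico confirming affinity for the block 192.168.12.192/26 on this node and then claiming 192.168.12.197 out of it for the whisker pod. A quick sketch of the same containment arithmetic (a standalone illustration, not Calico's own code): a /26 spans 64 addresses, here .192 through .255, so .197 sits inside the block.

```go
// cidr_check.go - mirror the IPAM decision recorded above: does 192.168.12.197
// fall inside the affine block 192.168.12.192/26, and how many addresses does
// that block hold?
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.12.192/26")
	assigned := netip.MustParseAddr("192.168.12.197")

	fmt.Println("block contains assigned address:", block.Contains(assigned)) // true
	fmt.Println("addresses in a /26:", 1<<(32-block.Bits()))                  // 64
}
```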
Jan 14 06:27:31.753255 containerd[1636]: 2026-01-14 06:27:31.620 [INFO][4422] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.197/26] IPv6=[] ContainerID="3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" HandleID="k8s-pod-network.3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" Workload="srv--i1yja.gb1.brightbox.com-k8s-whisker--78797b75b4--rh22t-eth0" Jan 14 06:27:31.755661 containerd[1636]: 2026-01-14 06:27:31.639 [INFO][4354] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" Namespace="calico-system" Pod="whisker-78797b75b4-rh22t" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-whisker--78797b75b4--rh22t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i1yja.gb1.brightbox.com-k8s-whisker--78797b75b4--rh22t-eth0", GenerateName:"whisker-78797b75b4-", Namespace:"calico-system", SelfLink:"", UID:"dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78797b75b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i1yja.gb1.brightbox.com", ContainerID:"", Pod:"whisker-78797b75b4-rh22t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.12.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali391cbfdce84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:27:31.755661 containerd[1636]: 2026-01-14 06:27:31.640 [INFO][4354] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.197/32] ContainerID="3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" Namespace="calico-system" Pod="whisker-78797b75b4-rh22t" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-whisker--78797b75b4--rh22t-eth0" Jan 14 06:27:31.755661 containerd[1636]: 2026-01-14 06:27:31.640 [INFO][4354] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali391cbfdce84 ContainerID="3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" Namespace="calico-system" Pod="whisker-78797b75b4-rh22t" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-whisker--78797b75b4--rh22t-eth0" Jan 14 06:27:31.755661 containerd[1636]: 2026-01-14 06:27:31.695 [INFO][4354] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" Namespace="calico-system" Pod="whisker-78797b75b4-rh22t" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-whisker--78797b75b4--rh22t-eth0" Jan 14 06:27:31.755661 containerd[1636]: 2026-01-14 06:27:31.698 [INFO][4354] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" 
Namespace="calico-system" Pod="whisker-78797b75b4-rh22t" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-whisker--78797b75b4--rh22t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i1yja.gb1.brightbox.com-k8s-whisker--78797b75b4--rh22t-eth0", GenerateName:"whisker-78797b75b4-", Namespace:"calico-system", SelfLink:"", UID:"dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 27, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78797b75b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i1yja.gb1.brightbox.com", ContainerID:"3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f", Pod:"whisker-78797b75b4-rh22t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.12.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali391cbfdce84", MAC:"52:99:a8:3f:c3:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:27:31.755661 containerd[1636]: 2026-01-14 06:27:31.731 [INFO][4354] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" Namespace="calico-system" Pod="whisker-78797b75b4-rh22t" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-whisker--78797b75b4--rh22t-eth0" Jan 14 06:27:31.756000 audit: BPF prog-id=195 op=LOAD Jan 14 06:27:31.756000 audit[4554]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4535 pid=4554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331303736396365343935613964356163373835326630336565376139 Jan 14 06:27:31.756000 audit: BPF prog-id=195 op=UNLOAD Jan 14 06:27:31.756000 audit[4554]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4535 pid=4554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.756000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331303736396365343935613964356163373835326630336565376139 Jan 14 06:27:31.757000 audit: BPF prog-id=196 op=LOAD Jan 14 06:27:31.757000 audit[4554]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4535 
pid=4554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331303736396365343935613964356163373835326630336565376139 Jan 14 06:27:31.758000 audit: BPF prog-id=197 op=LOAD Jan 14 06:27:31.758000 audit[4554]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4535 pid=4554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331303736396365343935613964356163373835326630336565376139 Jan 14 06:27:31.758000 audit: BPF prog-id=197 op=UNLOAD Jan 14 06:27:31.758000 audit[4554]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4535 pid=4554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331303736396365343935613964356163373835326630336565376139 Jan 14 06:27:31.759000 audit: BPF prog-id=196 op=UNLOAD Jan 14 06:27:31.759000 audit[4554]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4535 pid=4554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331303736396365343935613964356163373835326630336565376139 Jan 14 06:27:31.759000 audit: BPF prog-id=198 op=LOAD Jan 14 06:27:31.759000 audit[4554]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4535 pid=4554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.759000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3331303736396365343935613964356163373835326630336565376139 Jan 14 06:27:31.774948 containerd[1636]: time="2026-01-14T06:27:31.773533283Z" level=info msg="CreateContainer within sandbox \"896c39a3a05a6ca6ae804794cc1200a5a72d12907f99f27ee55b997ce9d4a22b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be0b8502758fbc62f5ffd8f5c6f203c49af1ca76c1fd7f093040a3b6c6e65a7b\"" Jan 14 
06:27:31.775339 containerd[1636]: time="2026-01-14T06:27:31.774677203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545d979dcd-spmtb,Uid:e3bb8bbd-f33f-49cb-94d5-84718a161600,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"888193f3e6e683ec331d88e6545f15bd2897c11af10ffb18f7796afbec5d0560\"" Jan 14 06:27:31.779651 containerd[1636]: time="2026-01-14T06:27:31.778420122Z" level=info msg="StartContainer for \"be0b8502758fbc62f5ffd8f5c6f203c49af1ca76c1fd7f093040a3b6c6e65a7b\"" Jan 14 06:27:31.788838 containerd[1636]: time="2026-01-14T06:27:31.788740190Z" level=info msg="connecting to shim be0b8502758fbc62f5ffd8f5c6f203c49af1ca76c1fd7f093040a3b6c6e65a7b" address="unix:///run/containerd/s/3eb8fced99c51563da906be40e524cdc86bef14cc206406dc621d6cbb21d4bb4" protocol=ttrpc version=3 Jan 14 06:27:31.804805 systemd-networkd[1557]: caliac7c9405179: Gained IPv6LL Jan 14 06:27:31.854885 containerd[1636]: time="2026-01-14T06:27:31.854825221Z" level=info msg="connecting to shim 3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f" address="unix:///run/containerd/s/bd562fbbe7a0f5470c42263dd353ef2ea3ee7d554a5bdaa77ca5961e6d520549" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:27:31.855901 systemd[1]: Started cri-containerd-be0b8502758fbc62f5ffd8f5c6f203c49af1ca76c1fd7f093040a3b6c6e65a7b.scope - libcontainer container be0b8502758fbc62f5ffd8f5c6f203c49af1ca76c1fd7f093040a3b6c6e65a7b. Jan 14 06:27:31.866027 containerd[1636]: time="2026-01-14T06:27:31.865935965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7g86f,Uid:a5a91150-6e37-4bc7-abb4-c895c0d189ea,Namespace:calico-system,Attempt:0,} returns sandbox id \"310769ce495a9d5ac7852f03ee7a9e38a8e9d8a39a929552f58ae217907cf592\"" Jan 14 06:27:31.926000 audit: BPF prog-id=199 op=LOAD Jan 14 06:27:31.928000 audit: BPF prog-id=200 op=LOAD Jan 14 06:27:31.928000 audit[4589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a238 a2=98 a3=0 items=0 ppid=4400 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265306238353032373538666263363266356666643866356336663230 Jan 14 06:27:31.930000 audit: BPF prog-id=200 op=UNLOAD Jan 14 06:27:31.930000 audit[4589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4400 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265306238353032373538666263363266356666643866356336663230 Jan 14 06:27:31.937069 systemd[1]: Started cri-containerd-3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f.scope - libcontainer container 3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f. 
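The SYSCALL audit records interleaved with these container starts are all arch=c000003e, i.e. AUDIT_ARCH_X86_64. The syscall numbers that actually occur in this log are 321 (bpf), 3 (close) and, further down in the iptables-restore records, 46 (sendmsg). A tiny lookup limited to exactly those numbers, as an illustrative helper rather than anything the audit subsystem provides:

```go
// syscall_names.go - name the x86_64 "syscall=" values that appear in the audit
// records of this log (the map is intentionally limited to those three numbers).
package main

import "fmt"

var x8664Syscalls = map[int]string{
	3:   "close",
	46:  "sendmsg",
	321: "bpf",
}

func main() {
	for _, nr := range []int{321, 3, 46} {
		fmt.Printf("syscall=%d -> %s(2)\n", nr, x8664Syscalls[nr])
	}
}
```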
Jan 14 06:27:31.935000 audit: BPF prog-id=201 op=LOAD Jan 14 06:27:31.935000 audit[4589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a488 a2=98 a3=0 items=0 ppid=4400 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265306238353032373538666263363266356666643866356336663230 Jan 14 06:27:31.937000 audit: BPF prog-id=202 op=LOAD Jan 14 06:27:31.937000 audit[4589]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c00017a218 a2=98 a3=0 items=0 ppid=4400 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265306238353032373538666263363266356666643866356336663230 Jan 14 06:27:31.937000 audit: BPF prog-id=202 op=UNLOAD Jan 14 06:27:31.937000 audit[4589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4400 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265306238353032373538666263363266356666643866356336663230 Jan 14 06:27:31.937000 audit: BPF prog-id=201 op=UNLOAD Jan 14 06:27:31.937000 audit[4589]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4400 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265306238353032373538666263363266356666643866356336663230 Jan 14 06:27:31.937000 audit: BPF prog-id=203 op=LOAD Jan 14 06:27:31.937000 audit[4589]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c00017a6e8 a2=98 a3=0 items=0 ppid=4400 pid=4589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:31.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265306238353032373538666263363266356666643866356336663230 Jan 14 06:27:31.999000 audit: BPF prog-id=204 op=LOAD Jan 14 06:27:32.001852 containerd[1636]: time="2026-01-14T06:27:32.000862555Z" level=info 
msg="StartContainer for \"be0b8502758fbc62f5ffd8f5c6f203c49af1ca76c1fd7f093040a3b6c6e65a7b\" returns successfully" Jan 14 06:27:32.002000 audit: BPF prog-id=205 op=LOAD Jan 14 06:27:32.002000 audit[4657]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106238 a2=98 a3=0 items=0 ppid=4625 pid=4657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365303763613233626338356162633335343563626664326635326637 Jan 14 06:27:32.002000 audit: BPF prog-id=205 op=UNLOAD Jan 14 06:27:32.002000 audit[4657]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4625 pid=4657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365303763613233626338356162633335343563626664326635326637 Jan 14 06:27:32.003000 audit: BPF prog-id=206 op=LOAD Jan 14 06:27:32.003000 audit[4657]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000106488 a2=98 a3=0 items=0 ppid=4625 pid=4657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365303763613233626338356162633335343563626664326635326637 Jan 14 06:27:32.003000 audit: BPF prog-id=207 op=LOAD Jan 14 06:27:32.003000 audit[4657]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000106218 a2=98 a3=0 items=0 ppid=4625 pid=4657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365303763613233626338356162633335343563626664326635326637 Jan 14 06:27:32.003000 audit: BPF prog-id=207 op=UNLOAD Jan 14 06:27:32.003000 audit[4657]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4625 pid=4657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365303763613233626338356162633335343563626664326635326637 Jan 14 06:27:32.003000 audit: BPF 
prog-id=206 op=UNLOAD Jan 14 06:27:32.003000 audit[4657]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4625 pid=4657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365303763613233626338356162633335343563626664326635326637 Jan 14 06:27:32.003000 audit: BPF prog-id=208 op=LOAD Jan 14 06:27:32.003000 audit[4657]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001066e8 a2=98 a3=0 items=0 ppid=4625 pid=4657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365303763613233626338356162633335343563626664326635326637 Jan 14 06:27:32.053396 containerd[1636]: time="2026-01-14T06:27:32.053305573Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:27:32.057160 containerd[1636]: time="2026-01-14T06:27:32.056415633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 06:27:32.057780 containerd[1636]: time="2026-01-14T06:27:32.056678079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 06:27:32.066675 kubelet[2961]: E0114 06:27:32.065734 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:27:32.089741 kubelet[2961]: E0114 06:27:32.067370 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:27:32.089954 containerd[1636]: time="2026-01-14T06:27:32.073616197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:27:32.090070 kubelet[2961]: E0114 06:27:32.077020 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t62h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-549d4b77bd-jwpts_calico-system(e59a89b9-4020-44eb-8f82-b847f03cedae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 06:27:32.090070 kubelet[2961]: E0114 06:27:32.078778 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d4b77bd-jwpts" podUID="e59a89b9-4020-44eb-8f82-b847f03cedae" Jan 14 06:27:32.105113 containerd[1636]: time="2026-01-14T06:27:32.104709634Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-78797b75b4-rh22t,Uid:dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e07ca23bc85abc3545cbfd2f52f7ee57ef6e8aa490ccdc42f7624ac01248b9f\"" Jan 14 06:27:32.443897 systemd-networkd[1557]: cali13af0846417: Gained IPv6LL Jan 14 06:27:32.518903 containerd[1636]: time="2026-01-14T06:27:32.518826970Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:27:32.520825 containerd[1636]: time="2026-01-14T06:27:32.520746377Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:27:32.523527 kubelet[2961]: E0114 06:27:32.523422 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:27:32.524749 kubelet[2961]: E0114 06:27:32.523671 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:27:32.525357 kubelet[2961]: E0114 06:27:32.525207 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bkclb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545d979dcd-spmtb_calico-apiserver(e3bb8bbd-f33f-49cb-94d5-84718a161600): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:27:32.527107 kubelet[2961]: E0114 06:27:32.526687 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-spmtb" podUID="e3bb8bbd-f33f-49cb-94d5-84718a161600" Jan 14 06:27:32.534715 containerd[1636]: time="2026-01-14T06:27:32.520784303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:27:32.534715 containerd[1636]: time="2026-01-14T06:27:32.526868002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 06:27:32.573154 systemd-networkd[1557]: cali24ee27dbaa0: Gained IPv6LL Jan 14 06:27:32.635843 systemd-networkd[1557]: cali7552d5b920c: Gained IPv6LL Jan 14 06:27:32.772680 kubelet[2961]: E0114 06:27:32.772244 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-spmtb" podUID="e3bb8bbd-f33f-49cb-94d5-84718a161600" Jan 14 06:27:32.772680 kubelet[2961]: E0114 06:27:32.772434 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d4b77bd-jwpts" podUID="e59a89b9-4020-44eb-8f82-b847f03cedae" Jan 14 06:27:32.856095 kubelet[2961]: I0114 06:27:32.855929 2961 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-r5qc2" 
podStartSLOduration=58.849759132 podStartE2EDuration="58.849759132s" podCreationTimestamp="2026-01-14 06:26:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 06:27:32.80989096 +0000 UTC m=+62.795650516" watchObservedRunningTime="2026-01-14 06:27:32.849759132 +0000 UTC m=+62.835518671" Jan 14 06:27:32.880000 audit[4790]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=4790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:32.880000 audit[4790]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fffc8d11c20 a2=0 a3=7fffc8d11c0c items=0 ppid=3077 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.880000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:32.886000 audit[4790]: NETFILTER_CFG table=nat:122 family=2 entries=14 op=nft_register_rule pid=4790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:32.886000 audit[4790]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffc8d11c20 a2=0 a3=0 items=0 ppid=3077 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.886000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:32.888967 containerd[1636]: time="2026-01-14T06:27:32.888870061Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:27:32.893537 containerd[1636]: time="2026-01-14T06:27:32.893489210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 06:27:32.893745 containerd[1636]: time="2026-01-14T06:27:32.893702371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 06:27:32.898474 kubelet[2961]: E0114 06:27:32.897064 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:27:32.898909 kubelet[2961]: E0114 06:27:32.898874 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:27:32.901752 kubelet[2961]: E0114 06:27:32.899435 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmpb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7g86f_calico-system(a5a91150-6e37-4bc7-abb4-c895c0d189ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 06:27:32.901941 containerd[1636]: time="2026-01-14T06:27:32.901383381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 06:27:32.954000 audit[4795]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4795 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:32.954000 audit[4795]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffdabdabc90 a2=0 a3=7ffdabdabc7c items=0 ppid=3077 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.954000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:32.960000 audit[4795]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4795 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:32.960000 audit[4795]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffdabdabc90 a2=0 a3=0 items=0 ppid=3077 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.960000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:32.988000 
audit: BPF prog-id=209 op=LOAD Jan 14 06:27:32.988000 audit[4801]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc54fb12d0 a2=98 a3=1fffffffffffffff items=0 ppid=4675 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.988000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 06:27:32.988000 audit: BPF prog-id=209 op=UNLOAD Jan 14 06:27:32.988000 audit[4801]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffc54fb12a0 a3=0 items=0 ppid=4675 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.988000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 06:27:32.989000 audit: BPF prog-id=210 op=LOAD Jan 14 06:27:32.989000 audit[4801]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc54fb11b0 a2=94 a3=3 items=0 ppid=4675 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.989000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 06:27:32.989000 audit: BPF prog-id=210 op=UNLOAD Jan 14 06:27:32.989000 audit[4801]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffc54fb11b0 a2=94 a3=3 items=0 ppid=4675 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.989000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 06:27:32.989000 audit: BPF prog-id=211 op=LOAD Jan 14 06:27:32.989000 audit[4801]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc54fb11f0 a2=94 a3=7ffc54fb13d0 items=0 ppid=4675 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.989000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 06:27:32.989000 audit: BPF prog-id=211 op=UNLOAD Jan 14 06:27:32.989000 audit[4801]: SYSCALL arch=c000003e 
syscall=3 success=yes exit=0 a0=3 a1=7ffc54fb11f0 a2=94 a3=7ffc54fb13d0 items=0 ppid=4675 pid=4801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.989000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 06:27:32.994000 audit: BPF prog-id=212 op=LOAD Jan 14 06:27:32.994000 audit[4802]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe669c5770 a2=98 a3=3 items=0 ppid=4675 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.994000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:27:32.994000 audit: BPF prog-id=212 op=UNLOAD Jan 14 06:27:32.994000 audit[4802]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe669c5740 a3=0 items=0 ppid=4675 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.994000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:27:32.995000 audit: BPF prog-id=213 op=LOAD Jan 14 06:27:32.995000 audit[4802]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe669c5560 a2=94 a3=54428f items=0 ppid=4675 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.995000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:27:32.995000 audit: BPF prog-id=213 op=UNLOAD Jan 14 06:27:32.995000 audit[4802]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe669c5560 a2=94 a3=54428f items=0 ppid=4675 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.995000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:27:32.995000 audit: BPF prog-id=214 op=LOAD Jan 14 06:27:32.995000 audit[4802]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe669c5590 a2=94 a3=2 items=0 ppid=4675 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.995000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:27:32.995000 audit: BPF prog-id=214 op=UNLOAD Jan 14 06:27:32.995000 audit[4802]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe669c5590 a2=0 a3=2 items=0 ppid=4675 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:32.995000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:27:33.239000 audit: BPF prog-id=215 op=LOAD Jan 14 06:27:33.239000 
audit[4802]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe669c5450 a2=94 a3=1 items=0 ppid=4675 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.239000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:27:33.239000 audit: BPF prog-id=215 op=UNLOAD Jan 14 06:27:33.239000 audit[4802]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe669c5450 a2=94 a3=1 items=0 ppid=4675 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.239000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:27:33.248624 containerd[1636]: time="2026-01-14T06:27:33.248529502Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:27:33.250096 containerd[1636]: time="2026-01-14T06:27:33.249996489Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 06:27:33.250096 containerd[1636]: time="2026-01-14T06:27:33.250056407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 06:27:33.250531 kubelet[2961]: E0114 06:27:33.250466 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:27:33.251128 kubelet[2961]: E0114 06:27:33.250592 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:27:33.251128 kubelet[2961]: E0114 06:27:33.251078 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d862720089aa46bd9f413e550c532138,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zl67l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78797b75b4-rh22t_calico-system(dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 06:27:33.252098 containerd[1636]: time="2026-01-14T06:27:33.252061584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 06:27:33.261000 audit: BPF prog-id=216 op=LOAD Jan 14 06:27:33.261000 audit[4802]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe669c5440 a2=94 a3=4 items=0 ppid=4675 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.261000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:27:33.261000 audit: BPF prog-id=216 op=UNLOAD Jan 14 06:27:33.261000 audit[4802]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe669c5440 a2=0 a3=4 items=0 ppid=4675 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.261000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:27:33.261000 audit: BPF prog-id=217 op=LOAD Jan 14 06:27:33.261000 audit[4802]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe669c52a0 a2=94 a3=5 items=0 ppid=4675 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.261000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:27:33.261000 audit: BPF prog-id=217 op=UNLOAD Jan 14 06:27:33.261000 audit[4802]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe669c52a0 a2=0 a3=5 
items=0 ppid=4675 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.261000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:27:33.261000 audit: BPF prog-id=218 op=LOAD Jan 14 06:27:33.261000 audit[4802]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe669c54c0 a2=94 a3=6 items=0 ppid=4675 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.261000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:27:33.261000 audit: BPF prog-id=218 op=UNLOAD Jan 14 06:27:33.261000 audit[4802]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe669c54c0 a2=0 a3=6 items=0 ppid=4675 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.261000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:27:33.262000 audit: BPF prog-id=219 op=LOAD Jan 14 06:27:33.262000 audit[4802]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe669c4c70 a2=94 a3=88 items=0 ppid=4675 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.262000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:27:33.262000 audit: BPF prog-id=220 op=LOAD Jan 14 06:27:33.262000 audit[4802]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe669c4af0 a2=94 a3=2 items=0 ppid=4675 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.262000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:27:33.262000 audit: BPF prog-id=220 op=UNLOAD Jan 14 06:27:33.262000 audit[4802]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe669c4b20 a2=0 a3=7ffe669c4c20 items=0 ppid=4675 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.262000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:27:33.263000 audit: BPF prog-id=219 op=UNLOAD Jan 14 06:27:33.263000 audit[4802]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=3f0d0d10 a2=0 a3=45e207c1d41296a1 items=0 ppid=4675 pid=4802 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.263000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 06:27:33.296000 audit: BPF prog-id=221 op=LOAD Jan 14 06:27:33.296000 audit[4806]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff726d02f0 a2=98 a3=1999999999999999 items=0 ppid=4675 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.296000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 06:27:33.296000 audit: BPF prog-id=221 op=UNLOAD Jan 14 06:27:33.296000 audit[4806]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7fff726d02c0 a3=0 items=0 ppid=4675 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.296000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 06:27:33.296000 audit: BPF prog-id=222 op=LOAD Jan 14 06:27:33.296000 audit[4806]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff726d01d0 a2=94 a3=ffff items=0 ppid=4675 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.296000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 06:27:33.297000 audit: BPF prog-id=222 op=UNLOAD Jan 14 06:27:33.297000 audit[4806]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff726d01d0 a2=94 a3=ffff items=0 ppid=4675 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.297000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 06:27:33.297000 audit: BPF prog-id=223 op=LOAD Jan 14 06:27:33.297000 audit[4806]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff726d0210 a2=94 a3=7fff726d03f0 items=0 ppid=4675 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.297000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 06:27:33.297000 audit: BPF prog-id=223 op=UNLOAD Jan 14 06:27:33.297000 audit[4806]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7fff726d0210 a2=94 a3=7fff726d03f0 items=0 ppid=4675 pid=4806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 06:27:33.297000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 06:27:33.404047 systemd-networkd[1557]: cali391cbfdce84: Gained IPv6LL Jan 14 06:27:33.453133 systemd-networkd[1557]: vxlan.calico: Link UP Jan 14 06:27:33.453147 systemd-networkd[1557]: vxlan.calico: Gained carrier Jan 14 06:27:33.513417 kernel: kauditd_printk_skb: 239 callbacks suppressed Jan 14 06:27:33.514127 kernel: audit: type=1334 audit(1768372053.504:675): prog-id=224 op=LOAD Jan 14 06:27:33.504000 audit: BPF prog-id=224 op=LOAD Jan 14 06:27:33.504000 audit[4829]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce6973ab0 a2=98 a3=20 items=0 ppid=4675 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.524440 kernel: audit: type=1300 audit(1768372053.504:675): arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce6973ab0 a2=98 a3=20 items=0 ppid=4675 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.504000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:27:33.506000 audit: BPF prog-id=224 op=UNLOAD Jan 14 06:27:33.530725 kernel: audit: type=1327 audit(1768372053.504:675): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:27:33.530820 kernel: audit: type=1334 audit(1768372053.506:676): prog-id=224 op=UNLOAD Jan 14 06:27:33.506000 audit[4829]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffce6973a80 a3=0 items=0 ppid=4675 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.544592 kernel: audit: type=1300 audit(1768372053.506:676): arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffce6973a80 a3=0 items=0 ppid=4675 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.557073 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Jan 14 06:27:33.557251 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Jan 14 06:27:33.557305 kernel: audit: backlog limit exceeded Jan 14 06:27:33.557414 kernel: audit: type=1327 audit(1768372053.506:676): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:27:33.506000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:27:33.562955 kernel: audit: type=1334 audit(1768372053.510:677): prog-id=225 op=LOAD Jan 14 06:27:33.510000 audit: BPF prog-id=225 op=LOAD Jan 14 06:27:33.510000 audit[4829]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce69738c0 a2=94 a3=54428f items=0 ppid=4675 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.510000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:27:33.510000 audit: BPF prog-id=225 op=UNLOAD Jan 14 06:27:33.510000 audit[4829]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffce69738c0 a2=94 a3=54428f items=0 ppid=4675 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.510000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:27:33.510000 audit: BPF prog-id=226 op=LOAD Jan 14 06:27:33.510000 audit[4829]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffce69738f0 a2=94 a3=2 items=0 ppid=4675 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.510000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:27:33.510000 audit: BPF prog-id=226 op=UNLOAD Jan 14 06:27:33.510000 audit[4829]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=7ffce69738f0 a2=0 a3=2 items=0 ppid=4675 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.510000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:27:33.510000 audit: BPF prog-id=227 op=LOAD Jan 14 06:27:33.510000 audit[4829]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffce69736a0 a2=94 a3=4 items=0 ppid=4675 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.510000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 
06:27:33.510000 audit: BPF prog-id=227 op=UNLOAD Jan 14 06:27:33.510000 audit[4829]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffce69736a0 a2=94 a3=4 items=0 ppid=4675 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.510000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:27:33.510000 audit: BPF prog-id=228 op=LOAD Jan 14 06:27:33.510000 audit[4829]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffce69737a0 a2=94 a3=7ffce6973920 items=0 ppid=4675 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.510000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:27:33.510000 audit: BPF prog-id=228 op=UNLOAD Jan 14 06:27:33.510000 audit[4829]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffce69737a0 a2=0 a3=7ffce6973920 items=0 ppid=4675 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.510000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:27:33.511000 audit: BPF prog-id=229 op=LOAD Jan 14 06:27:33.511000 audit[4829]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffce6972ed0 a2=94 a3=2 items=0 ppid=4675 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.511000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:27:33.514000 audit: BPF prog-id=229 op=UNLOAD Jan 14 06:27:33.514000 audit[4829]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffce6972ed0 a2=0 a3=2 items=0 ppid=4675 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.514000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:27:33.514000 audit: BPF prog-id=230 op=LOAD Jan 14 06:27:33.514000 audit[4829]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffce6972fd0 a2=94 a3=30 items=0 ppid=4675 pid=4829 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.514000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 06:27:33.547000 audit: BPF prog-id=231 op=LOAD Jan 14 06:27:33.547000 audit[4843]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe42443b10 a2=98 a3=0 items=0 ppid=4675 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.547000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:27:33.547000 audit: BPF prog-id=231 op=UNLOAD Jan 14 06:27:33.547000 audit[4843]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=3 a1=8 a2=7ffe42443ae0 a3=0 items=0 ppid=4675 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.547000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:27:33.548000 audit: BPF prog-id=232 op=LOAD Jan 14 06:27:33.548000 audit[4843]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe42443900 a2=94 a3=54428f items=0 ppid=4675 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.548000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:27:33.548000 audit: BPF prog-id=232 op=UNLOAD Jan 14 06:27:33.548000 audit[4843]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe42443900 a2=94 a3=54428f items=0 ppid=4675 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.548000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:27:33.548000 audit: BPF prog-id=233 op=LOAD Jan 14 06:27:33.548000 audit[4843]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe42443930 a2=94 a3=2 items=0 ppid=4675 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.548000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:27:33.684074 containerd[1636]: time="2026-01-14T06:27:33.683668683Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io 
Jan 14 06:27:33.685609 containerd[1636]: time="2026-01-14T06:27:33.685462742Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 06:27:33.685849 containerd[1636]: time="2026-01-14T06:27:33.685535600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 06:27:33.686410 kubelet[2961]: E0114 06:27:33.686263 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:27:33.686984 kubelet[2961]: E0114 06:27:33.686450 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:27:33.686984 kubelet[2961]: E0114 06:27:33.686858 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmpb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7g86f_calico-system(a5a91150-6e37-4bc7-abb4-c895c0d189ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 06:27:33.688887 containerd[1636]: time="2026-01-14T06:27:33.688857089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 06:27:33.689542 kubelet[2961]: E0114 06:27:33.689380 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:27:33.777294 kubelet[2961]: E0114 06:27:33.775652 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-spmtb" podUID="e3bb8bbd-f33f-49cb-94d5-84718a161600" Jan 14 06:27:33.779885 kubelet[2961]: E0114 06:27:33.779448 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:27:33.893000 audit: BPF prog-id=234 op=LOAD Jan 14 06:27:33.893000 audit[4843]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe424437f0 a2=94 a3=1 items=0 ppid=4675 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.893000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:27:33.894000 audit: BPF prog-id=234 op=UNLOAD Jan 14 06:27:33.894000 audit[4843]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=4 a1=7ffe424437f0 a2=94 a3=1 items=0 ppid=4675 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.894000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:27:33.910000 audit: BPF prog-id=235 op=LOAD Jan 14 06:27:33.910000 audit[4843]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe424437e0 a2=94 a3=4 items=0 ppid=4675 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.910000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:27:33.910000 audit: BPF prog-id=235 op=UNLOAD Jan 14 06:27:33.910000 audit[4843]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe424437e0 a2=0 a3=4 items=0 ppid=4675 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.910000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:27:33.911000 audit: BPF prog-id=236 op=LOAD Jan 14 06:27:33.911000 audit[4843]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe42443640 a2=94 a3=5 items=0 ppid=4675 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.911000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:27:33.911000 audit: BPF prog-id=236 op=UNLOAD Jan 14 06:27:33.911000 audit[4843]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=6 a1=7ffe42443640 a2=0 a3=5 items=0 ppid=4675 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.911000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:27:33.911000 audit: BPF prog-id=237 op=LOAD Jan 14 06:27:33.911000 audit[4843]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe42443860 a2=94 a3=6 items=0 ppid=4675 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.911000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:27:33.911000 audit: BPF prog-id=237 op=UNLOAD Jan 14 06:27:33.911000 audit[4843]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=7ffe42443860 a2=0 a3=6 
items=0 ppid=4675 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.911000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:27:33.911000 audit: BPF prog-id=238 op=LOAD Jan 14 06:27:33.911000 audit[4843]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffe42443010 a2=94 a3=88 items=0 ppid=4675 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.911000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:27:33.913000 audit: BPF prog-id=239 op=LOAD Jan 14 06:27:33.913000 audit[4843]: SYSCALL arch=c000003e syscall=321 success=yes exit=7 a0=5 a1=7ffe42442e90 a2=94 a3=2 items=0 ppid=4675 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.913000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:27:33.913000 audit: BPF prog-id=239 op=UNLOAD Jan 14 06:27:33.913000 audit[4843]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=7 a1=7ffe42442ec0 a2=0 a3=7ffe42442fc0 items=0 ppid=4675 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.913000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:27:33.913000 audit: BPF prog-id=238 op=UNLOAD Jan 14 06:27:33.913000 audit[4843]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=5 a1=1e7b1d10 a2=0 a3=d1a04963f252483c items=0 ppid=4675 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.913000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 06:27:33.924000 audit: BPF prog-id=230 op=UNLOAD Jan 14 06:27:33.924000 audit[4675]: SYSCALL arch=c000003e syscall=263 success=yes exit=0 a0=ffffffffffffff9c a1=c0011e3980 a2=0 a3=0 items=0 ppid=4592 pid=4675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.924000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 06:27:33.991000 audit[4864]: NETFILTER_CFG table=filter:125 family=2 entries=17 op=nft_register_rule pid=4864 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:33.991000 audit[4864]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff63bcbe00 a2=0 a3=7fff63bcbdec items=0 ppid=3077 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.991000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:33.996000 audit[4864]: NETFILTER_CFG table=nat:126 family=2 entries=35 op=nft_register_chain pid=4864 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:33.996000 audit[4864]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7fff63bcbe00 a2=0 a3=7fff63bcbdec items=0 ppid=3077 pid=4864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:33.996000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:34.057000 audit[4873]: NETFILTER_CFG table=raw:127 family=2 entries=21 op=nft_register_chain pid=4873 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:27:34.057000 audit[4873]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd1cbb1f50 a2=0 a3=7ffd1cbb1f3c items=0 ppid=4675 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:34.057000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:27:34.058000 audit[4877]: NETFILTER_CFG table=mangle:128 family=2 entries=16 op=nft_register_chain pid=4877 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:27:34.058000 audit[4877]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffd4d615650 a2=0 a3=7ffd4d61563c items=0 ppid=4675 pid=4877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:34.058000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:27:34.061000 audit[4880]: NETFILTER_CFG table=nat:129 family=2 entries=15 op=nft_register_chain pid=4880 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:27:34.061000 audit[4880]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe3dfb32c0 a2=0 a3=7ffe3dfb32ac items=0 ppid=4675 pid=4880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:34.061000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:27:34.077619 containerd[1636]: time="2026-01-14T06:27:34.077531631Z" level=info msg="fetch 
failed after status: 404 Not Found" host=ghcr.io Jan 14 06:27:34.078924 containerd[1636]: time="2026-01-14T06:27:34.078821501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 06:27:34.079186 containerd[1636]: time="2026-01-14T06:27:34.078846856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 06:27:34.079295 kubelet[2961]: E0114 06:27:34.079228 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:27:34.079393 kubelet[2961]: E0114 06:27:34.079332 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:27:34.080034 kubelet[2961]: E0114 06:27:34.079617 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zl67l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78797b75b4-rh22t_calico-system(dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 06:27:34.081379 kubelet[2961]: E0114 06:27:34.081308 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78797b75b4-rh22t" podUID="dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd" Jan 14 06:27:34.068000 audit[4875]: NETFILTER_CFG table=filter:130 family=2 entries=226 op=nft_register_chain pid=4875 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:27:34.068000 audit[4875]: SYSCALL arch=c000003e syscall=46 success=yes exit=131428 a0=3 a1=7ffd4bd6bb70 a2=0 a3=7ffd4bd6bb5c items=0 ppid=4675 pid=4875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:34.068000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:27:34.555937 systemd-networkd[1557]: vxlan.calico: Gained IPv6LL Jan 14 06:27:34.780909 kubelet[2961]: E0114 06:27:34.780735 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78797b75b4-rh22t" podUID="dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd" Jan 14 06:27:35.036000 audit[4892]: NETFILTER_CFG table=filter:131 family=2 entries=14 op=nft_register_rule pid=4892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:35.036000 audit[4892]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe6fcb4930 a2=0 a3=7ffe6fcb491c items=0 ppid=3077 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:35.036000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:35.042000 audit[4892]: NETFILTER_CFG table=nat:132 family=2 entries=20 op=nft_register_rule pid=4892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:35.042000 audit[4892]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 
a1=7ffe6fcb4930 a2=0 a3=7ffe6fcb491c items=0 ppid=3077 pid=4892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:35.042000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:37.285157 containerd[1636]: time="2026-01-14T06:27:37.284282757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545d979dcd-jdf9d,Uid:b2f6e747-eff5-4e8e-b242-bf44361cfc2b,Namespace:calico-apiserver,Attempt:0,}" Jan 14 06:27:37.493998 systemd-networkd[1557]: cali3197a5d6184: Link UP Jan 14 06:27:37.495617 systemd-networkd[1557]: cali3197a5d6184: Gained carrier Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.368 [INFO][4897] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--jdf9d-eth0 calico-apiserver-545d979dcd- calico-apiserver b2f6e747-eff5-4e8e-b242-bf44361cfc2b 878 0 2026-01-14 06:26:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:545d979dcd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-i1yja.gb1.brightbox.com calico-apiserver-545d979dcd-jdf9d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3197a5d6184 [] [] }} ContainerID="949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" Namespace="calico-apiserver" Pod="calico-apiserver-545d979dcd-jdf9d" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--jdf9d-" Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.369 [INFO][4897] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" Namespace="calico-apiserver" Pod="calico-apiserver-545d979dcd-jdf9d" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--jdf9d-eth0" Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.422 [INFO][4908] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" HandleID="k8s-pod-network.949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" Workload="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--jdf9d-eth0" Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.422 [INFO][4908] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" HandleID="k8s-pod-network.949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" Workload="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--jdf9d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-i1yja.gb1.brightbox.com", "pod":"calico-apiserver-545d979dcd-jdf9d", "timestamp":"2026-01-14 06:27:37.422398148 +0000 UTC"}, Hostname:"srv-i1yja.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} 
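The audit PROCTITLE records above carry the restoring process's command line as a NUL-separated hex string. A minimal decoding sketch (plain Python, nothing audit-specific; the hex constant below is copied from the iptables-restore record above):

    # decode_proctitle.py - turn an audit PROCTITLE hex value back into a command line.
    # The value is the process's argv with NUL bytes between arguments.
    PROCTITLE = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
                 "002D2D6E6F666C757368002D2D636F756E74657273")

    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)      # hex -> raw bytes
        args = raw.split(b"\x00")           # NUL separates the argv entries
        return " ".join(a.decode("utf-8", errors="replace") for a in args)

    if __name__ == "__main__":
        print(decode_proctitle(PROCTITLE))
        # prints: iptables-restore -w 5 -W 100000 --noflush --counters

The same decoding applied to the iptables-nft-restore records above yields "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000".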
Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.422 [INFO][4908] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.423 [INFO][4908] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.423 [INFO][4908] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i1yja.gb1.brightbox.com' Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.441 [INFO][4908] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.451 [INFO][4908] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.458 [INFO][4908] ipam/ipam.go 511: Trying affinity for 192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.461 [INFO][4908] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.465 [INFO][4908] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.465 [INFO][4908] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.468 [INFO][4908] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0 Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.475 [INFO][4908] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.483 [INFO][4908] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.198/26] block=192.168.12.192/26 handle="k8s-pod-network.949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.483 [INFO][4908] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.198/26] handle="k8s-pod-network.949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.483 [INFO][4908] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
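The IPAM entries above walk through Calico's host-affine block handling for this node: the /26 affinity is confirmed, the block is loaded, and a single address is claimed from it. As a quick illustration of the block arithmetic, a sketch using only Python's standard ipaddress module (not Calico code):

    # Block math behind the Calico IPAM entries above.
    import ipaddress

    block = ipaddress.ip_network("192.168.12.192/26")    # host-affine block from the log
    claimed = ipaddress.ip_address("192.168.12.198")     # address claimed for the apiserver pod

    print(block.network_address, "-", block.broadcast_address, block.num_addresses)  # 64 addresses
    print(claimed in block)                                                           # True
    # The endpoint itself is written with a /32, i.e. a single-address network:
    print(ipaddress.ip_network("192.168.12.198/32").num_addresses)                    # 1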
Jan 14 06:27:37.514061 containerd[1636]: 2026-01-14 06:27:37.483 [INFO][4908] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.198/26] IPv6=[] ContainerID="949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" HandleID="k8s-pod-network.949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" Workload="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--jdf9d-eth0" Jan 14 06:27:37.517057 containerd[1636]: 2026-01-14 06:27:37.487 [INFO][4897] cni-plugin/k8s.go 418: Populated endpoint ContainerID="949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" Namespace="calico-apiserver" Pod="calico-apiserver-545d979dcd-jdf9d" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--jdf9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--jdf9d-eth0", GenerateName:"calico-apiserver-545d979dcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"b2f6e747-eff5-4e8e-b242-bf44361cfc2b", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 26, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"545d979dcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i1yja.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-545d979dcd-jdf9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3197a5d6184", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:27:37.517057 containerd[1636]: 2026-01-14 06:27:37.487 [INFO][4897] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.198/32] ContainerID="949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" Namespace="calico-apiserver" Pod="calico-apiserver-545d979dcd-jdf9d" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--jdf9d-eth0" Jan 14 06:27:37.517057 containerd[1636]: 2026-01-14 06:27:37.488 [INFO][4897] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3197a5d6184 ContainerID="949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" Namespace="calico-apiserver" Pod="calico-apiserver-545d979dcd-jdf9d" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--jdf9d-eth0" Jan 14 06:27:37.517057 containerd[1636]: 2026-01-14 06:27:37.491 [INFO][4897] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" Namespace="calico-apiserver" Pod="calico-apiserver-545d979dcd-jdf9d" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--jdf9d-eth0" Jan 14 06:27:37.517057 containerd[1636]: 2026-01-14 
06:27:37.491 [INFO][4897] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" Namespace="calico-apiserver" Pod="calico-apiserver-545d979dcd-jdf9d" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--jdf9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--jdf9d-eth0", GenerateName:"calico-apiserver-545d979dcd-", Namespace:"calico-apiserver", SelfLink:"", UID:"b2f6e747-eff5-4e8e-b242-bf44361cfc2b", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 26, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"545d979dcd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i1yja.gb1.brightbox.com", ContainerID:"949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0", Pod:"calico-apiserver-545d979dcd-jdf9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3197a5d6184", MAC:"62:06:38:5a:a1:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:27:37.517057 containerd[1636]: 2026-01-14 06:27:37.506 [INFO][4897] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" Namespace="calico-apiserver" Pod="calico-apiserver-545d979dcd-jdf9d" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-calico--apiserver--545d979dcd--jdf9d-eth0" Jan 14 06:27:37.554000 audit[4927]: NETFILTER_CFG table=filter:133 family=2 entries=59 op=nft_register_chain pid=4927 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:27:37.554000 audit[4927]: SYSCALL arch=c000003e syscall=46 success=yes exit=29492 a0=3 a1=7ffcae97cf30 a2=0 a3=7ffcae97cf1c items=0 ppid=4675 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:37.554000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:27:37.558469 containerd[1636]: time="2026-01-14T06:27:37.558368493Z" level=info msg="connecting to shim 949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0" address="unix:///run/containerd/s/5ccf541953ab39bec8124f2d02295d761ab9935ae8ab5007a9a9f1e08878adf9" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:27:37.604005 systemd[1]: Started cri-containerd-949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0.scope - libcontainer container 
949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0. Jan 14 06:27:37.629000 audit: BPF prog-id=240 op=LOAD Jan 14 06:27:37.630000 audit: BPF prog-id=241 op=LOAD Jan 14 06:27:37.630000 audit[4945]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138238 a2=98 a3=0 items=0 ppid=4933 pid=4945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:37.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934396330373864613237303362633361633831666635333535323663 Jan 14 06:27:37.630000 audit: BPF prog-id=241 op=UNLOAD Jan 14 06:27:37.630000 audit[4945]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4933 pid=4945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:37.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934396330373864613237303362633361633831666635333535323663 Jan 14 06:27:37.630000 audit: BPF prog-id=242 op=LOAD Jan 14 06:27:37.630000 audit[4945]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000138488 a2=98 a3=0 items=0 ppid=4933 pid=4945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:37.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934396330373864613237303362633361633831666635333535323663 Jan 14 06:27:37.630000 audit: BPF prog-id=243 op=LOAD Jan 14 06:27:37.630000 audit[4945]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000138218 a2=98 a3=0 items=0 ppid=4933 pid=4945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:37.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934396330373864613237303362633361633831666635333535323663 Jan 14 06:27:37.630000 audit: BPF prog-id=243 op=UNLOAD Jan 14 06:27:37.630000 audit[4945]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4933 pid=4945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:37.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934396330373864613237303362633361633831666635333535323663 Jan 14 06:27:37.630000 audit: BPF 
prog-id=242 op=UNLOAD Jan 14 06:27:37.630000 audit[4945]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4933 pid=4945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:37.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934396330373864613237303362633361633831666635333535323663 Jan 14 06:27:37.630000 audit: BPF prog-id=244 op=LOAD Jan 14 06:27:37.630000 audit[4945]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001386e8 a2=98 a3=0 items=0 ppid=4933 pid=4945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:37.630000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934396330373864613237303362633361633831666635333535323663 Jan 14 06:27:37.690371 containerd[1636]: time="2026-01-14T06:27:37.690327011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-545d979dcd-jdf9d,Uid:b2f6e747-eff5-4e8e-b242-bf44361cfc2b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"949c078da2703bc3ac81ff535526c770fa90c81455080cadfb06fe888af238b0\"" Jan 14 06:27:37.694270 containerd[1636]: time="2026-01-14T06:27:37.694234607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:27:38.028752 containerd[1636]: time="2026-01-14T06:27:38.028456468Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:27:38.030402 containerd[1636]: time="2026-01-14T06:27:38.030328336Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:27:38.030500 containerd[1636]: time="2026-01-14T06:27:38.030437977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:27:38.030790 kubelet[2961]: E0114 06:27:38.030739 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:27:38.031936 kubelet[2961]: E0114 06:27:38.031340 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:27:38.031936 kubelet[2961]: E0114 06:27:38.031555 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lv9nq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545d979dcd-jdf9d_calico-apiserver(b2f6e747-eff5-4e8e-b242-bf44361cfc2b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:27:38.033109 kubelet[2961]: E0114 06:27:38.033046 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-jdf9d" podUID="b2f6e747-eff5-4e8e-b242-bf44361cfc2b" Jan 14 06:27:38.285013 containerd[1636]: time="2026-01-14T06:27:38.284519433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dj98n,Uid:ad3b96aa-084d-4569-8a6a-059f7da03c00,Namespace:kube-system,Attempt:0,}" Jan 14 06:27:38.553197 systemd-networkd[1557]: cali1f4ea17a85f: Link UP Jan 14 06:27:38.554326 systemd-networkd[1557]: cali1f4ea17a85f: Gained carrier Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.370 [INFO][4971] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dj98n-eth0 coredns-674b8bbfcf- kube-system ad3b96aa-084d-4569-8a6a-059f7da03c00 877 0 2026-01-14 06:26:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-i1yja.gb1.brightbox.com coredns-674b8bbfcf-dj98n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1f4ea17a85f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" Namespace="kube-system" Pod="coredns-674b8bbfcf-dj98n" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dj98n-" Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.370 [INFO][4971] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" Namespace="kube-system" Pod="coredns-674b8bbfcf-dj98n" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dj98n-eth0" Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.424 [INFO][4982] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" HandleID="k8s-pod-network.529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" Workload="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dj98n-eth0" Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.424 [INFO][4982] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" HandleID="k8s-pod-network.529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" Workload="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dj98n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5940), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-i1yja.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-dj98n", "timestamp":"2026-01-14 06:27:38.424225217 +0000 UTC"}, Hostname:"srv-i1yja.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.424 [INFO][4982] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.424 [INFO][4982] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.424 [INFO][4982] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i1yja.gb1.brightbox.com' Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.462 [INFO][4982] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.507 [INFO][4982] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.515 [INFO][4982] ipam/ipam.go 511: Trying affinity for 192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.519 [INFO][4982] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.522 [INFO][4982] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.522 [INFO][4982] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.525 [INFO][4982] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803 Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.532 [INFO][4982] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.542 [INFO][4982] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.199/26] block=192.168.12.192/26 handle="k8s-pod-network.529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.542 [INFO][4982] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.199/26] handle="k8s-pod-network.529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.542 [INFO][4982] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
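This is the second complete IPAM round on the node (192.168.12.198 for the calico-apiserver pod above, 192.168.12.199 for coredns here). A hedged sketch for pulling those results out of a saved journal dump; "journal.txt" is a placeholder for wherever the journal text has been written, and the regular expression only targets the "Successfully claimed IPs" lines seen in this log:

    # claimed_ips.py - list the addresses Calico IPAM claimed, per the saved journal dump.
    import re

    CLAIM = re.compile(r'Successfully claimed IPs: \[([\d./]+)\].*?host="([^"]+)"')

    with open("journal.txt", encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = CLAIM.search(line)
            if m:
                ip, host = m.groups()
                print(f"{host}: {ip}")
    # Against the entries above this prints lines such as:
    #   srv-i1yja.gb1.brightbox.com: 192.168.12.198/26
    #   srv-i1yja.gb1.brightbox.com: 192.168.12.199/26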
Jan 14 06:27:38.585589 containerd[1636]: 2026-01-14 06:27:38.542 [INFO][4982] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.199/26] IPv6=[] ContainerID="529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" HandleID="k8s-pod-network.529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" Workload="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dj98n-eth0" Jan 14 06:27:38.587241 containerd[1636]: 2026-01-14 06:27:38.547 [INFO][4971] cni-plugin/k8s.go 418: Populated endpoint ContainerID="529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" Namespace="kube-system" Pod="coredns-674b8bbfcf-dj98n" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dj98n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dj98n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ad3b96aa-084d-4569-8a6a-059f7da03c00", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 26, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i1yja.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-dj98n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f4ea17a85f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:27:38.587241 containerd[1636]: 2026-01-14 06:27:38.547 [INFO][4971] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.199/32] ContainerID="529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" Namespace="kube-system" Pod="coredns-674b8bbfcf-dj98n" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dj98n-eth0" Jan 14 06:27:38.587241 containerd[1636]: 2026-01-14 06:27:38.547 [INFO][4971] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f4ea17a85f ContainerID="529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" Namespace="kube-system" Pod="coredns-674b8bbfcf-dj98n" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dj98n-eth0" Jan 14 06:27:38.587241 containerd[1636]: 2026-01-14 06:27:38.555 [INFO][4971] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-dj98n" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dj98n-eth0" Jan 14 06:27:38.587241 containerd[1636]: 2026-01-14 06:27:38.555 [INFO][4971] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" Namespace="kube-system" Pod="coredns-674b8bbfcf-dj98n" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dj98n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dj98n-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ad3b96aa-084d-4569-8a6a-059f7da03c00", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 26, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i1yja.gb1.brightbox.com", ContainerID:"529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803", Pod:"coredns-674b8bbfcf-dj98n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1f4ea17a85f", MAC:"82:50:b5:50:00:59", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:27:38.587241 containerd[1636]: 2026-01-14 06:27:38.580 [INFO][4971] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" Namespace="kube-system" Pod="coredns-674b8bbfcf-dj98n" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-coredns--674b8bbfcf--dj98n-eth0" Jan 14 06:27:38.632000 audit[5004]: NETFILTER_CFG table=filter:134 family=2 entries=54 op=nft_register_chain pid=5004 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:27:38.635047 containerd[1636]: time="2026-01-14T06:27:38.634243203Z" level=info msg="connecting to shim 529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803" address="unix:///run/containerd/s/933d266d64da55d7251f68ede1e7f93bf7f568e4daa451d072c44810d52e2ee1" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:27:38.635442 kernel: kauditd_printk_skb: 135 callbacks suppressed Jan 14 06:27:38.635538 kernel: audit: type=1325 audit(1768372058.632:723): table=filter:134 family=2 entries=54 op=nft_register_chain pid=5004 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:27:38.632000 audit[5004]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=25556 a0=3 a1=7fffa7e2ef60 a2=0 a3=7fffa7e2ef4c items=0 ppid=4675 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.641064 kernel: audit: type=1300 audit(1768372058.632:723): arch=c000003e syscall=46 success=yes exit=25556 a0=3 a1=7fffa7e2ef60 a2=0 a3=7fffa7e2ef4c items=0 ppid=4675 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.632000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:27:38.646134 kernel: audit: type=1327 audit(1768372058.632:723): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:27:38.697859 systemd[1]: Started cri-containerd-529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803.scope - libcontainer container 529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803. Jan 14 06:27:38.723000 audit: BPF prog-id=245 op=LOAD Jan 14 06:27:38.726634 kernel: audit: type=1334 audit(1768372058.723:724): prog-id=245 op=LOAD Jan 14 06:27:38.726725 kernel: audit: type=1334 audit(1768372058.725:725): prog-id=246 op=LOAD Jan 14 06:27:38.725000 audit: BPF prog-id=246 op=LOAD Jan 14 06:27:38.725000 audit[5021]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5010 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.729789 kernel: audit: type=1300 audit(1768372058.725:725): arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5010 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.725000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532396436323664636137326335366434376138643834646632396266 Jan 14 06:27:38.739618 kernel: audit: type=1327 audit(1768372058.725:725): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532396436323664636137326335366434376138643834646632396266 Jan 14 06:27:38.725000 audit: BPF prog-id=246 op=UNLOAD Jan 14 06:27:38.742674 kernel: audit: type=1334 audit(1768372058.725:726): prog-id=246 op=UNLOAD Jan 14 06:27:38.725000 audit[5021]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5010 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.725000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532396436323664636137326335366434376138643834646632396266 Jan 14 06:27:38.750210 kernel: audit: type=1300 audit(1768372058.725:726): arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5010 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.750304 kernel: audit: type=1327 audit(1768372058.725:726): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532396436323664636137326335366434376138643834646632396266 Jan 14 06:27:38.729000 audit: BPF prog-id=247 op=LOAD Jan 14 06:27:38.729000 audit[5021]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5010 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532396436323664636137326335366434376138643834646632396266 Jan 14 06:27:38.729000 audit: BPF prog-id=248 op=LOAD Jan 14 06:27:38.729000 audit[5021]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5010 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532396436323664636137326335366434376138643834646632396266 Jan 14 06:27:38.729000 audit: BPF prog-id=248 op=UNLOAD Jan 14 06:27:38.729000 audit[5021]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5010 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532396436323664636137326335366434376138643834646632396266 Jan 14 06:27:38.729000 audit: BPF prog-id=247 op=UNLOAD Jan 14 06:27:38.729000 audit[5021]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5010 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.729000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532396436323664636137326335366434376138643834646632396266 Jan 14 06:27:38.729000 audit: BPF prog-id=249 op=LOAD Jan 14 06:27:38.729000 audit[5021]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5010 pid=5021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532396436323664636137326335366434376138643834646632396266 Jan 14 06:27:38.796708 kubelet[2961]: E0114 06:27:38.795126 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-jdf9d" podUID="b2f6e747-eff5-4e8e-b242-bf44361cfc2b" Jan 14 06:27:38.821471 containerd[1636]: time="2026-01-14T06:27:38.821287767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dj98n,Uid:ad3b96aa-084d-4569-8a6a-059f7da03c00,Namespace:kube-system,Attempt:0,} returns sandbox id \"529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803\"" Jan 14 06:27:38.832771 containerd[1636]: time="2026-01-14T06:27:38.832706008Z" level=info msg="CreateContainer within sandbox \"529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 06:27:38.854159 containerd[1636]: time="2026-01-14T06:27:38.854040695Z" level=info msg="Container fa090cd55a45317b2950ba7577e15f0dd6d43724ccd27a26ec2864dc310fa2f3: CDI devices from CRI Config.CDIDevices: []" Jan 14 06:27:38.864834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1919374949.mount: Deactivated successfully. Jan 14 06:27:38.870672 containerd[1636]: time="2026-01-14T06:27:38.869811199Z" level=info msg="CreateContainer within sandbox \"529d626dca72c56d47a8d84df29bf57c7a2b553bc4b2ae5ba1a1b31e162af803\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fa090cd55a45317b2950ba7577e15f0dd6d43724ccd27a26ec2864dc310fa2f3\"" Jan 14 06:27:38.872121 containerd[1636]: time="2026-01-14T06:27:38.871787801Z" level=info msg="StartContainer for \"fa090cd55a45317b2950ba7577e15f0dd6d43724ccd27a26ec2864dc310fa2f3\"" Jan 14 06:27:38.873627 containerd[1636]: time="2026-01-14T06:27:38.873576286Z" level=info msg="connecting to shim fa090cd55a45317b2950ba7577e15f0dd6d43724ccd27a26ec2864dc310fa2f3" address="unix:///run/containerd/s/933d266d64da55d7251f68ede1e7f93bf7f568e4daa451d072c44810d52e2ee1" protocol=ttrpc version=3 Jan 14 06:27:38.907094 systemd[1]: Started cri-containerd-fa090cd55a45317b2950ba7577e15f0dd6d43724ccd27a26ec2864dc310fa2f3.scope - libcontainer container fa090cd55a45317b2950ba7577e15f0dd6d43724ccd27a26ec2864dc310fa2f3. 
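The coredns WorkloadEndpoint dumped above lists its ports as hex literals (Port:0x35, Port:0x23c1), which can look odd next to the usual service definitions. A quick reference sketch converting them back to the familiar decimal values:

    # The WorkloadEndpointPort values in the Go struct dump above are hex literals.
    ports = {"dns (UDP)": 0x35, "dns-tcp (TCP)": 0x35, "metrics (TCP)": 0x23c1}
    for name, value in ports.items():
        print(f"{name}: {value}")
    # dns (UDP): 53, dns-tcp (TCP): 53, metrics (TCP): 9153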
Jan 14 06:27:38.908000 audit[5059]: NETFILTER_CFG table=filter:135 family=2 entries=14 op=nft_register_rule pid=5059 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:38.908000 audit[5059]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe9f90d340 a2=0 a3=7ffe9f90d32c items=0 ppid=3077 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.908000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:38.913000 audit[5059]: NETFILTER_CFG table=nat:136 family=2 entries=20 op=nft_register_rule pid=5059 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:38.913000 audit[5059]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe9f90d340 a2=0 a3=7ffe9f90d32c items=0 ppid=3077 pid=5059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.913000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:38.934000 audit: BPF prog-id=250 op=LOAD Jan 14 06:27:38.935000 audit: BPF prog-id=251 op=LOAD Jan 14 06:27:38.935000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130238 a2=98 a3=0 items=0 ppid=5010 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661303930636435356134353331376232393530626137353737653135 Jan 14 06:27:38.935000 audit: BPF prog-id=251 op=UNLOAD Jan 14 06:27:38.935000 audit[5047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5010 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661303930636435356134353331376232393530626137353737653135 Jan 14 06:27:38.936000 audit: BPF prog-id=252 op=LOAD Jan 14 06:27:38.936000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c000130488 a2=98 a3=0 items=0 ppid=5010 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661303930636435356134353331376232393530626137353737653135 Jan 14 06:27:38.936000 audit: BPF prog-id=253 op=LOAD Jan 14 06:27:38.936000 
audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=23 a0=5 a1=c000130218 a2=98 a3=0 items=0 ppid=5010 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661303930636435356134353331376232393530626137353737653135 Jan 14 06:27:38.936000 audit: BPF prog-id=253 op=UNLOAD Jan 14 06:27:38.936000 audit[5047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5010 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661303930636435356134353331376232393530626137353737653135 Jan 14 06:27:38.936000 audit: BPF prog-id=252 op=UNLOAD Jan 14 06:27:38.936000 audit[5047]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5010 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661303930636435356134353331376232393530626137353737653135 Jan 14 06:27:38.936000 audit: BPF prog-id=254 op=LOAD Jan 14 06:27:38.936000 audit[5047]: SYSCALL arch=c000003e syscall=321 success=yes exit=21 a0=5 a1=c0001306e8 a2=98 a3=0 items=0 ppid=5010 pid=5047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:38.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661303930636435356134353331376232393530626137353737653135 Jan 14 06:27:38.966692 containerd[1636]: time="2026-01-14T06:27:38.966538811Z" level=info msg="StartContainer for \"fa090cd55a45317b2950ba7577e15f0dd6d43724ccd27a26ec2864dc310fa2f3\" returns successfully" Jan 14 06:27:39.227834 systemd-networkd[1557]: cali3197a5d6184: Gained IPv6LL Jan 14 06:27:39.848322 kubelet[2961]: I0114 06:27:39.847947 2961 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dj98n" podStartSLOduration=64.847891772 podStartE2EDuration="1m4.847891772s" podCreationTimestamp="2026-01-14 06:26:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 06:27:39.836176138 +0000 UTC m=+69.821935682" watchObservedRunningTime="2026-01-14 06:27:39.847891772 +0000 UTC m=+69.833651309" Jan 14 06:27:39.891000 audit[5085]: NETFILTER_CFG table=filter:137 
family=2 entries=14 op=nft_register_rule pid=5085 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:39.891000 audit[5085]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffc7bac0b60 a2=0 a3=7ffc7bac0b4c items=0 ppid=3077 pid=5085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:39.891000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:39.898000 audit[5085]: NETFILTER_CFG table=nat:138 family=2 entries=44 op=nft_register_rule pid=5085 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:39.898000 audit[5085]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc7bac0b60 a2=0 a3=7ffc7bac0b4c items=0 ppid=3077 pid=5085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:39.898000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:39.932402 systemd-networkd[1557]: cali1f4ea17a85f: Gained IPv6LL Jan 14 06:27:39.929000 audit[5087]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=5087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:39.929000 audit[5087]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe34fc29c0 a2=0 a3=7ffe34fc29ac items=0 ppid=3077 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:39.929000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:39.949000 audit[5087]: NETFILTER_CFG table=nat:140 family=2 entries=56 op=nft_register_chain pid=5087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:39.949000 audit[5087]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe34fc29c0 a2=0 a3=7ffe34fc29ac items=0 ppid=3077 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:39.949000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:43.284792 containerd[1636]: time="2026-01-14T06:27:43.284609907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-brmbk,Uid:c58c893f-2e4d-4df6-aa40-06b84b7b6bbc,Namespace:calico-system,Attempt:0,}" Jan 14 06:27:43.508253 systemd-networkd[1557]: cali7c46b7acf86: Link UP Jan 14 06:27:43.509882 systemd-networkd[1557]: cali7c46b7acf86: Gained carrier Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.359 [INFO][5092] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--i1yja.gb1.brightbox.com-k8s-goldmane--666569f655--brmbk-eth0 goldmane-666569f655- calico-system c58c893f-2e4d-4df6-aa40-06b84b7b6bbc 874 0 2026-01-14 06:26:52 +0000 UTC 
map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-i1yja.gb1.brightbox.com goldmane-666569f655-brmbk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7c46b7acf86 [] [] }} ContainerID="0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" Namespace="calico-system" Pod="goldmane-666569f655-brmbk" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-goldmane--666569f655--brmbk-" Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.359 [INFO][5092] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" Namespace="calico-system" Pod="goldmane-666569f655-brmbk" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-goldmane--666569f655--brmbk-eth0" Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.414 [INFO][5104] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" HandleID="k8s-pod-network.0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" Workload="srv--i1yja.gb1.brightbox.com-k8s-goldmane--666569f655--brmbk-eth0" Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.414 [INFO][5104] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" HandleID="k8s-pod-network.0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" Workload="srv--i1yja.gb1.brightbox.com-k8s-goldmane--666569f655--brmbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024ef50), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-i1yja.gb1.brightbox.com", "pod":"goldmane-666569f655-brmbk", "timestamp":"2026-01-14 06:27:43.414017592 +0000 UTC"}, Hostname:"srv-i1yja.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.414 [INFO][5104] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.414 [INFO][5104] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.414 [INFO][5104] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-i1yja.gb1.brightbox.com' Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.442 [INFO][5104] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.466 [INFO][5104] ipam/ipam.go 394: Looking up existing affinities for host host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.472 [INFO][5104] ipam/ipam.go 511: Trying affinity for 192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.475 [INFO][5104] ipam/ipam.go 158: Attempting to load block cidr=192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.479 [INFO][5104] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.12.192/26 host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.479 [INFO][5104] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.12.192/26 handle="k8s-pod-network.0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.481 [INFO][5104] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052 Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.487 [INFO][5104] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.12.192/26 handle="k8s-pod-network.0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.496 [INFO][5104] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.12.200/26] block=192.168.12.192/26 handle="k8s-pod-network.0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.497 [INFO][5104] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.12.200/26] handle="k8s-pod-network.0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" host="srv-i1yja.gb1.brightbox.com" Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.497 [INFO][5104] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
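The IPAM records above show the node's affinity block 192.168.12.192/26 and the address 192.168.12.200 being claimed from it for the goldmane pod. A minimal standard-library sketch (values copied from the log, nothing else assumed) that confirms the claimed address sits inside that affinity block:

```python
import ipaddress

# CIDR and address copied from the Calico IPAM records above.
affinity_block = ipaddress.ip_network("192.168.12.192/26")
claimed_ip = ipaddress.ip_address("192.168.12.200")

print(affinity_block.num_addresses)      # 64 addresses per /26 block
print(claimed_ip in affinity_block)      # True -> .200 is inside .192/26
print(affinity_block.broadcast_address)  # 192.168.12.255, upper end of the block
```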
Jan 14 06:27:43.540609 containerd[1636]: 2026-01-14 06:27:43.497 [INFO][5104] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.12.200/26] IPv6=[] ContainerID="0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" HandleID="k8s-pod-network.0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" Workload="srv--i1yja.gb1.brightbox.com-k8s-goldmane--666569f655--brmbk-eth0" Jan 14 06:27:43.541553 containerd[1636]: 2026-01-14 06:27:43.502 [INFO][5092] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" Namespace="calico-system" Pod="goldmane-666569f655-brmbk" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-goldmane--666569f655--brmbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i1yja.gb1.brightbox.com-k8s-goldmane--666569f655--brmbk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c58c893f-2e4d-4df6-aa40-06b84b7b6bbc", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 26, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i1yja.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-brmbk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7c46b7acf86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:27:43.541553 containerd[1636]: 2026-01-14 06:27:43.502 [INFO][5092] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.200/32] ContainerID="0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" Namespace="calico-system" Pod="goldmane-666569f655-brmbk" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-goldmane--666569f655--brmbk-eth0" Jan 14 06:27:43.541553 containerd[1636]: 2026-01-14 06:27:43.502 [INFO][5092] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c46b7acf86 ContainerID="0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" Namespace="calico-system" Pod="goldmane-666569f655-brmbk" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-goldmane--666569f655--brmbk-eth0" Jan 14 06:27:43.541553 containerd[1636]: 2026-01-14 06:27:43.509 [INFO][5092] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" Namespace="calico-system" Pod="goldmane-666569f655-brmbk" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-goldmane--666569f655--brmbk-eth0" Jan 14 06:27:43.541553 containerd[1636]: 2026-01-14 06:27:43.511 [INFO][5092] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" 
Namespace="calico-system" Pod="goldmane-666569f655-brmbk" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-goldmane--666569f655--brmbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--i1yja.gb1.brightbox.com-k8s-goldmane--666569f655--brmbk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"c58c893f-2e4d-4df6-aa40-06b84b7b6bbc", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 6, 26, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-i1yja.gb1.brightbox.com", ContainerID:"0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052", Pod:"goldmane-666569f655-brmbk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7c46b7acf86", MAC:"3a:a4:0c:ab:67:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 06:27:43.541553 containerd[1636]: 2026-01-14 06:27:43.534 [INFO][5092] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" Namespace="calico-system" Pod="goldmane-666569f655-brmbk" WorkloadEndpoint="srv--i1yja.gb1.brightbox.com-k8s-goldmane--666569f655--brmbk-eth0" Jan 14 06:27:43.586000 audit[5124]: NETFILTER_CFG table=filter:141 family=2 entries=66 op=nft_register_chain pid=5124 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 06:27:43.586000 audit[5124]: SYSCALL arch=c000003e syscall=46 success=yes exit=32752 a0=3 a1=7ffeb29de1a0 a2=0 a3=7ffeb29de18c items=0 ppid=4675 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:43.586000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 06:27:43.604626 containerd[1636]: time="2026-01-14T06:27:43.604353822Z" level=info msg="connecting to shim 0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052" address="unix:///run/containerd/s/87aae76b9d7b39b5670e74768ece582179dfdccad9a07c6b7da971ccc123b253" namespace=k8s.io protocol=ttrpc version=3 Jan 14 06:27:43.657903 systemd[1]: Started cri-containerd-0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052.scope - libcontainer container 0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052. 
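The PROCTITLE fields in the surrounding audit records are the process command line hex-encoded, with NUL bytes separating the arguments (auditd encodes the value whenever it contains non-printable bytes). A minimal decoder, assuming Python 3; the sample is the iptables-restore proctitle that repeats in the records above:

```python
def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded, NUL-separated argv."""
    raw = bytes.fromhex(hex_value)
    return " ".join(part.decode("utf-8", errors="replace")
                    for part in raw.split(b"\x00") if part)

# proctitle value copied from the iptables-restore audit records above
sample = ("69707461626C65732D726573746F7265002D770035002D570031303030"
          "3030002D2D6E6F666C757368002D2D636F756E74657273")
print(decode_proctitle(sample))
# iptables-restore -w 5 -W 100000 --noflush --counters
```

The same helper decodes the runc proctitles (runc --root /run/containerd/runc/k8s.io --log ...) and the sshd one further down ("sshd-session: core [priv]").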
Jan 14 06:27:43.687734 kernel: kauditd_printk_skb: 58 callbacks suppressed Jan 14 06:27:43.688084 kernel: audit: type=1334 audit(1768372063.680:747): prog-id=255 op=LOAD Jan 14 06:27:43.680000 audit: BPF prog-id=255 op=LOAD Jan 14 06:27:43.688000 audit: BPF prog-id=256 op=LOAD Jan 14 06:27:43.688000 audit[5143]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5132 pid=5143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:43.693316 kernel: audit: type=1334 audit(1768372063.688:748): prog-id=256 op=LOAD Jan 14 06:27:43.693399 kernel: audit: type=1300 audit(1768372063.688:748): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8238 a2=98 a3=0 items=0 ppid=5132 pid=5143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:43.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393366323137616630356666353462376435323565386430326330 Jan 14 06:27:43.698857 kernel: audit: type=1327 audit(1768372063.688:748): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393366323137616630356666353462376435323565386430326330 Jan 14 06:27:43.693000 audit: BPF prog-id=256 op=UNLOAD Jan 14 06:27:43.702890 kernel: audit: type=1334 audit(1768372063.693:749): prog-id=256 op=UNLOAD Jan 14 06:27:43.693000 audit[5143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5132 pid=5143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:43.705420 kernel: audit: type=1300 audit(1768372063.693:749): arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5132 pid=5143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:43.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393366323137616630356666353462376435323565386430326330 Jan 14 06:27:43.710494 kernel: audit: type=1327 audit(1768372063.693:749): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393366323137616630356666353462376435323565386430326330 Jan 14 06:27:43.693000 audit: BPF prog-id=257 op=LOAD Jan 14 06:27:43.715342 kernel: audit: type=1334 audit(1768372063.693:750): prog-id=257 op=LOAD Jan 14 06:27:43.715423 kernel: audit: type=1300 audit(1768372063.693:750): arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5132 pid=5143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:43.693000 audit[5143]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a8488 a2=98 a3=0 items=0 ppid=5132 pid=5143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:43.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393366323137616630356666353462376435323565386430326330 Jan 14 06:27:43.722198 kernel: audit: type=1327 audit(1768372063.693:750): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393366323137616630356666353462376435323565386430326330 Jan 14 06:27:43.693000 audit: BPF prog-id=258 op=LOAD Jan 14 06:27:43.693000 audit[5143]: SYSCALL arch=c000003e syscall=321 success=yes exit=22 a0=5 a1=c0001a8218 a2=98 a3=0 items=0 ppid=5132 pid=5143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:43.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393366323137616630356666353462376435323565386430326330 Jan 14 06:27:43.693000 audit: BPF prog-id=258 op=UNLOAD Jan 14 06:27:43.693000 audit[5143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5132 pid=5143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:43.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393366323137616630356666353462376435323565386430326330 Jan 14 06:27:43.693000 audit: BPF prog-id=257 op=UNLOAD Jan 14 06:27:43.693000 audit[5143]: SYSCALL arch=c000003e syscall=3 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5132 pid=5143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:43.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393366323137616630356666353462376435323565386430326330 Jan 14 06:27:43.693000 audit: BPF prog-id=259 op=LOAD Jan 14 06:27:43.693000 audit[5143]: SYSCALL arch=c000003e syscall=321 success=yes exit=20 a0=5 a1=c0001a86e8 a2=98 a3=0 items=0 ppid=5132 pid=5143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:43.693000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393366323137616630356666353462376435323565386430326330 Jan 14 06:27:43.770782 containerd[1636]: time="2026-01-14T06:27:43.770668397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-brmbk,Uid:c58c893f-2e4d-4df6-aa40-06b84b7b6bbc,Namespace:calico-system,Attempt:0,} returns sandbox id \"0993f217af05ff54b7d525e8d02c00481ca671790a2cfd0f7d4d0f2bd41d5052\"" Jan 14 06:27:43.777922 containerd[1636]: time="2026-01-14T06:27:43.777842278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 06:27:44.085250 containerd[1636]: time="2026-01-14T06:27:44.085152466Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:27:44.087695 containerd[1636]: time="2026-01-14T06:27:44.087647566Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 06:27:44.088066 containerd[1636]: time="2026-01-14T06:27:44.087791878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 06:27:44.088214 kubelet[2961]: E0114 06:27:44.088086 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:27:44.088214 kubelet[2961]: E0114 06:27:44.088188 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:27:44.089861 kubelet[2961]: E0114 06:27:44.088523 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bt9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-brmbk_calico-system(c58c893f-2e4d-4df6-aa40-06b84b7b6bbc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 06:27:44.090329 kubelet[2961]: E0114 06:27:44.090265 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-brmbk" podUID="c58c893f-2e4d-4df6-aa40-06b84b7b6bbc" Jan 14 06:27:44.818889 kubelet[2961]: E0114 06:27:44.818769 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-brmbk" podUID="c58c893f-2e4d-4df6-aa40-06b84b7b6bbc" Jan 14 06:27:44.873000 audit[5175]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=5175 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:44.873000 audit[5175]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fffd17a1220 a2=0 a3=7fffd17a120c items=0 ppid=3077 pid=5175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:44.873000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:44.882000 audit[5175]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5175 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:27:44.882000 audit[5175]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7fffd17a1220 a2=0 a3=7fffd17a120c items=0 ppid=3077 pid=5175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:44.882000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:27:44.924716 systemd-networkd[1557]: cali7c46b7acf86: Gained IPv6LL Jan 14 06:27:46.289417 containerd[1636]: time="2026-01-14T06:27:46.289183377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:27:46.599718 containerd[1636]: time="2026-01-14T06:27:46.599415433Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:27:46.601584 containerd[1636]: time="2026-01-14T06:27:46.601523120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:27:46.601894 containerd[1636]: time="2026-01-14T06:27:46.601702664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:27:46.603584 kubelet[2961]: E0114 06:27:46.602177 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:27:46.603584 kubelet[2961]: E0114 06:27:46.602255 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:27:46.603584 kubelet[2961]: E0114 06:27:46.602492 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bkclb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545d979dcd-spmtb_calico-apiserver(e3bb8bbd-f33f-49cb-94d5-84718a161600): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:27:46.604354 kubelet[2961]: E0114 06:27:46.604270 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-spmtb" podUID="e3bb8bbd-f33f-49cb-94d5-84718a161600" Jan 14 06:27:47.285820 containerd[1636]: time="2026-01-14T06:27:47.285651678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 06:27:47.589837 containerd[1636]: time="2026-01-14T06:27:47.589508406Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:27:47.591429 containerd[1636]: time="2026-01-14T06:27:47.591375973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 06:27:47.591621 
containerd[1636]: time="2026-01-14T06:27:47.591553943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 06:27:47.592012 kubelet[2961]: E0114 06:27:47.591934 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:27:47.592133 kubelet[2961]: E0114 06:27:47.592037 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:27:47.592581 containerd[1636]: time="2026-01-14T06:27:47.592506288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 06:27:47.592767 kubelet[2961]: E0114 06:27:47.592500 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t62h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-549d4b77bd-jwpts_calico-system(e59a89b9-4020-44eb-8f82-b847f03cedae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 06:27:47.593897 kubelet[2961]: E0114 06:27:47.593835 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d4b77bd-jwpts" podUID="e59a89b9-4020-44eb-8f82-b847f03cedae" Jan 14 06:27:47.923237 containerd[1636]: time="2026-01-14T06:27:47.922973772Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:27:47.924830 containerd[1636]: time="2026-01-14T06:27:47.924741768Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 06:27:47.925243 containerd[1636]: time="2026-01-14T06:27:47.924874021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 06:27:47.925664 kubelet[2961]: E0114 06:27:47.925545 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:27:47.926200 kubelet[2961]: E0114 06:27:47.925714 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:27:47.926200 kubelet[2961]: E0114 06:27:47.926049 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmpb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7g86f_calico-system(a5a91150-6e37-4bc7-abb4-c895c0d189ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 06:27:47.930548 containerd[1636]: time="2026-01-14T06:27:47.930493082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 06:27:48.293272 containerd[1636]: time="2026-01-14T06:27:48.293207717Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:27:48.294582 containerd[1636]: time="2026-01-14T06:27:48.294464006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 06:27:48.294582 containerd[1636]: time="2026-01-14T06:27:48.294538548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 06:27:48.294925 kubelet[2961]: E0114 06:27:48.294885 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:27:48.295050 kubelet[2961]: E0114 06:27:48.294947 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:27:48.295523 kubelet[2961]: E0114 06:27:48.295220 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmpb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7g86f_calico-system(a5a91150-6e37-4bc7-abb4-c895c0d189ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 06:27:48.297866 kubelet[2961]: E0114 06:27:48.297741 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:27:48.311838 containerd[1636]: time="2026-01-14T06:27:48.311451373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 06:27:48.656714 containerd[1636]: time="2026-01-14T06:27:48.656498187Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
06:27:48.658508 containerd[1636]: time="2026-01-14T06:27:48.658331085Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 06:27:48.658831 containerd[1636]: time="2026-01-14T06:27:48.658611232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 06:27:48.659321 kubelet[2961]: E0114 06:27:48.659146 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:27:48.659321 kubelet[2961]: E0114 06:27:48.659262 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:27:48.659652 kubelet[2961]: E0114 06:27:48.659565 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d862720089aa46bd9f413e550c532138,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zl67l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78797b75b4-rh22t_calico-system(dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 06:27:48.663072 containerd[1636]: time="2026-01-14T06:27:48.663037721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 06:27:48.988933 containerd[1636]: time="2026-01-14T06:27:48.988774208Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:27:48.990446 containerd[1636]: time="2026-01-14T06:27:48.990402617Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 06:27:48.990544 containerd[1636]: time="2026-01-14T06:27:48.990516077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 06:27:48.990746 kubelet[2961]: E0114 06:27:48.990695 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:27:48.991765 kubelet[2961]: E0114 06:27:48.990762 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:27:48.991765 kubelet[2961]: E0114 06:27:48.990916 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zl67l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78797b75b4-rh22t_calico-system(dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 06:27:48.994749 kubelet[2961]: E0114 06:27:48.992513 2961 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78797b75b4-rh22t" podUID="dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd" Jan 14 06:27:49.146168 kernel: kauditd_printk_skb: 18 callbacks suppressed Jan 14 06:27:49.146663 kernel: audit: type=1130 audit(1768372069.133:757): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.48.98:22-20.161.92.111:45142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:27:49.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.48.98:22-20.161.92.111:45142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:27:49.133790 systemd[1]: Started sshd@11-10.230.48.98:22-20.161.92.111:45142.service - OpenSSH per-connection server daemon (20.161.92.111:45142). Jan 14 06:27:49.704000 audit[5182]: USER_ACCT pid=5182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:49.710402 sshd[5182]: Accepted publickey for core from 20.161.92.111 port 45142 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:27:49.715601 kernel: audit: type=1101 audit(1768372069.704:758): pid=5182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:49.715681 kernel: audit: type=1103 audit(1768372069.712:759): pid=5182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:49.712000 audit[5182]: CRED_ACQ pid=5182 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:49.716856 sshd-session[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:27:49.722217 kernel: audit: type=1006 audit(1768372069.713:760): pid=5182 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 06:27:49.713000 audit[5182]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9ce222a0 a2=3 a3=0 items=0 ppid=1 pid=5182 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) 
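The kauditd echo lines around this point carry the event time as epoch seconds plus a serial, e.g. audit(1768372069.713:760). A quick conversion back to the journal's wall-clock form (UTC assumed; the value is taken from the nearby records):

```python
from datetime import datetime, timezone

# Epoch-seconds:serial pair from the audit(...) prefix in the kauditd lines nearby.
event = "1768372069.713:760"
seconds, serial = event.split(":")
ts = datetime.fromtimestamp(float(seconds), tz=timezone.utc)
print(ts.isoformat(), "serial", serial)
# 2026-01-14T06:27:49.713000+00:00 serial 760 -- matches the journal timestamp
```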
Jan 14 06:27:49.731734 kernel: audit: type=1300 audit(1768372069.713:760): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc9ce222a0 a2=3 a3=0 items=0 ppid=1 pid=5182 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:49.732032 kernel: audit: type=1327 audit(1768372069.713:760): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:27:49.713000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:27:49.742349 systemd-logind[1607]: New session 11 of user core. Jan 14 06:27:49.754157 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 14 06:27:49.763000 audit[5182]: USER_START pid=5182 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:49.770650 kernel: audit: type=1105 audit(1768372069.763:761): pid=5182 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:49.772000 audit[5186]: CRED_ACQ pid=5186 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:49.778648 kernel: audit: type=1103 audit(1768372069.772:762): pid=5186 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:50.290631 containerd[1636]: time="2026-01-14T06:27:50.290068870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:27:50.656137 sshd[5186]: Connection closed by 20.161.92.111 port 45142 Jan 14 06:27:50.656001 sshd-session[5182]: pam_unix(sshd:session): session closed for user core Jan 14 06:27:50.667000 audit[5182]: USER_END pid=5182 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:50.667000 audit[5182]: CRED_DISP pid=5182 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:50.681773 kernel: audit: type=1106 audit(1768372070.667:763): pid=5182 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:50.681875 kernel: 
audit: type=1104 audit(1768372070.667:764): pid=5182 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:50.684614 systemd[1]: sshd@11-10.230.48.98:22-20.161.92.111:45142.service: Deactivated successfully. Jan 14 06:27:50.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.48.98:22-20.161.92.111:45142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:27:50.689891 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 06:27:50.692219 systemd-logind[1607]: Session 11 logged out. Waiting for processes to exit. Jan 14 06:27:50.693601 containerd[1636]: time="2026-01-14T06:27:50.693207319Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:27:50.694737 containerd[1636]: time="2026-01-14T06:27:50.694694617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:27:50.694843 containerd[1636]: time="2026-01-14T06:27:50.694789049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:27:50.695408 kubelet[2961]: E0114 06:27:50.695277 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:27:50.696280 kubelet[2961]: E0114 06:27:50.696068 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:27:50.697247 kubelet[2961]: E0114 06:27:50.697136 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lv9nq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545d979dcd-jdf9d_calico-apiserver(b2f6e747-eff5-4e8e-b242-bf44361cfc2b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:27:50.697917 systemd-logind[1607]: Removed session 11. Jan 14 06:27:50.698493 kubelet[2961]: E0114 06:27:50.698447 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-jdf9d" podUID="b2f6e747-eff5-4e8e-b242-bf44361cfc2b" Jan 14 06:27:53.106042 systemd[1]: Started sshd@12-10.230.48.98:22-64.225.73.213:59238.service - OpenSSH per-connection server daemon (64.225.73.213:59238). Jan 14 06:27:53.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.48.98:22-64.225.73.213:59238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:27:53.223860 sshd[5200]: Invalid user apache from 64.225.73.213 port 59238 Jan 14 06:27:53.240412 sshd[5200]: Connection closed by invalid user apache 64.225.73.213 port 59238 [preauth] Jan 14 06:27:53.240000 audit[5200]: USER_ERR pid=5200 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:27:53.243767 systemd[1]: sshd@12-10.230.48.98:22-64.225.73.213:59238.service: Deactivated successfully. Jan 14 06:27:53.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.48.98:22-64.225.73.213:59238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:27:55.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.48.98:22-20.161.92.111:55396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:27:55.782820 systemd[1]: Started sshd@13-10.230.48.98:22-20.161.92.111:55396.service - OpenSSH per-connection server daemon (20.161.92.111:55396). 
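
The containerd entries above show the pull of ghcr.io/flatcar/calico/apiserver:v3.30.4 ending in "fetch failed after status: 404 Not Found" from ghcr.io before kubelet surfaces ErrImagePull. The missing tag can be confirmed outside the kubelet retry loop with a plain registry request; the Go sketch below assumes ghcr.io follows the standard OCI distribution token flow for anonymous pulls of public repositories (the token endpoint and response shape are assumptions, not taken from this log).

// check_tag.go - minimal sketch: ask the registry directly for the manifest
// that containerd could not resolve. Assumes ghcr.io issues anonymous pull
// tokens at /token for public repositories (an assumption, not from the log).
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo := "flatcar/calico/apiserver" // repository named in the log
	tag := "v3.30.4"                   // tag containerd failed to resolve

	// Step 1: anonymous pull token for the repository.
	tr, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	defer tr.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(tr.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// Step 2: HEAD the manifest; a tag that does not exist should come back
	// as 404 Not Found, matching the "fetch failed after status: 404" entry above.
	req, err := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println(resp.Status)
}
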
Jan 14 06:27:55.788607 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 14 06:27:55.788681 kernel: audit: type=1130 audit(1768372075.782:769): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.48.98:22-20.161.92.111:55396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:27:56.286539 containerd[1636]: time="2026-01-14T06:27:56.286176425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 06:27:56.345775 sshd[5215]: Accepted publickey for core from 20.161.92.111 port 55396 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:27:56.344000 audit[5215]: USER_ACCT pid=5215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:56.360028 kernel: audit: type=1101 audit(1768372076.344:770): pid=5215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:56.360175 kernel: audit: type=1103 audit(1768372076.353:771): pid=5215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:56.353000 audit[5215]: CRED_ACQ pid=5215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:56.357589 sshd-session[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:27:56.370627 kernel: audit: type=1006 audit(1768372076.353:772): pid=5215 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 14 06:27:56.353000 audit[5215]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe794d8190 a2=3 a3=0 items=0 ppid=1 pid=5215 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:56.378738 kernel: audit: type=1300 audit(1768372076.353:772): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe794d8190 a2=3 a3=0 items=0 ppid=1 pid=5215 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:27:56.383748 systemd-logind[1607]: New session 12 of user core. Jan 14 06:27:56.391992 kernel: audit: type=1327 audit(1768372076.353:772): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:27:56.353000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:27:56.392742 systemd[1]: Started session-12.scope - Session 12 of User core. 
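
The audit PROCTITLE records above carry the process title hex-encoded (proctitle=737368642D...). Decoding those bytes recovers the ordinary command line; a small Go sketch of that decoding, using the value copied from the record above:

// decode_proctitle.go - decode the hex-encoded proctitle field from the
// audit records above into its plain-text command line.
package main

import (
	"encoding/hex"
	"fmt"
)

func main() {
	// Value copied verbatim from the PROCTITLE record in this log.
	const proctitle = "737368642D73657373696F6E3A20636F7265205B707269765D"
	b, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", b) // prints: sshd-session: core [priv]
}
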
Jan 14 06:27:56.405000 audit[5215]: USER_START pid=5215 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:56.412713 kernel: audit: type=1105 audit(1768372076.405:773): pid=5215 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:56.412000 audit[5219]: CRED_ACQ pid=5219 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:56.424591 kernel: audit: type=1103 audit(1768372076.412:774): pid=5219 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:56.620262 containerd[1636]: time="2026-01-14T06:27:56.619871794Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:27:56.622470 containerd[1636]: time="2026-01-14T06:27:56.622390658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 06:27:56.622611 containerd[1636]: time="2026-01-14T06:27:56.622439921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 06:27:56.623578 kubelet[2961]: E0114 06:27:56.623483 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:27:56.624218 kubelet[2961]: E0114 06:27:56.624134 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:27:56.627242 kubelet[2961]: E0114 06:27:56.627101 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bt9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-brmbk_calico-system(c58c893f-2e4d-4df6-aa40-06b84b7b6bbc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 06:27:56.628440 kubelet[2961]: E0114 06:27:56.628378 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-brmbk" podUID="c58c893f-2e4d-4df6-aa40-06b84b7b6bbc" Jan 14 06:27:56.920794 sshd[5219]: Connection closed by 20.161.92.111 port 55396 Jan 14 06:27:56.922101 sshd-session[5215]: pam_unix(sshd:session): 
session closed for user core Jan 14 06:27:56.927000 audit[5215]: USER_END pid=5215 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:56.939610 kernel: audit: type=1106 audit(1768372076.927:775): pid=5215 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:56.941497 systemd[1]: sshd@13-10.230.48.98:22-20.161.92.111:55396.service: Deactivated successfully. Jan 14 06:27:56.927000 audit[5215]: CRED_DISP pid=5215 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:56.953608 kernel: audit: type=1104 audit(1768372076.927:776): pid=5215 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:27:56.953731 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 06:27:56.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.48.98:22-20.161.92.111:55396 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:27:56.960109 systemd-logind[1607]: Session 12 logged out. Waiting for processes to exit. Jan 14 06:27:56.963205 systemd-logind[1607]: Removed session 12. Jan 14 06:27:59.285939 kubelet[2961]: E0114 06:27:59.285840 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-spmtb" podUID="e3bb8bbd-f33f-49cb-94d5-84718a161600" Jan 14 06:28:02.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.48.98:22-20.161.92.111:55400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:02.015302 systemd[1]: Started sshd@14-10.230.48.98:22-20.161.92.111:55400.service - OpenSSH per-connection server daemon (20.161.92.111:55400). Jan 14 06:28:02.018458 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:28:02.018521 kernel: audit: type=1130 audit(1768372082.013:778): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.48.98:22-20.161.92.111:55400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:28:02.288237 kubelet[2961]: E0114 06:28:02.287987 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d4b77bd-jwpts" podUID="e59a89b9-4020-44eb-8f82-b847f03cedae" Jan 14 06:28:02.564000 audit[5258]: USER_ACCT pid=5258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:02.568268 sshd-session[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:28:02.569575 sshd[5258]: Accepted publickey for core from 20.161.92.111 port 55400 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:28:02.571720 kernel: audit: type=1101 audit(1768372082.564:779): pid=5258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:02.564000 audit[5258]: CRED_ACQ pid=5258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:02.578601 kernel: audit: type=1103 audit(1768372082.564:780): pid=5258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:02.564000 audit[5258]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2432d350 a2=3 a3=0 items=0 ppid=1 pid=5258 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:02.586589 kernel: audit: type=1006 audit(1768372082.564:781): pid=5258 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 06:28:02.587220 kernel: audit: type=1300 audit(1768372082.564:781): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff2432d350 a2=3 a3=0 items=0 ppid=1 pid=5258 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:02.587801 systemd-logind[1607]: New session 13 of user core. Jan 14 06:28:02.564000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:02.591690 kernel: audit: type=1327 audit(1768372082.564:781): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:02.605993 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 14 06:28:02.611000 audit[5258]: USER_START pid=5258 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:02.616000 audit[5262]: CRED_ACQ pid=5262 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:02.620298 kernel: audit: type=1105 audit(1768372082.611:782): pid=5258 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:02.620410 kernel: audit: type=1103 audit(1768372082.616:783): pid=5262 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:02.967384 sshd[5262]: Connection closed by 20.161.92.111 port 55400 Jan 14 06:28:02.966281 sshd-session[5258]: pam_unix(sshd:session): session closed for user core Jan 14 06:28:02.968000 audit[5258]: USER_END pid=5258 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:02.974767 systemd[1]: sshd@14-10.230.48.98:22-20.161.92.111:55400.service: Deactivated successfully. Jan 14 06:28:02.968000 audit[5258]: CRED_DISP pid=5258 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:02.978627 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 06:28:02.979004 kernel: audit: type=1106 audit(1768372082.968:784): pid=5258 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:02.979169 kernel: audit: type=1104 audit(1768372082.968:785): pid=5258 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:02.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.48.98:22-20.161.92.111:55400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:02.983713 systemd-logind[1607]: Session 13 logged out. Waiting for processes to exit. Jan 14 06:28:02.986188 systemd-logind[1607]: Removed session 13. 
Jan 14 06:28:03.286210 kubelet[2961]: E0114 06:28:03.286148 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:28:04.286776 kubelet[2961]: E0114 06:28:04.286426 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-jdf9d" podUID="b2f6e747-eff5-4e8e-b242-bf44361cfc2b" Jan 14 06:28:04.286776 kubelet[2961]: E0114 06:28:04.286630 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78797b75b4-rh22t" podUID="dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd" Jan 14 06:28:08.073146 systemd[1]: Started sshd@15-10.230.48.98:22-20.161.92.111:48060.service - OpenSSH per-connection server daemon (20.161.92.111:48060). Jan 14 06:28:08.080296 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:28:08.080386 kernel: audit: type=1130 audit(1768372088.071:787): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.48.98:22-20.161.92.111:48060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:08.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.48.98:22-20.161.92.111:48060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:28:08.595000 audit[5280]: USER_ACCT pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:08.597548 sshd[5280]: Accepted publickey for core from 20.161.92.111 port 48060 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:28:08.601171 sshd-session[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:28:08.597000 audit[5280]: CRED_ACQ pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:08.604002 kernel: audit: type=1101 audit(1768372088.595:788): pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:08.604110 kernel: audit: type=1103 audit(1768372088.597:789): pid=5280 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:08.608145 kernel: audit: type=1006 audit(1768372088.597:790): pid=5280 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 14 06:28:08.597000 audit[5280]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd31d72c30 a2=3 a3=0 items=0 ppid=1 pid=5280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:08.597000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:08.617319 kernel: audit: type=1300 audit(1768372088.597:790): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd31d72c30 a2=3 a3=0 items=0 ppid=1 pid=5280 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:08.617628 kernel: audit: type=1327 audit(1768372088.597:790): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:08.622588 systemd-logind[1607]: New session 14 of user core. Jan 14 06:28:08.632848 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 14 06:28:08.637000 audit[5280]: USER_START pid=5280 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:08.640000 audit[5284]: CRED_ACQ pid=5284 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:08.646386 kernel: audit: type=1105 audit(1768372088.637:791): pid=5280 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:08.646644 kernel: audit: type=1103 audit(1768372088.640:792): pid=5284 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:08.981545 sshd[5284]: Connection closed by 20.161.92.111 port 48060 Jan 14 06:28:08.983351 sshd-session[5280]: pam_unix(sshd:session): session closed for user core Jan 14 06:28:08.985000 audit[5280]: USER_END pid=5280 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:08.995673 kernel: audit: type=1106 audit(1768372088.985:793): pid=5280 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:08.996780 systemd[1]: sshd@15-10.230.48.98:22-20.161.92.111:48060.service: Deactivated successfully. Jan 14 06:28:09.003138 kernel: audit: type=1104 audit(1768372088.985:794): pid=5280 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:08.985000 audit[5280]: CRED_DISP pid=5280 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:08.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.48.98:22-20.161.92.111:48060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:09.002015 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 06:28:09.005273 systemd-logind[1607]: Session 14 logged out. Waiting for processes to exit. Jan 14 06:28:09.006784 systemd-logind[1607]: Removed session 14. 
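
After the initial ErrImagePull failures, the kubelet entries above switch to ImagePullBackOff ("Back-off pulling image ...") as retries are spaced out rather than attempted immediately. The sketch below only illustrates that doubling back-off, assuming kubelet's commonly cited defaults of a 10-second initial delay capped at 5 minutes; the exact constants are internal to kubelet and may differ by release, and are not read from this log.

// backoff_sketch.go - illustrative only: the doubling retry delay behind the
// ImagePullBackOff messages above. The 10s start and 5m cap are assumed
// kubelet defaults, not values taken from this log.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second
	maxDelay := 5 * time.Minute
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d fails -> next pull retried after %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
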
Jan 14 06:28:10.286685 kubelet[2961]: E0114 06:28:10.285615 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-brmbk" podUID="c58c893f-2e4d-4df6-aa40-06b84b7b6bbc" Jan 14 06:28:12.285983 containerd[1636]: time="2026-01-14T06:28:12.285918435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:28:12.592592 containerd[1636]: time="2026-01-14T06:28:12.592387256Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:28:12.593484 containerd[1636]: time="2026-01-14T06:28:12.593425500Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:28:12.593547 containerd[1636]: time="2026-01-14T06:28:12.593528436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:28:12.594016 kubelet[2961]: E0114 06:28:12.593859 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:28:12.594455 kubelet[2961]: E0114 06:28:12.594083 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:28:12.594525 kubelet[2961]: E0114 06:28:12.594407 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bkclb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545d979dcd-spmtb_calico-apiserver(e3bb8bbd-f33f-49cb-94d5-84718a161600): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:28:12.596171 kubelet[2961]: E0114 06:28:12.596113 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-spmtb" podUID="e3bb8bbd-f33f-49cb-94d5-84718a161600" Jan 14 06:28:13.287937 containerd[1636]: time="2026-01-14T06:28:13.287856596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 06:28:13.617512 containerd[1636]: time="2026-01-14T06:28:13.617342135Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:28:13.618585 containerd[1636]: time="2026-01-14T06:28:13.618515367Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 06:28:13.618716 containerd[1636]: time="2026-01-14T06:28:13.618670854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 06:28:13.618940 kubelet[2961]: E0114 06:28:13.618878 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:28:13.619403 kubelet[2961]: E0114 06:28:13.618958 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:28:13.619403 kubelet[2961]: E0114 06:28:13.619154 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t62h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-549d4b77bd-jwpts_calico-system(e59a89b9-4020-44eb-8f82-b847f03cedae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 06:28:13.620582 kubelet[2961]: E0114 06:28:13.620499 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d4b77bd-jwpts" podUID="e59a89b9-4020-44eb-8f82-b847f03cedae" Jan 14 06:28:14.097986 systemd[1]: Started sshd@16-10.230.48.98:22-20.161.92.111:49990.service - OpenSSH per-connection server daemon (20.161.92.111:49990). 
Jan 14 06:28:14.109061 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:28:14.109153 kernel: audit: type=1130 audit(1768372094.096:796): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.48.98:22-20.161.92.111:49990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:14.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.48.98:22-20.161.92.111:49990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:14.627000 audit[5303]: USER_ACCT pid=5303 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:14.630964 sshd[5303]: Accepted publickey for core from 20.161.92.111 port 49990 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:28:14.632000 audit[5303]: CRED_ACQ pid=5303 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:14.635414 sshd-session[5303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:28:14.636601 kernel: audit: type=1101 audit(1768372094.627:797): pid=5303 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:14.636686 kernel: audit: type=1103 audit(1768372094.632:798): pid=5303 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:14.641518 kernel: audit: type=1006 audit(1768372094.632:799): pid=5303 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 14 06:28:14.632000 audit[5303]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4560f2f0 a2=3 a3=0 items=0 ppid=1 pid=5303 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:14.632000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:14.649819 kernel: audit: type=1300 audit(1768372094.632:799): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd4560f2f0 a2=3 a3=0 items=0 ppid=1 pid=5303 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:14.649896 kernel: audit: type=1327 audit(1768372094.632:799): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:14.651420 systemd-logind[1607]: New session 15 of user core. Jan 14 06:28:14.662798 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 14 06:28:14.667000 audit[5303]: USER_START pid=5303 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:14.670000 audit[5308]: CRED_ACQ pid=5308 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:14.676815 kernel: audit: type=1105 audit(1768372094.667:800): pid=5303 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:14.676892 kernel: audit: type=1103 audit(1768372094.670:801): pid=5308 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:15.007915 sshd[5308]: Connection closed by 20.161.92.111 port 49990 Jan 14 06:28:15.009985 sshd-session[5303]: pam_unix(sshd:session): session closed for user core Jan 14 06:28:15.010000 audit[5303]: USER_END pid=5303 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:15.023992 kernel: audit: type=1106 audit(1768372095.010:802): pid=5303 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:15.015000 audit[5303]: CRED_DISP pid=5303 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:15.024498 systemd[1]: sshd@16-10.230.48.98:22-20.161.92.111:49990.service: Deactivated successfully. Jan 14 06:28:15.029680 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 06:28:15.030734 kernel: audit: type=1104 audit(1768372095.015:803): pid=5303 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:15.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.48.98:22-20.161.92.111:49990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:15.033072 systemd-logind[1607]: Session 15 logged out. Waiting for processes to exit. Jan 14 06:28:15.034790 systemd-logind[1607]: Removed session 15. 
Jan 14 06:28:15.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.48.98:22-20.161.92.111:49998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:15.109744 systemd[1]: Started sshd@17-10.230.48.98:22-20.161.92.111:49998.service - OpenSSH per-connection server daemon (20.161.92.111:49998). Jan 14 06:28:15.620000 audit[5320]: USER_ACCT pid=5320 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:15.623137 sshd[5320]: Accepted publickey for core from 20.161.92.111 port 49998 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:28:15.622000 audit[5320]: CRED_ACQ pid=5320 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:15.623000 audit[5320]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffe87021190 a2=3 a3=0 items=0 ppid=1 pid=5320 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:15.623000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:15.626118 sshd-session[5320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:28:15.635681 systemd-logind[1607]: New session 16 of user core. Jan 14 06:28:15.641920 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 14 06:28:15.646000 audit[5320]: USER_START pid=5320 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:15.649000 audit[5324]: CRED_ACQ pid=5324 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:16.113521 sshd[5324]: Connection closed by 20.161.92.111 port 49998 Jan 14 06:28:16.114150 sshd-session[5320]: pam_unix(sshd:session): session closed for user core Jan 14 06:28:16.118000 audit[5320]: USER_END pid=5320 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:16.118000 audit[5320]: CRED_DISP pid=5320 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:16.123084 systemd[1]: sshd@17-10.230.48.98:22-20.161.92.111:49998.service: Deactivated successfully. 
Jan 14 06:28:16.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.48.98:22-20.161.92.111:49998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:16.126239 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 06:28:16.130345 systemd-logind[1607]: Session 16 logged out. Waiting for processes to exit. Jan 14 06:28:16.132045 systemd-logind[1607]: Removed session 16. Jan 14 06:28:16.218290 systemd[1]: Started sshd@18-10.230.48.98:22-20.161.92.111:50004.service - OpenSSH per-connection server daemon (20.161.92.111:50004). Jan 14 06:28:16.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.48.98:22-20.161.92.111:50004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:16.289442 containerd[1636]: time="2026-01-14T06:28:16.288693392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 06:28:16.615950 containerd[1636]: time="2026-01-14T06:28:16.615883559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:28:16.617975 containerd[1636]: time="2026-01-14T06:28:16.617892450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 06:28:16.617975 containerd[1636]: time="2026-01-14T06:28:16.617926087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 06:28:16.618245 kubelet[2961]: E0114 06:28:16.618168 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:28:16.618877 kubelet[2961]: E0114 06:28:16.618268 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:28:16.619398 containerd[1636]: time="2026-01-14T06:28:16.619140929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 06:28:16.619497 kubelet[2961]: E0114 06:28:16.619243 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d862720089aa46bd9f413e550c532138,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zl67l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78797b75b4-rh22t_calico-system(dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 06:28:16.770000 audit[5334]: USER_ACCT pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:16.772446 sshd[5334]: Accepted publickey for core from 20.161.92.111 port 50004 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:28:16.771000 audit[5334]: CRED_ACQ pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:16.772000 audit[5334]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffd516a2490 a2=3 a3=0 items=0 ppid=1 pid=5334 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:16.772000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:16.775041 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:28:16.784834 systemd-logind[1607]: New session 17 of user core. Jan 14 06:28:16.790865 systemd[1]: Started session-17.scope - Session 17 of User core. 
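The 404 that containerd reports above means ghcr.io returns no manifest for the ghcr.io/flatcar/calico/whisker:v3.30.4 reference; the same pattern repeats below for the csi, whisker-backend, node-driver-registrar, apiserver and goldmane images. To confirm that independently of containerd, the tag can be probed through the registry's OCI distribution API. The sketch below is an assumption-laden illustration: it presumes anonymous pull tokens are issued for the repository and that the third-party requests package is installed.

import requests

# Probe ghcr.io for a manifest via the OCI distribution API: fetch an
# anonymous pull token, then HEAD the manifest endpoint. Sketch only.
def tag_exists(repo: str, tag: str) -> bool:
    token = requests.get(
        "https://ghcr.io/token",
        params={"scope": f"repository:{repo}:pull"},
        timeout=10,
    ).json()["token"]
    resp = requests.head(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json",
        },
        timeout=10,
    )
    return resp.status_code == 200

print(tag_exists("flatcar/calico/whisker", "v3.30.4"))  # expect False, matching the 404 above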
Jan 14 06:28:16.795000 audit[5334]: USER_START pid=5334 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:16.798000 audit[5341]: CRED_ACQ pid=5341 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:16.932634 containerd[1636]: time="2026-01-14T06:28:16.932215513Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:28:16.933960 containerd[1636]: time="2026-01-14T06:28:16.933833432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 06:28:16.933960 containerd[1636]: time="2026-01-14T06:28:16.933903400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 06:28:16.934278 kubelet[2961]: E0114 06:28:16.934171 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:28:16.934532 kubelet[2961]: E0114 06:28:16.934295 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:28:16.934881 containerd[1636]: time="2026-01-14T06:28:16.934847904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 06:28:16.935708 kubelet[2961]: E0114 06:28:16.935324 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmpb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7g86f_calico-system(a5a91150-6e37-4bc7-abb4-c895c0d189ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 06:28:17.173520 sshd[5341]: Connection closed by 20.161.92.111 port 50004 Jan 14 06:28:17.174984 sshd-session[5334]: pam_unix(sshd:session): session closed for user core Jan 14 06:28:17.177000 audit[5334]: USER_END pid=5334 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:17.177000 audit[5334]: CRED_DISP pid=5334 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:17.181849 systemd[1]: sshd@18-10.230.48.98:22-20.161.92.111:50004.service: Deactivated successfully. Jan 14 06:28:17.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.48.98:22-20.161.92.111:50004 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:17.186221 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 06:28:17.190686 systemd-logind[1607]: Session 17 logged out. Waiting for processes to exit. Jan 14 06:28:17.192282 systemd-logind[1607]: Removed session 17. 
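Since the pull failures all share the same shape, the distinct failing references can be lifted straight out of the containerd error lines. A small helper written against the exact msg format shown in this excerpt (level=error msg="PullImage \"...\" failed"):

import re

# Collect distinct image references from containerd lines of the form
#   level=error msg="PullImage \"<image>\" failed" ...
# as they appear in this journal excerpt.
PULL_FAILED = re.compile(r'msg="PullImage \\"(?P<image>[^"\\]+)\\" failed"')

def failing_images(journal_text: str):
    return sorted({m.group("image") for m in PULL_FAILED.finditer(journal_text)})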
Jan 14 06:28:17.263790 containerd[1636]: time="2026-01-14T06:28:17.263712981Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:28:17.265164 containerd[1636]: time="2026-01-14T06:28:17.265034554Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 06:28:17.265240 containerd[1636]: time="2026-01-14T06:28:17.265097485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 06:28:17.265444 kubelet[2961]: E0114 06:28:17.265392 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:28:17.265733 kubelet[2961]: E0114 06:28:17.265460 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:28:17.266133 containerd[1636]: time="2026-01-14T06:28:17.266093325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 06:28:17.266643 kubelet[2961]: E0114 06:28:17.266270 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zl67l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy
:nil,} start failed in pod whisker-78797b75b4-rh22t_calico-system(dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 06:28:17.268794 kubelet[2961]: E0114 06:28:17.268734 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78797b75b4-rh22t" podUID="dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd" Jan 14 06:28:17.581451 containerd[1636]: time="2026-01-14T06:28:17.581145284Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:28:17.582459 containerd[1636]: time="2026-01-14T06:28:17.582207755Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 06:28:17.582459 containerd[1636]: time="2026-01-14T06:28:17.582300647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 06:28:17.582920 kubelet[2961]: E0114 06:28:17.582819 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:28:17.582920 kubelet[2961]: E0114 06:28:17.582899 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:28:17.583138 kubelet[2961]: E0114 06:28:17.583073 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmpb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7g86f_calico-system(a5a91150-6e37-4bc7-abb4-c895c0d189ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 06:28:17.584555 kubelet[2961]: E0114 06:28:17.584497 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:28:19.285682 containerd[1636]: time="2026-01-14T06:28:19.285344006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:28:19.615429 containerd[1636]: time="2026-01-14T06:28:19.615061600Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:28:19.616627 containerd[1636]: time="2026-01-14T06:28:19.616591334Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:28:19.616909 containerd[1636]: time="2026-01-14T06:28:19.616674768Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:28:19.617594 kubelet[2961]: E0114 06:28:19.617169 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:28:19.617594 kubelet[2961]: E0114 06:28:19.617243 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:28:19.617594 kubelet[2961]: E0114 06:28:19.617470 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lv9nq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545d979dcd-jdf9d_calico-apiserver(b2f6e747-eff5-4e8e-b242-bf44361cfc2b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:28:19.618675 kubelet[2961]: E0114 06:28:19.618601 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-jdf9d" podUID="b2f6e747-eff5-4e8e-b242-bf44361cfc2b" Jan 14 06:28:22.286980 systemd[1]: Started sshd@19-10.230.48.98:22-20.161.92.111:50018.service - OpenSSH per-connection server daemon (20.161.92.111:50018). Jan 14 06:28:22.292901 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 06:28:22.293018 kernel: audit: type=1130 audit(1768372102.286:823): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.48.98:22-20.161.92.111:50018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:22.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.48.98:22-20.161.92.111:50018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:22.830000 audit[5354]: USER_ACCT pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:22.834965 sshd[5354]: Accepted publickey for core from 20.161.92.111 port 50018 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:28:22.838000 audit[5354]: CRED_ACQ pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:22.841954 kernel: audit: type=1101 audit(1768372102.830:824): pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:22.842444 kernel: audit: type=1103 audit(1768372102.838:825): pid=5354 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:22.842110 sshd-session[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:28:22.838000 audit[5354]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddd5c0860 a2=3 a3=0 items=0 ppid=1 pid=5354 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:22.851597 kernel: audit: type=1006 audit(1768372102.838:826): pid=5354 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 14 06:28:22.851701 kernel: audit: type=1300 audit(1768372102.838:826): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffddd5c0860 a2=3 a3=0 items=0 ppid=1 pid=5354 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:22.838000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:22.860610 kernel: audit: type=1327 audit(1768372102.838:826): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:22.864077 systemd-logind[1607]: New session 18 of user core. Jan 14 06:28:22.870812 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 14 06:28:22.876000 audit[5354]: USER_START pid=5354 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:22.883597 kernel: audit: type=1105 audit(1768372102.876:827): pid=5354 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:22.881000 audit[5358]: CRED_ACQ pid=5358 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:22.890614 kernel: audit: type=1103 audit(1768372102.881:828): pid=5358 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:23.287134 containerd[1636]: time="2026-01-14T06:28:23.287065636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 06:28:23.290187 sshd[5358]: Connection closed by 20.161.92.111 port 50018 Jan 14 06:28:23.290095 sshd-session[5354]: pam_unix(sshd:session): session closed for user core Jan 14 06:28:23.295000 audit[5354]: USER_END pid=5354 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:23.306582 kernel: audit: type=1106 audit(1768372103.295:829): pid=5354 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:23.306684 kernel: audit: type=1104 audit(1768372103.295:830): pid=5354 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:23.295000 audit[5354]: CRED_DISP pid=5354 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:23.313845 systemd-logind[1607]: Session 18 logged out. Waiting for processes to exit. 
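The kernel echoes every record with an audit(<epoch>.<msec>:<serial>) stamp; the first field is plain Unix time and lines up with the journal's wall clock. Converting 1768372102.876 from the type=1105 record above:

from datetime import datetime, timezone

# audit(1768372102.876:827) -> the epoch part converts back to the journal time.
print(datetime.fromtimestamp(1768372102.876, tz=timezone.utc).isoformat())
# 2026-01-14T06:28:22.876000+00:00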
Jan 14 06:28:23.315386 systemd[1]: sshd@19-10.230.48.98:22-20.161.92.111:50018.service: Deactivated successfully. Jan 14 06:28:23.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.48.98:22-20.161.92.111:50018 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:23.321353 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 06:28:23.328249 systemd-logind[1607]: Removed session 18. Jan 14 06:28:23.620335 containerd[1636]: time="2026-01-14T06:28:23.620146934Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:28:23.621834 containerd[1636]: time="2026-01-14T06:28:23.621760272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 06:28:23.621967 containerd[1636]: time="2026-01-14T06:28:23.621921183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 06:28:23.622242 kubelet[2961]: E0114 06:28:23.622173 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:28:23.623053 kubelet[2961]: E0114 06:28:23.622252 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:28:23.623422 kubelet[2961]: E0114 06:28:23.622520 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bt9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-brmbk_calico-system(c58c893f-2e4d-4df6-aa40-06b84b7b6bbc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 06:28:23.624951 kubelet[2961]: E0114 06:28:23.624874 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-brmbk" podUID="c58c893f-2e4d-4df6-aa40-06b84b7b6bbc" Jan 14 06:28:24.292367 kubelet[2961]: E0114 06:28:24.292183 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d4b77bd-jwpts" podUID="e59a89b9-4020-44eb-8f82-b847f03cedae" Jan 14 06:28:27.286151 kubelet[2961]: E0114 06:28:27.285727 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-spmtb" podUID="e3bb8bbd-f33f-49cb-94d5-84718a161600" Jan 14 06:28:28.286712 kubelet[2961]: E0114 06:28:28.286624 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78797b75b4-rh22t" podUID="dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd" Jan 14 06:28:28.392554 systemd[1]: Started sshd@20-10.230.48.98:22-20.161.92.111:36208.service - OpenSSH per-connection server daemon (20.161.92.111:36208). Jan 14 06:28:28.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.48.98:22-20.161.92.111:36208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:28.394680 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:28:28.394732 kernel: audit: type=1130 audit(1768372108.391:832): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.48.98:22-20.161.92.111:36208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:28:28.912000 audit[5372]: USER_ACCT pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:28.919753 kernel: audit: type=1101 audit(1768372108.912:833): pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:28.916957 sshd-session[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:28:28.920265 sshd[5372]: Accepted publickey for core from 20.161.92.111 port 36208 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:28:28.914000 audit[5372]: CRED_ACQ pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:28.926588 kernel: audit: type=1103 audit(1768372108.914:834): pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:28.933624 kernel: audit: type=1006 audit(1768372108.914:835): pid=5372 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 14 06:28:28.936957 systemd-logind[1607]: New session 19 of user core. Jan 14 06:28:28.914000 audit[5372]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0fd44d20 a2=3 a3=0 items=0 ppid=1 pid=5372 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:28.914000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:28.943964 kernel: audit: type=1300 audit(1768372108.914:835): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0fd44d20 a2=3 a3=0 items=0 ppid=1 pid=5372 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:28.944040 kernel: audit: type=1327 audit(1768372108.914:835): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:28.946854 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 14 06:28:28.951000 audit[5372]: USER_START pid=5372 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:28.956000 audit[5376]: CRED_ACQ pid=5376 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:28.962059 kernel: audit: type=1105 audit(1768372108.951:836): pid=5372 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:28.962117 kernel: audit: type=1103 audit(1768372108.956:837): pid=5376 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:29.291962 sshd[5376]: Connection closed by 20.161.92.111 port 36208 Jan 14 06:28:29.291818 sshd-session[5372]: pam_unix(sshd:session): session closed for user core Jan 14 06:28:29.295000 audit[5372]: USER_END pid=5372 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:29.303606 kernel: audit: type=1106 audit(1768372109.295:838): pid=5372 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:29.305775 systemd[1]: sshd@20-10.230.48.98:22-20.161.92.111:36208.service: Deactivated successfully. Jan 14 06:28:29.295000 audit[5372]: CRED_DISP pid=5372 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:29.311297 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 06:28:29.311579 kernel: audit: type=1104 audit(1768372109.295:839): pid=5372 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:29.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.48.98:22-20.161.92.111:36208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:29.315695 systemd-logind[1607]: Session 19 logged out. Waiting for processes to exit. Jan 14 06:28:29.317338 systemd-logind[1607]: Removed session 19. 
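Each accepted connection runs under a transient unit named after its endpoints, e.g. sshd@20-10.230.48.98:22-20.161.92.111:36208.service above. When the endpoints are needed programmatically, the name can be split with a pattern like the one below (a sketch matching only the naming seen in this log, not a general systemd rule):

import re

# Split a per-connection unit name of the form seen in this log:
#   sshd@20-10.230.48.98:22-20.161.92.111:36208.service
UNIT = re.compile(
    r'sshd@(?P<n>\d+)-(?P<laddr>[\d.]+):(?P<lport>\d+)-(?P<raddr>[\d.]+):(?P<rport>\d+)\.service'
)

m = UNIT.search("sshd@20-10.230.48.98:22-20.161.92.111:36208.service")
print(m.group("raddr"), m.group("rport"))   # 20.161.92.111 36208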
Jan 14 06:28:29.509742 systemd[1]: Started sshd@21-10.230.48.98:22-64.225.73.213:37846.service - OpenSSH per-connection server daemon (64.225.73.213:37846). Jan 14 06:28:29.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.48.98:22-64.225.73.213:37846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:29.744472 sshd[5387]: Invalid user postgres from 64.225.73.213 port 37846 Jan 14 06:28:29.772694 sshd[5387]: Connection closed by invalid user postgres 64.225.73.213 port 37846 [preauth] Jan 14 06:28:29.771000 audit[5387]: USER_ERR pid=5387 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:28:29.775869 systemd[1]: sshd@21-10.230.48.98:22-64.225.73.213:37846.service: Deactivated successfully. Jan 14 06:28:29.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.48.98:22-64.225.73.213:37846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:33.286454 kubelet[2961]: E0114 06:28:33.285494 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:28:34.285016 kubelet[2961]: E0114 06:28:34.284578 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-jdf9d" podUID="b2f6e747-eff5-4e8e-b242-bf44361cfc2b" Jan 14 06:28:34.395027 systemd[1]: Started sshd@22-10.230.48.98:22-20.161.92.111:58592.service - OpenSSH per-connection server daemon (20.161.92.111:58592). Jan 14 06:28:34.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.48.98:22-20.161.92.111:58592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:28:34.399757 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 14 06:28:34.399884 kernel: audit: type=1130 audit(1768372114.395:844): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.48.98:22-20.161.92.111:58592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:34.911000 audit[5424]: USER_ACCT pid=5424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:34.920785 kernel: audit: type=1101 audit(1768372114.911:845): pid=5424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:34.920870 sshd[5424]: Accepted publickey for core from 20.161.92.111 port 58592 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:28:34.921080 sshd-session[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:28:34.916000 audit[5424]: CRED_ACQ pid=5424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:34.929613 kernel: audit: type=1103 audit(1768372114.916:846): pid=5424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:34.929704 kernel: audit: type=1006 audit(1768372114.916:847): pid=5424 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 14 06:28:34.916000 audit[5424]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb61c4860 a2=3 a3=0 items=0 ppid=1 pid=5424 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:34.916000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:34.939579 kernel: audit: type=1300 audit(1768372114.916:847): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffcb61c4860 a2=3 a3=0 items=0 ppid=1 pid=5424 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:34.939645 kernel: audit: type=1327 audit(1768372114.916:847): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:34.938145 systemd-logind[1607]: New session 20 of user core. Jan 14 06:28:34.955979 systemd[1]: Started session-20.scope - Session 20 of User core. 
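Not every connection in this log is an authorized "core" login; the sshd@21 record a little earlier shows a rejected probe (Invalid user postgres from 64.225.73.213, closed preauth). A quick per-source tally of such probes, matching that exact message format, might look like:

import re
from collections import Counter

# Tally 'Invalid user <name> from <ip> port <port>' preauth probes per source
# address, following the sshd message format seen earlier in this log.
INVALID = re.compile(r'sshd\[\d+\]: Invalid user \S+ from (?P<ip>[\d.]+) port \d+')

def probes_by_ip(lines):
    return Counter(m.group("ip") for line in lines if (m := INVALID.search(line)))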
Jan 14 06:28:34.962000 audit[5424]: USER_START pid=5424 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:34.969668 kernel: audit: type=1105 audit(1768372114.962:848): pid=5424 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:34.969000 audit[5428]: CRED_ACQ pid=5428 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:34.975664 kernel: audit: type=1103 audit(1768372114.969:849): pid=5428 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:35.286950 sshd[5428]: Connection closed by 20.161.92.111 port 58592 Jan 14 06:28:35.287428 kubelet[2961]: E0114 06:28:35.287053 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d4b77bd-jwpts" podUID="e59a89b9-4020-44eb-8f82-b847f03cedae" Jan 14 06:28:35.287428 kubelet[2961]: E0114 06:28:35.287002 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-brmbk" podUID="c58c893f-2e4d-4df6-aa40-06b84b7b6bbc" Jan 14 06:28:35.288340 sshd-session[5424]: pam_unix(sshd:session): session closed for user core Jan 14 06:28:35.292000 audit[5424]: USER_END pid=5424 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:35.297197 systemd[1]: sshd@22-10.230.48.98:22-20.161.92.111:58592.service: Deactivated successfully. 
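The kubelet pod_workers entries above and below carry the affected pod and podUID as structured fields, so the set of pods stuck in ImagePullBackOff can be read straight out of the journal text. A sketch against the field layout shown here:

import re

# Map pod="<namespace>/<name>" to podUID for kubelet
# "Error syncing pod, skipping" records, as formatted in this journal.
SYNC_ERR = re.compile(r'"Error syncing pod, skipping".*?pod="(?P<pod>[^"]+)" podUID="(?P<uid>[^"]+)"')

def backoff_pods(journal_text: str):
    return {m.group("pod"): m.group("uid") for m in SYNC_ERR.finditer(journal_text)}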
Jan 14 06:28:35.300602 kernel: audit: type=1106 audit(1768372115.292:850): pid=5424 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:35.301865 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 06:28:35.292000 audit[5424]: CRED_DISP pid=5424 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:35.312756 kernel: audit: type=1104 audit(1768372115.292:851): pid=5424 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:35.304442 systemd-logind[1607]: Session 20 logged out. Waiting for processes to exit. Jan 14 06:28:35.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.48.98:22-20.161.92.111:58592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:35.316044 systemd-logind[1607]: Removed session 20. Jan 14 06:28:39.285080 kubelet[2961]: E0114 06:28:39.284808 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-spmtb" podUID="e3bb8bbd-f33f-49cb-94d5-84718a161600" Jan 14 06:28:40.402518 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:28:40.402738 kernel: audit: type=1130 audit(1768372120.396:853): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.48.98:22-20.161.92.111:58608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:40.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.48.98:22-20.161.92.111:58608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:40.396398 systemd[1]: Started sshd@23-10.230.48.98:22-20.161.92.111:58608.service - OpenSSH per-connection server daemon (20.161.92.111:58608). 
Jan 14 06:28:40.910000 audit[5441]: USER_ACCT pid=5441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:40.916275 sshd[5441]: Accepted publickey for core from 20.161.92.111 port 58608 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:28:40.916801 kernel: audit: type=1101 audit(1768372120.910:854): pid=5441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:40.916000 audit[5441]: CRED_ACQ pid=5441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:40.921966 sshd-session[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:28:40.925594 kernel: audit: type=1103 audit(1768372120.916:855): pid=5441 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:40.916000 audit[5441]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb3201cf0 a2=3 a3=0 items=0 ppid=1 pid=5441 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:40.931193 kernel: audit: type=1006 audit(1768372120.916:856): pid=5441 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 14 06:28:40.931287 kernel: audit: type=1300 audit(1768372120.916:856): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffdb3201cf0 a2=3 a3=0 items=0 ppid=1 pid=5441 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:40.916000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:40.937591 kernel: audit: type=1327 audit(1768372120.916:856): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:40.944221 systemd-logind[1607]: New session 21 of user core. Jan 14 06:28:40.955900 systemd[1]: Started session-21.scope - Session 21 of User core. 
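Several of the records above end with a PROCTITLE line whose value is the raw process title as one hex string, with NUL bytes separating argv elements when more than one was captured; the matching SYSCALL record's arch=c000003e marks x86_64, where syscall=1 is write(2). Decoding the hex value logged here yields the privileged sshd-session monitor's title. A small sketch, using only the value shown above:

    def decode_proctitle(hex_value):
        # argv elements arrive NUL-separated; join them with spaces for display
        raw = bytes.fromhex(hex_value)
        return " ".join(part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part)

    print(decode_proctitle("737368642D73657373696F6E3A20636F7265205B707269765D"))
    # sshd-session: core [priv]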
Jan 14 06:28:40.960000 audit[5441]: USER_START pid=5441 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:40.965000 audit[5445]: CRED_ACQ pid=5445 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:40.969752 kernel: audit: type=1105 audit(1768372120.960:857): pid=5441 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:40.969854 kernel: audit: type=1103 audit(1768372120.965:858): pid=5445 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:41.283790 sshd[5445]: Connection closed by 20.161.92.111 port 58608 Jan 14 06:28:41.284787 sshd-session[5441]: pam_unix(sshd:session): session closed for user core Jan 14 06:28:41.285000 audit[5441]: USER_END pid=5441 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:41.293585 kernel: audit: type=1106 audit(1768372121.285:859): pid=5441 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:41.293644 systemd[1]: sshd@23-10.230.48.98:22-20.161.92.111:58608.service: Deactivated successfully. Jan 14 06:28:41.300491 kernel: audit: type=1104 audit(1768372121.285:860): pid=5441 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:41.285000 audit[5441]: CRED_DISP pid=5441 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:41.298110 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 06:28:41.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.48.98:22-20.161.92.111:58608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:41.303228 systemd-logind[1607]: Session 21 logged out. Waiting for processes to exit. Jan 14 06:28:41.304944 systemd-logind[1607]: Removed session 21. 
Jan 14 06:28:43.287488 kubelet[2961]: E0114 06:28:43.287279 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78797b75b4-rh22t" podUID="dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd" Jan 14 06:28:44.288430 kubelet[2961]: E0114 06:28:44.288359 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:28:46.400747 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:28:46.400920 kernel: audit: type=1130 audit(1768372126.394:862): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.48.98:22-20.161.92.111:50902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:46.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.48.98:22-20.161.92.111:50902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:46.396375 systemd[1]: Started sshd@24-10.230.48.98:22-20.161.92.111:50902.service - OpenSSH per-connection server daemon (20.161.92.111:50902). 
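Each kubelet "Error syncing pod, skipping" entry above packs one or more back-off reasons into a single escaped err string, which makes the repeated failures hard to scan. Purely as an illustration (the regular expressions target the journal text shown here, not any kubelet API), the failing image references and the pod can be pulled out like this:

    import re

    # Abbreviated sample in the shape of the whisker entry above; "..." elides the rest.
    sample = (r'kubelet[2961]: E0114 ... "Error syncing pod, skipping" '
              r'err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: '
              r'\"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\" ...]" '
              r'pod="calico-system/whisker-78797b75b4-rh22t" podUID="dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd"')

    images = sorted(set(re.findall(r'ghcr\.io/[\w./-]+:[\w.-]+', sample)))
    pod = re.search(r'pod="([^"]+)"', sample)
    print(pod.group(1), images)
    # calico-system/whisker-78797b75b4-rh22t ['ghcr.io/flatcar/calico/whisker:v3.30.4']

Run over the whole journal, the same extraction collapses these entries to a handful of missing ghcr.io/flatcar/calico images, all at tag v3.30.4.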
Jan 14 06:28:46.905000 audit[5457]: USER_ACCT pid=5457 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:46.909918 sshd[5457]: Accepted publickey for core from 20.161.92.111 port 50902 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:28:46.911778 sshd-session[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:28:46.908000 audit[5457]: CRED_ACQ pid=5457 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:46.914829 kernel: audit: type=1101 audit(1768372126.905:863): pid=5457 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:46.914913 kernel: audit: type=1103 audit(1768372126.908:864): pid=5457 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:46.919586 kernel: audit: type=1006 audit(1768372126.908:865): pid=5457 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 14 06:28:46.923810 kernel: audit: type=1300 audit(1768372126.908:865): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca8ce6760 a2=3 a3=0 items=0 ppid=1 pid=5457 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:46.908000 audit[5457]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffca8ce6760 a2=3 a3=0 items=0 ppid=1 pid=5457 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:46.922712 systemd-logind[1607]: New session 22 of user core. Jan 14 06:28:46.908000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:46.927804 kernel: audit: type=1327 audit(1768372126.908:865): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:46.930809 systemd[1]: Started session-22.scope - Session 22 of User core. 
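The "Accepted publickey" lines identify the core user's key only by its fingerprint, SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE. The key material itself never appears in the log, but OpenSSH's SHA256 fingerprint is simply the unpadded base64 of the SHA-256 digest of the base64-decoded key blob, so the same string can be reproduced from the matching authorized_keys entry on the host. A sketch (the input line is a placeholder, not data from this machine):

    import base64, hashlib

    def openssh_sha256_fingerprint(authorized_keys_line):
        # "ssh-rsa AAAAB3Nza... core@host" -> "SHA256:..." as printed by sshd above
        blob = base64.b64decode(authorized_keys_line.split()[1])
        return "SHA256:" + base64.b64encode(hashlib.sha256(blob).digest()).decode().rstrip("=")

Comparing the output against the fingerprint in the log pins down which authorized key these sessions are using.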
Jan 14 06:28:46.935000 audit[5457]: USER_START pid=5457 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:46.943603 kernel: audit: type=1105 audit(1768372126.935:866): pid=5457 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:46.942000 audit[5461]: CRED_ACQ pid=5461 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:46.948661 kernel: audit: type=1103 audit(1768372126.942:867): pid=5461 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:47.282860 sshd[5461]: Connection closed by 20.161.92.111 port 50902 Jan 14 06:28:47.283590 sshd-session[5457]: pam_unix(sshd:session): session closed for user core Jan 14 06:28:47.285694 kubelet[2961]: E0114 06:28:47.285478 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d4b77bd-jwpts" podUID="e59a89b9-4020-44eb-8f82-b847f03cedae" Jan 14 06:28:47.287000 audit[5457]: USER_END pid=5457 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:47.295597 kernel: audit: type=1106 audit(1768372127.287:868): pid=5457 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:47.295748 systemd[1]: sshd@24-10.230.48.98:22-20.161.92.111:50902.service: Deactivated successfully. Jan 14 06:28:47.289000 audit[5457]: CRED_DISP pid=5457 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:47.300352 systemd[1]: session-22.scope: Deactivated successfully. 
Jan 14 06:28:47.302870 kernel: audit: type=1104 audit(1768372127.289:869): pid=5457 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:47.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.48.98:22-20.161.92.111:50902 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:47.304085 systemd-logind[1607]: Session 22 logged out. Waiting for processes to exit. Jan 14 06:28:47.308834 systemd-logind[1607]: Removed session 22. Jan 14 06:28:48.286274 kubelet[2961]: E0114 06:28:48.286195 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-jdf9d" podUID="b2f6e747-eff5-4e8e-b242-bf44361cfc2b" Jan 14 06:28:49.284499 kubelet[2961]: E0114 06:28:49.284239 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-brmbk" podUID="c58c893f-2e4d-4df6-aa40-06b84b7b6bbc" Jan 14 06:28:52.285410 kubelet[2961]: E0114 06:28:52.285221 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-spmtb" podUID="e3bb8bbd-f33f-49cb-94d5-84718a161600" Jan 14 06:28:52.390439 systemd[1]: Started sshd@25-10.230.48.98:22-20.161.92.111:50910.service - OpenSSH per-connection server daemon (20.161.92.111:50910). Jan 14 06:28:52.398487 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:28:52.398748 kernel: audit: type=1130 audit(1768372132.389:871): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.48.98:22-20.161.92.111:50910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:52.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.48.98:22-20.161.92.111:50910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:28:52.894000 audit[5473]: USER_ACCT pid=5473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:52.896876 sshd[5473]: Accepted publickey for core from 20.161.92.111 port 50910 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:28:52.897000 audit[5473]: CRED_ACQ pid=5473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:52.901228 sshd-session[5473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:28:52.908267 kernel: audit: type=1101 audit(1768372132.894:872): pid=5473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:52.908381 kernel: audit: type=1103 audit(1768372132.897:873): pid=5473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:52.897000 audit[5473]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc61102960 a2=3 a3=0 items=0 ppid=1 pid=5473 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:52.917115 kernel: audit: type=1006 audit(1768372132.897:874): pid=5473 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 14 06:28:52.917196 kernel: audit: type=1300 audit(1768372132.897:874): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc61102960 a2=3 a3=0 items=0 ppid=1 pid=5473 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:52.897000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:52.920563 systemd-logind[1607]: New session 23 of user core. Jan 14 06:28:52.922680 kernel: audit: type=1327 audit(1768372132.897:874): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:52.941277 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 14 06:28:52.947000 audit[5473]: USER_START pid=5473 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:52.955618 kernel: audit: type=1105 audit(1768372132.947:875): pid=5473 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:52.956000 audit[5477]: CRED_ACQ pid=5477 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:52.963636 kernel: audit: type=1103 audit(1768372132.956:876): pid=5477 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:53.303181 sshd[5477]: Connection closed by 20.161.92.111 port 50910 Jan 14 06:28:53.305733 sshd-session[5473]: pam_unix(sshd:session): session closed for user core Jan 14 06:28:53.306000 audit[5473]: USER_END pid=5473 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:53.317633 kernel: audit: type=1106 audit(1768372133.306:877): pid=5473 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:53.318613 systemd[1]: sshd@25-10.230.48.98:22-20.161.92.111:50910.service: Deactivated successfully. Jan 14 06:28:53.310000 audit[5473]: CRED_DISP pid=5473 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:53.324291 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 06:28:53.328283 systemd-logind[1607]: Session 23 logged out. Waiting for processes to exit. Jan 14 06:28:53.328614 kernel: audit: type=1104 audit(1768372133.310:878): pid=5473 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:53.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.230.48.98:22-20.161.92.111:50910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:53.331068 systemd-logind[1607]: Removed session 23. 
Jan 14 06:28:53.406861 systemd[1]: Started sshd@26-10.230.48.98:22-20.161.92.111:39012.service - OpenSSH per-connection server daemon (20.161.92.111:39012). Jan 14 06:28:53.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.230.48.98:22-20.161.92.111:39012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:53.911000 audit[5489]: USER_ACCT pid=5489 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:53.913027 sshd[5489]: Accepted publickey for core from 20.161.92.111 port 39012 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:28:53.912000 audit[5489]: CRED_ACQ pid=5489 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:53.912000 audit[5489]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff1eebe860 a2=3 a3=0 items=0 ppid=1 pid=5489 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:53.912000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:53.915783 sshd-session[5489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:28:53.923828 systemd-logind[1607]: New session 24 of user core. Jan 14 06:28:53.931792 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 14 06:28:53.935000 audit[5489]: USER_START pid=5489 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:53.937000 audit[5493]: CRED_ACQ pid=5493 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:54.639858 sshd[5493]: Connection closed by 20.161.92.111 port 39012 Jan 14 06:28:54.645430 sshd-session[5489]: pam_unix(sshd:session): session closed for user core Jan 14 06:28:54.649000 audit[5489]: USER_END pid=5489 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:54.649000 audit[5489]: CRED_DISP pid=5489 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:54.656614 systemd[1]: sshd@26-10.230.48.98:22-20.161.92.111:39012.service: Deactivated successfully. 
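In the records above that predate authentication (the per-connection SERVICE_START from PID 1 and the USER_ACCT/CRED_ACQ from sshd-session), auid=4294967295 and ses=4294967295 are not real identifiers: they are (u32)-1, the kernel's sentinel for an unset login UID and audit session, which is why the same fields switch to auid=500 and ses=24 once PAM opens the session. The value is just:

    AUDIT_UNSET = 2**32 - 1            # (u32)-1, printed when auid/ses are not yet set
    assert AUDIT_UNSET == 4294967295

    def describe_auid(auid):
        # Illustrative helper: treat the sentinel as "no login uid recorded".
        return "unset" if auid == AUDIT_UNSET else str(auid)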
Jan 14 06:28:54.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.230.48.98:22-20.161.92.111:39012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:54.660960 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 06:28:54.667241 systemd-logind[1607]: Session 24 logged out. Waiting for processes to exit. Jan 14 06:28:54.670324 systemd-logind[1607]: Removed session 24. Jan 14 06:28:54.746948 systemd[1]: Started sshd@27-10.230.48.98:22-20.161.92.111:39014.service - OpenSSH per-connection server daemon (20.161.92.111:39014). Jan 14 06:28:54.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.230.48.98:22-20.161.92.111:39014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:55.279000 audit[5509]: USER_ACCT pid=5509 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:55.281704 sshd[5509]: Accepted publickey for core from 20.161.92.111 port 39014 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:28:55.280000 audit[5509]: CRED_ACQ pid=5509 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:55.280000 audit[5509]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc0fabce40 a2=3 a3=0 items=0 ppid=1 pid=5509 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:55.280000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:55.285403 kubelet[2961]: E0114 06:28:55.285350 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:28:55.286490 kubelet[2961]: E0114 06:28:55.286372 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed 
to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78797b75b4-rh22t" podUID="dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd" Jan 14 06:28:55.286667 sshd-session[5509]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:28:55.296689 systemd-logind[1607]: New session 25 of user core. Jan 14 06:28:55.303909 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 14 06:28:55.309000 audit[5509]: USER_START pid=5509 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:55.312000 audit[5513]: CRED_ACQ pid=5513 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:56.354000 audit[5523]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:28:56.354000 audit[5523]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffeab0afac0 a2=0 a3=7ffeab0afaac items=0 ppid=3077 pid=5523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:56.354000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:28:56.366000 audit[5523]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5523 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:28:56.366000 audit[5523]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffeab0afac0 a2=0 a3=0 items=0 ppid=3077 pid=5523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:56.366000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:28:56.414000 audit[5525]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:28:56.414000 audit[5525]: SYSCALL arch=c000003e syscall=46 success=yes exit=14176 a0=3 a1=7ffc3c3889d0 a2=0 a3=7ffc3c3889bc items=0 ppid=3077 pid=5525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:56.414000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:28:56.416000 audit[5525]: NETFILTER_CFG table=nat:147 family=2 entries=20 
op=nft_register_rule pid=5525 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:28:56.416000 audit[5525]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc3c3889d0 a2=0 a3=0 items=0 ppid=3077 pid=5525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:56.416000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:28:56.456998 sshd[5513]: Connection closed by 20.161.92.111 port 39014 Jan 14 06:28:56.458225 sshd-session[5509]: pam_unix(sshd:session): session closed for user core Jan 14 06:28:56.462000 audit[5509]: USER_END pid=5509 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:56.462000 audit[5509]: CRED_DISP pid=5509 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:56.471960 systemd-logind[1607]: Session 25 logged out. Waiting for processes to exit. Jan 14 06:28:56.472248 systemd[1]: sshd@27-10.230.48.98:22-20.161.92.111:39014.service: Deactivated successfully. Jan 14 06:28:56.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.230.48.98:22-20.161.92.111:39014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:56.475759 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 06:28:56.479810 systemd-logind[1607]: Removed session 25. Jan 14 06:28:56.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.230.48.98:22-20.161.92.111:39016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:56.562088 systemd[1]: Started sshd@28-10.230.48.98:22-20.161.92.111:39016.service - OpenSSH per-connection server daemon (20.161.92.111:39016). 
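The NETFILTER_CFG records above come in pairs, a filter-table and a nat-table nft_register_rule from short-lived children of PID 3077, and their PROCTITLE field preserves the full command line with NUL bytes between arguments. Decoding the hex value logged for pid 5523/5525 recovers the exact invocation; a sketch using that value:

    raw = bytes.fromhex(
        "69707461626C65732D726573746F7265002D770035002D5700"
        "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
    argv = [part.decode() for part in raw.split(b"\x00")]
    print(argv)
    # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']

So the rules are being reloaded with iptables-restore -w 5 -W 100000 --noflush --counters through the xtables-nft-multi backend named in the exe= field.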
Jan 14 06:28:57.109000 audit[5530]: USER_ACCT pid=5530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:57.111003 sshd[5530]: Accepted publickey for core from 20.161.92.111 port 39016 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:28:57.111000 audit[5530]: CRED_ACQ pid=5530 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:57.111000 audit[5530]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fffe3619ff0 a2=3 a3=0 items=0 ppid=1 pid=5530 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:57.111000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:57.114070 sshd-session[5530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:28:57.123698 systemd-logind[1607]: New session 26 of user core. Jan 14 06:28:57.133907 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 14 06:28:57.138000 audit[5530]: USER_START pid=5530 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:57.141000 audit[5534]: CRED_ACQ pid=5534 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:57.719612 sshd[5534]: Connection closed by 20.161.92.111 port 39016 Jan 14 06:28:57.720533 sshd-session[5530]: pam_unix(sshd:session): session closed for user core Jan 14 06:28:57.722000 audit[5530]: USER_END pid=5530 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:57.728721 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 14 06:28:57.728853 kernel: audit: type=1106 audit(1768372137.722:908): pid=5530 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:57.728000 audit[5530]: CRED_DISP pid=5530 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:57.736033 systemd[1]: sshd@28-10.230.48.98:22-20.161.92.111:39016.service: Deactivated 
successfully. Jan 14 06:28:57.740799 kernel: audit: type=1104 audit(1768372137.728:909): pid=5530 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:57.740776 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 06:28:57.742184 systemd-logind[1607]: Session 26 logged out. Waiting for processes to exit. Jan 14 06:28:57.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.230.48.98:22-20.161.92.111:39016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:57.749580 kernel: audit: type=1131 audit(1768372137.735:910): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.230.48.98:22-20.161.92.111:39016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:57.749925 systemd-logind[1607]: Removed session 26. Jan 14 06:28:57.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.230.48.98:22-20.161.92.111:39020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:57.822491 systemd[1]: Started sshd@29-10.230.48.98:22-20.161.92.111:39020.service - OpenSSH per-connection server daemon (20.161.92.111:39020). Jan 14 06:28:57.828332 kernel: audit: type=1130 audit(1768372137.821:911): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.230.48.98:22-20.161.92.111:39020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:28:58.358000 audit[5545]: USER_ACCT pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:58.368740 kernel: audit: type=1101 audit(1768372138.358:912): pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:58.370431 sshd[5545]: Accepted publickey for core from 20.161.92.111 port 39020 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:28:58.375362 sshd-session[5545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:28:58.371000 audit[5545]: CRED_ACQ pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:58.384130 kernel: audit: type=1103 audit(1768372138.371:913): pid=5545 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:58.390650 kernel: audit: type=1006 audit(1768372138.371:914): pid=5545 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 14 06:28:58.371000 audit[5545]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7c5928c0 a2=3 a3=0 items=0 ppid=1 pid=5545 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:58.400912 kernel: audit: type=1300 audit(1768372138.371:914): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc7c5928c0 a2=3 a3=0 items=0 ppid=1 pid=5545 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:28:58.401500 systemd-logind[1607]: New session 27 of user core. Jan 14 06:28:58.371000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:58.408924 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jan 14 06:28:58.409603 kernel: audit: type=1327 audit(1768372138.371:914): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:28:58.415000 audit[5545]: USER_START pid=5545 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:58.424026 kernel: audit: type=1105 audit(1768372138.415:915): pid=5545 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:58.426000 audit[5549]: CRED_ACQ pid=5549 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:58.756907 sshd[5549]: Connection closed by 20.161.92.111 port 39020 Jan 14 06:28:58.758250 sshd-session[5545]: pam_unix(sshd:session): session closed for user core Jan 14 06:28:58.761000 audit[5545]: USER_END pid=5545 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:58.762000 audit[5545]: CRED_DISP pid=5545 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:28:58.767180 systemd-logind[1607]: Session 27 logged out. Waiting for processes to exit. Jan 14 06:28:58.768165 systemd[1]: sshd@29-10.230.48.98:22-20.161.92.111:39020.service: Deactivated successfully. Jan 14 06:28:58.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.230.48.98:22-20.161.92.111:39020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:28:58.772705 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 06:28:58.776267 systemd-logind[1607]: Removed session 27. 
Jan 14 06:28:59.302192 containerd[1636]: time="2026-01-14T06:28:59.287681268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 06:28:59.631554 containerd[1636]: time="2026-01-14T06:28:59.631197653Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:28:59.632771 containerd[1636]: time="2026-01-14T06:28:59.632646188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 06:28:59.633636 containerd[1636]: time="2026-01-14T06:28:59.632749939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 06:28:59.633726 kubelet[2961]: E0114 06:28:59.633012 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:28:59.633726 kubelet[2961]: E0114 06:28:59.633110 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 06:28:59.633726 kubelet[2961]: E0114 06:28:59.633391 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t62h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-549d4b77bd-jwpts_calico-system(e59a89b9-4020-44eb-8f82-b847f03cedae): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 06:28:59.634916 kubelet[2961]: E0114 06:28:59.634820 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d4b77bd-jwpts" podUID="e59a89b9-4020-44eb-8f82-b847f03cedae" Jan 14 06:29:02.285830 containerd[1636]: time="2026-01-14T06:29:02.285600940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:29:02.598064 containerd[1636]: time="2026-01-14T06:29:02.597730304Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:29:02.599365 containerd[1636]: time="2026-01-14T06:29:02.599182826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:29:02.599365 containerd[1636]: time="2026-01-14T06:29:02.599314795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:29:02.599765 kubelet[2961]: E0114 06:29:02.599694 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:29:02.600692 kubelet[2961]: E0114 06:29:02.600296 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:29:02.600692 kubelet[2961]: E0114 06:29:02.600605 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lv9nq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545d979dcd-jdf9d_calico-apiserver(b2f6e747-eff5-4e8e-b242-bf44361cfc2b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:29:02.602286 kubelet[2961]: E0114 06:29:02.601829 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-jdf9d" podUID="b2f6e747-eff5-4e8e-b242-bf44361cfc2b" Jan 14 06:29:03.285004 kubelet[2961]: E0114 06:29:03.284884 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-brmbk" podUID="c58c893f-2e4d-4df6-aa40-06b84b7b6bbc" Jan 14 06:29:03.868633 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 14 06:29:03.868825 kernel: audit: type=1130 audit(1768372143.862:920): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.230.48.98:22-20.161.92.111:60808 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:29:03.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.230.48.98:22-20.161.92.111:60808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:29:03.862862 systemd[1]: Started sshd@30-10.230.48.98:22-20.161.92.111:60808.service - OpenSSH per-connection server daemon (20.161.92.111:60808). Jan 14 06:29:04.287675 containerd[1636]: time="2026-01-14T06:29:04.287607871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 06:29:04.426000 audit[5586]: USER_ACCT pid=5586 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:04.428632 sshd[5586]: Accepted publickey for core from 20.161.92.111 port 60808 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:29:04.431096 sshd-session[5586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:29:04.432609 kernel: audit: type=1101 audit(1768372144.426:921): pid=5586 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:04.428000 audit[5586]: CRED_ACQ pid=5586 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:04.441660 kernel: audit: type=1103 audit(1768372144.428:922): pid=5586 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:04.428000 audit[5586]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea580a060 a2=3 a3=0 items=0 ppid=1 pid=5586 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:29:04.448666 kernel: audit: type=1006 audit(1768372144.428:923): pid=5586 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 14 06:29:04.448903 kernel: audit: type=1300 audit(1768372144.428:923): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffea580a060 a2=3 a3=0 items=0 ppid=1 pid=5586 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:29:04.428000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:29:04.453588 kernel: audit: type=1327 audit(1768372144.428:923): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:29:04.456388 systemd-logind[1607]: New session 28 of user core. Jan 14 06:29:04.465855 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jan 14 06:29:04.471000 audit[5586]: USER_START pid=5586 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:04.477000 audit[5590]: CRED_ACQ pid=5590 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:04.480114 kernel: audit: type=1105 audit(1768372144.471:924): pid=5586 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:04.480209 kernel: audit: type=1103 audit(1768372144.477:925): pid=5590 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:04.593842 containerd[1636]: time="2026-01-14T06:29:04.593670419Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:29:04.595197 containerd[1636]: time="2026-01-14T06:29:04.595004936Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 06:29:04.595197 containerd[1636]: time="2026-01-14T06:29:04.595095153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 06:29:04.595548 kubelet[2961]: E0114 06:29:04.595466 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:29:04.596051 kubelet[2961]: E0114 06:29:04.595612 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 06:29:04.596051 kubelet[2961]: E0114 06:29:04.595954 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bkclb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-545d979dcd-spmtb_calico-apiserver(e3bb8bbd-f33f-49cb-94d5-84718a161600): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 06:29:04.598447 kubelet[2961]: E0114 06:29:04.597798 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-spmtb" podUID="e3bb8bbd-f33f-49cb-94d5-84718a161600" Jan 14 06:29:04.886112 sshd[5590]: Connection closed by 20.161.92.111 port 60808 Jan 14 06:29:04.887236 sshd-session[5586]: pam_unix(sshd:session): session closed for user core Jan 14 06:29:04.890000 audit[5586]: USER_END pid=5586 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:04.899319 systemd[1]: sshd@30-10.230.48.98:22-20.161.92.111:60808.service: Deactivated successfully. 
Jan 14 06:29:04.900606 kernel: audit: type=1106 audit(1768372144.890:926): pid=5586 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:04.901766 kernel: audit: type=1104 audit(1768372144.890:927): pid=5586 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:04.890000 audit[5586]: CRED_DISP pid=5586 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:04.905522 systemd[1]: session-28.scope: Deactivated successfully. Jan 14 06:29:04.907422 systemd-logind[1607]: Session 28 logged out. Waiting for processes to exit. Jan 14 06:29:04.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.230.48.98:22-20.161.92.111:60808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:29:04.909736 systemd-logind[1607]: Removed session 28. Jan 14 06:29:06.000000 audit[5603]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5603 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:29:06.000000 audit[5603]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff506ee3d0 a2=0 a3=7fff506ee3bc items=0 ppid=3077 pid=5603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:29:06.000000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:29:06.010000 audit[5603]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5603 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 06:29:06.010000 audit[5603]: SYSCALL arch=c000003e syscall=46 success=yes exit=48684 a0=3 a1=7fff506ee3d0 a2=0 a3=7fff506ee3bc items=0 ppid=3077 pid=5603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:29:06.010000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 06:29:06.284941 containerd[1636]: time="2026-01-14T06:29:06.284835198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 06:29:06.330823 systemd[1]: Started sshd@31-10.230.48.98:22-64.225.73.213:54312.service - OpenSSH per-connection server daemon (64.225.73.213:54312). Jan 14 06:29:06.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.230.48.98:22-64.225.73.213:54312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:29:06.464043 sshd[5605]: Invalid user postgres from 64.225.73.213 port 54312 Jan 14 06:29:06.482582 sshd[5605]: Connection closed by invalid user postgres 64.225.73.213 port 54312 [preauth] Jan 14 06:29:06.483000 audit[5605]: USER_ERR pid=5605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=64.225.73.213 addr=64.225.73.213 terminal=ssh res=failed' Jan 14 06:29:06.486092 systemd[1]: sshd@31-10.230.48.98:22-64.225.73.213:54312.service: Deactivated successfully. Jan 14 06:29:06.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.230.48.98:22-64.225.73.213:54312 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:29:06.604145 containerd[1636]: time="2026-01-14T06:29:06.603755740Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:29:06.605375 containerd[1636]: time="2026-01-14T06:29:06.605240953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 06:29:06.606161 containerd[1636]: time="2026-01-14T06:29:06.605360695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 06:29:06.606247 kubelet[2961]: E0114 06:29:06.605586 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:29:06.606247 kubelet[2961]: E0114 06:29:06.605669 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 06:29:06.606247 kubelet[2961]: E0114 06:29:06.606150 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmpb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7g86f_calico-system(a5a91150-6e37-4bc7-abb4-c895c0d189ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 06:29:06.607916 containerd[1636]: time="2026-01-14T06:29:06.606145020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 06:29:06.919237 containerd[1636]: time="2026-01-14T06:29:06.918725740Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:29:06.920300 containerd[1636]: time="2026-01-14T06:29:06.920224800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 06:29:06.920537 containerd[1636]: time="2026-01-14T06:29:06.920286518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 06:29:06.921089 kubelet[2961]: E0114 06:29:06.920973 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 06:29:06.921089 kubelet[2961]: E0114 06:29:06.921075 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 
06:29:06.922192 kubelet[2961]: E0114 06:29:06.921459 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:d862720089aa46bd9f413e550c532138,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zl67l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78797b75b4-rh22t_calico-system(dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 06:29:06.922543 containerd[1636]: time="2026-01-14T06:29:06.922435609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 06:29:07.235052 containerd[1636]: time="2026-01-14T06:29:07.234707882Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:29:07.237247 containerd[1636]: time="2026-01-14T06:29:07.237099585Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 06:29:07.237247 containerd[1636]: time="2026-01-14T06:29:07.237134085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 06:29:07.237513 kubelet[2961]: E0114 06:29:07.237436 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:29:07.237513 kubelet[2961]: E0114 06:29:07.237507 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 06:29:07.238498 kubelet[2961]: E0114 06:29:07.238406 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmpb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7g86f_calico-system(a5a91150-6e37-4bc7-abb4-c895c0d189ea): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 06:29:07.239422 containerd[1636]: time="2026-01-14T06:29:07.239354600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 06:29:07.240288 kubelet[2961]: E0114 06:29:07.240226 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:29:07.553809 containerd[1636]: time="2026-01-14T06:29:07.553717040Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:29:07.555336 containerd[1636]: time="2026-01-14T06:29:07.554812588Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 06:29:07.555336 containerd[1636]: time="2026-01-14T06:29:07.554908083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 06:29:07.556841 kubelet[2961]: E0114 06:29:07.555598 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:29:07.556841 kubelet[2961]: E0114 06:29:07.555663 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 06:29:07.556841 kubelet[2961]: E0114 06:29:07.555833 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zl67l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78797b75b4-rh22t_calico-system(dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 06:29:07.557276 kubelet[2961]: E0114 
06:29:07.557200 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78797b75b4-rh22t" podUID="dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd" Jan 14 06:29:09.998600 kernel: kauditd_printk_skb: 10 callbacks suppressed Jan 14 06:29:09.998809 kernel: audit: type=1130 audit(1768372149.992:934): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.230.48.98:22-20.161.92.111:60824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:29:09.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.230.48.98:22-20.161.92.111:60824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:29:09.993107 systemd[1]: Started sshd@32-10.230.48.98:22-20.161.92.111:60824.service - OpenSSH per-connection server daemon (20.161.92.111:60824). Jan 14 06:29:10.532081 sshd[5624]: Accepted publickey for core from 20.161.92.111 port 60824 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:29:10.531000 audit[5624]: USER_ACCT pid=5624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:10.537315 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:29:10.539660 kernel: audit: type=1101 audit(1768372150.531:935): pid=5624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:10.532000 audit[5624]: CRED_ACQ pid=5624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:10.547841 kernel: audit: type=1103 audit(1768372150.532:936): pid=5624 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:10.547908 kernel: audit: type=1006 audit(1768372150.532:937): pid=5624 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jan 14 06:29:10.532000 audit[5624]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff189d7c20 a2=3 a3=0 items=0 ppid=1 pid=5624 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:29:10.557613 kernel: audit: type=1300 audit(1768372150.532:937): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7fff189d7c20 a2=3 a3=0 items=0 ppid=1 pid=5624 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:29:10.532000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:29:10.562770 systemd-logind[1607]: New session 29 of user core. Jan 14 06:29:10.565658 kernel: audit: type=1327 audit(1768372150.532:937): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:29:10.569156 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 14 06:29:10.578000 audit[5624]: USER_START pid=5624 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:10.585598 kernel: audit: type=1105 audit(1768372150.578:938): pid=5624 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:10.587000 audit[5631]: CRED_ACQ pid=5631 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:10.595051 kernel: audit: type=1103 audit(1768372150.587:939): pid=5631 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:11.027560 sshd[5631]: Connection closed by 20.161.92.111 port 60824 Jan 14 06:29:11.029622 sshd-session[5624]: pam_unix(sshd:session): session closed for user core Jan 14 06:29:11.034000 audit[5624]: USER_END pid=5624 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:11.041883 kernel: audit: type=1106 audit(1768372151.034:940): pid=5624 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:11.044271 systemd[1]: sshd@32-10.230.48.98:22-20.161.92.111:60824.service: Deactivated successfully. 
Jan 14 06:29:11.040000 audit[5624]: CRED_DISP pid=5624 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:11.053585 kernel: audit: type=1104 audit(1768372151.040:941): pid=5624 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:11.053719 systemd[1]: session-29.scope: Deactivated successfully. Jan 14 06:29:11.055783 systemd-logind[1607]: Session 29 logged out. Waiting for processes to exit. Jan 14 06:29:11.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.230.48.98:22-20.161.92.111:60824 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:29:11.060078 systemd-logind[1607]: Removed session 29. Jan 14 06:29:12.290600 kubelet[2961]: E0114 06:29:12.290367 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-549d4b77bd-jwpts" podUID="e59a89b9-4020-44eb-8f82-b847f03cedae" Jan 14 06:29:16.133272 systemd[1]: Started sshd@33-10.230.48.98:22-20.161.92.111:43172.service - OpenSSH per-connection server daemon (20.161.92.111:43172). Jan 14 06:29:16.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.230.48.98:22-20.161.92.111:43172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 06:29:16.135129 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 06:29:16.135232 kernel: audit: type=1130 audit(1768372156.132:943): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.230.48.98:22-20.161.92.111:43172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:29:16.289297 containerd[1636]: time="2026-01-14T06:29:16.288880552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 06:29:16.291552 kubelet[2961]: E0114 06:29:16.291380 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-jdf9d" podUID="b2f6e747-eff5-4e8e-b242-bf44361cfc2b" Jan 14 06:29:16.613820 containerd[1636]: time="2026-01-14T06:29:16.613761803Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 06:29:16.614954 containerd[1636]: time="2026-01-14T06:29:16.614912818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 06:29:16.615107 containerd[1636]: time="2026-01-14T06:29:16.615025063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 06:29:16.616847 kubelet[2961]: E0114 06:29:16.616786 2961 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:29:16.616929 kubelet[2961]: E0114 06:29:16.616861 2961 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 06:29:16.617221 kubelet[2961]: E0114 06:29:16.617113 2961 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6bt9h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-brmbk_calico-system(c58c893f-2e4d-4df6-aa40-06b84b7b6bbc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 06:29:16.618800 kubelet[2961]: E0114 06:29:16.618762 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-brmbk" podUID="c58c893f-2e4d-4df6-aa40-06b84b7b6bbc" Jan 14 06:29:16.677000 audit[5646]: USER_ACCT pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:16.687368 kernel: audit: type=1101 audit(1768372156.677:944): pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:16.687472 sshd[5646]: Accepted publickey for core from 20.161.92.111 port 43172 ssh2: RSA SHA256:tj463jkrTT8lF3ZvXQZumbZgiwM6q5Q7/2H7rjo1bUE Jan 14 06:29:16.694445 sshd-session[5646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 06:29:16.691000 audit[5646]: CRED_ACQ pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:16.699595 kernel: audit: type=1103 audit(1768372156.691:945): pid=5646 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:16.703597 kernel: audit: type=1006 audit(1768372156.692:946): pid=5646 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jan 14 06:29:16.692000 audit[5646]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1d3a5010 a2=3 a3=0 items=0 ppid=1 pid=5646 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:29:16.692000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:29:16.713385 kernel: audit: type=1300 audit(1768372156.692:946): arch=c000003e syscall=1 success=yes exit=3 a0=8 a1=7ffc1d3a5010 a2=3 a3=0 items=0 ppid=1 pid=5646 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 06:29:16.713457 kernel: audit: type=1327 audit(1768372156.692:946): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 06:29:16.722114 systemd-logind[1607]: New session 30 of user core. Jan 14 06:29:16.727890 systemd[1]: Started session-30.scope - Session 30 of User core. 
Jan 14 06:29:16.735000 audit[5646]: USER_START pid=5646 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:16.749606 kernel: audit: type=1105 audit(1768372156.735:947): pid=5646 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:16.749000 audit[5650]: CRED_ACQ pid=5650 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:16.755623 kernel: audit: type=1103 audit(1768372156.749:948): pid=5650 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:17.103322 sshd[5650]: Connection closed by 20.161.92.111 port 43172 Jan 14 06:29:17.105097 sshd-session[5646]: pam_unix(sshd:session): session closed for user core Jan 14 06:29:17.107000 audit[5646]: USER_END pid=5646 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:17.119625 kernel: audit: type=1106 audit(1768372157.107:949): pid=5646 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:17.123510 systemd-logind[1607]: Session 30 logged out. Waiting for processes to exit. Jan 14 06:29:17.124279 systemd[1]: sshd@33-10.230.48.98:22-20.161.92.111:43172.service: Deactivated successfully. Jan 14 06:29:17.128165 systemd[1]: session-30.scope: Deactivated successfully. Jan 14 06:29:17.107000 audit[5646]: CRED_DISP pid=5646 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:17.133207 systemd-logind[1607]: Removed session 30. Jan 14 06:29:17.137611 kernel: audit: type=1104 audit(1768372157.107:950): pid=5646 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 06:29:17.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-10.230.48.98:22-20.161.92.111:43172 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 06:29:17.286131 kubelet[2961]: E0114 06:29:17.285117 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-545d979dcd-spmtb" podUID="e3bb8bbd-f33f-49cb-94d5-84718a161600" Jan 14 06:29:18.289179 kubelet[2961]: E0114 06:29:18.289084 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7g86f" podUID="a5a91150-6e37-4bc7-abb4-c895c0d189ea" Jan 14 06:29:20.291025 kubelet[2961]: E0114 06:29:20.290880 2961 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-78797b75b4-rh22t" podUID="dd1b4bf9-58c2-4dd7-a38f-7c8b7d976bbd"