Jan 13 23:47:29.378383 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 23:47:29.378406 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Jan 13 21:43:11 -00 2026
Jan 13 23:47:29.378417 kernel: KASLR enabled
Jan 13 23:47:29.378423 kernel: efi: EFI v2.7 by EDK II
Jan 13 23:47:29.378430 kernel: efi: SMBIOS 3.0=0x43bed0000 MEMATTR=0x43a714018 ACPI 2.0=0x438430018 RNG=0x43843e818 MEMRESERVE=0x438357218
Jan 13 23:47:29.378435 kernel: random: crng init done
Jan 13 23:47:29.378443 kernel: secureboot: Secure boot disabled
Jan 13 23:47:29.378449 kernel: ACPI: Early table checksum verification disabled
Jan 13 23:47:29.378455 kernel: ACPI: RSDP 0x0000000438430018 000024 (v02 BOCHS )
Jan 13 23:47:29.378462 kernel: ACPI: XSDT 0x000000043843FE98 000074 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 23:47:29.378469 kernel: ACPI: FACP 0x000000043843FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 23:47:29.378475 kernel: ACPI: DSDT 0x0000000438437518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 23:47:29.378481 kernel: ACPI: APIC 0x000000043843FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 23:47:29.378488 kernel: ACPI: PPTT 0x000000043843D898 000114 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 23:47:29.378497 kernel: ACPI: GTDT 0x000000043843E898 000068 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 23:47:29.378504 kernel: ACPI: MCFG 0x000000043843FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 23:47:29.378510 kernel: ACPI: SPCR 0x000000043843E498 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 23:47:29.378517 kernel: ACPI: DBG2 0x000000043843E798 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 23:47:29.378523 kernel: ACPI: SRAT 0x000000043843E518 0000A0 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 23:47:29.378530 kernel: ACPI: IORT 0x000000043843E618 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 23:47:29.378536 kernel: ACPI: BGRT 0x000000043843E718 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 13 23:47:29.378543 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 13 23:47:29.378549 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 13 23:47:29.378557 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000-0x43fffffff]
Jan 13 23:47:29.378563 kernel: NODE_DATA(0) allocated [mem 0x43dff1a00-0x43dff8fff]
Jan 13 23:47:29.378570 kernel: Zone ranges:
Jan 13 23:47:29.378576 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 13 23:47:29.378583 kernel: DMA32 empty
Jan 13 23:47:29.378589 kernel: Normal [mem 0x0000000100000000-0x000000043fffffff]
Jan 13 23:47:29.378595 kernel: Device empty
Jan 13 23:47:29.378602 kernel: Movable zone start for each node
Jan 13 23:47:29.378608 kernel: Early memory node ranges
Jan 13 23:47:29.378615 kernel: node 0: [mem 0x0000000040000000-0x000000043843ffff]
Jan 13 23:47:29.378621 kernel: node 0: [mem 0x0000000438440000-0x000000043872ffff]
Jan 13 23:47:29.378628 kernel: node 0: [mem 0x0000000438730000-0x000000043bbfffff]
Jan 13 23:47:29.378636 kernel: node 0: [mem 0x000000043bc00000-0x000000043bfdffff]
Jan 13 23:47:29.378642 kernel: node 0: [mem 0x000000043bfe0000-0x000000043fffffff]
Jan 13 23:47:29.378649 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x000000043fffffff]
Jan 13 23:47:29.378655 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Jan 13 23:47:29.378662 kernel: psci: probing for conduit method from ACPI.
Jan 13 23:47:29.378671 kernel: psci: PSCIv1.3 detected in firmware.
Jan 13 23:47:29.378679 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 23:47:29.378686 kernel: psci: Trusted OS migration not required
Jan 13 23:47:29.378693 kernel: psci: SMC Calling Convention v1.1
Jan 13 23:47:29.378700 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 23:47:29.378707 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 13 23:47:29.378714 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 13 23:47:29.378721 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x2 -> Node 0
Jan 13 23:47:29.378728 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x3 -> Node 0
Jan 13 23:47:29.378736 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 13 23:47:29.378743 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 13 23:47:29.378750 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 13 23:47:29.378756 kernel: Detected PIPT I-cache on CPU0
Jan 13 23:47:29.378763 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 23:47:29.378770 kernel: CPU features: detected: Spectre-v4
Jan 13 23:47:29.378777 kernel: CPU features: detected: Spectre-BHB
Jan 13 23:47:29.378784 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 23:47:29.378791 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 23:47:29.378798 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 23:47:29.378805 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 23:47:29.378813 kernel: alternatives: applying boot alternatives
Jan 13 23:47:29.378821 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=a2e92265a189403c21ae2a2ae9e6d4fed0782e0e430fbcb369a7bb0db156274f
Jan 13 23:47:29.378828 kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 13 23:47:29.378835 kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 13 23:47:29.378842 kernel: Fallback order for Node 0: 0
Jan 13 23:47:29.378849 kernel: Built 1 zonelists, mobility grouping on. Total pages: 4194304
Jan 13 23:47:29.378855 kernel: Policy zone: Normal
Jan 13 23:47:29.378862 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 23:47:29.378869 kernel: software IO TLB: area num 4.
Jan 13 23:47:29.378876 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Jan 13 23:47:29.378885 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 23:47:29.378891 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 23:47:29.378899 kernel: rcu: RCU event tracing is enabled.
Jan 13 23:47:29.378906 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 23:47:29.378913 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 23:47:29.378920 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 23:47:29.378927 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 23:47:29.378934 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 23:47:29.378941 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 23:47:29.378949 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 23:47:29.378972 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 23:47:29.378981 kernel: GICv3: 256 SPIs implemented
Jan 13 23:47:29.378988 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 23:47:29.378995 kernel: Root IRQ handler: gic_handle_irq
Jan 13 23:47:29.379002 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 23:47:29.379009 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jan 13 23:47:29.379015 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 23:47:29.379022 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 23:47:29.379029 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100110000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 23:47:29.379037 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100120000 (flat, esz 8, psz 64K, shr 1)
Jan 13 23:47:29.379043 kernel: GICv3: using LPI property table @0x0000000100130000
Jan 13 23:47:29.379050 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100140000
Jan 13 23:47:29.379057 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 23:47:29.379066 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 23:47:29.379073 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 23:47:29.379080 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 23:47:29.379094 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 23:47:29.379102 kernel: arm-pv: using stolen time PV
Jan 13 23:47:29.379109 kernel: Console: colour dummy device 80x25
Jan 13 23:47:29.379117 kernel: ACPI: Core revision 20240827
Jan 13 23:47:29.379124 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 23:47:29.379133 kernel: pid_max: default: 32768 minimum: 301
Jan 13 23:47:29.379141 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 13 23:47:29.379148 kernel: landlock: Up and running.
Jan 13 23:47:29.379155 kernel: SELinux: Initializing.
Jan 13 23:47:29.379163 kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 23:47:29.379170 kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 23:47:29.379178 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 23:47:29.379185 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 23:47:29.379194 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 13 23:47:29.379201 kernel: Remapping and enabling EFI services.
Jan 13 23:47:29.379208 kernel: smp: Bringing up secondary CPUs ...
Jan 13 23:47:29.379215 kernel: Detected PIPT I-cache on CPU1
Jan 13 23:47:29.379223 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 23:47:29.379230 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100150000
Jan 13 23:47:29.379238 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 23:47:29.379246 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 23:47:29.379254 kernel: Detected PIPT I-cache on CPU2
Jan 13 23:47:29.379266 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 13 23:47:29.379275 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000100160000
Jan 13 23:47:29.379282 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 23:47:29.379290 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 13 23:47:29.379297 kernel: Detected PIPT I-cache on CPU3
Jan 13 23:47:29.379305 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 13 23:47:29.379314 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000100170000
Jan 13 23:47:29.379322 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 23:47:29.379329 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 13 23:47:29.379337 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 23:47:29.379344 kernel: SMP: Total of 4 processors activated.
Jan 13 23:47:29.379352 kernel: CPU: All CPU(s) started at EL1
Jan 13 23:47:29.379361 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 23:47:29.379369 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 23:47:29.379377 kernel: CPU features: detected: Common not Private translations
Jan 13 23:47:29.379385 kernel: CPU features: detected: CRC32 instructions
Jan 13 23:47:29.379393 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 23:47:29.379400 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 23:47:29.379408 kernel: CPU features: detected: LSE atomic instructions
Jan 13 23:47:29.379417 kernel: CPU features: detected: Privileged Access Never
Jan 13 23:47:29.379424 kernel: CPU features: detected: RAS Extension Support
Jan 13 23:47:29.379432 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 23:47:29.379440 kernel: alternatives: applying system-wide alternatives
Jan 13 23:47:29.379447 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jan 13 23:47:29.379456 kernel: Memory: 16324368K/16777216K available (11200K kernel code, 2458K rwdata, 9092K rodata, 12480K init, 1038K bss, 430064K reserved, 16384K cma-reserved)
Jan 13 23:47:29.379463 kernel: devtmpfs: initialized
Jan 13 23:47:29.379472 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 23:47:29.379480 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 23:47:29.379488 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 23:47:29.379495 kernel: 0 pages in range for non-PLT usage
Jan 13 23:47:29.379503 kernel: 515152 pages in range for PLT usage
Jan 13 23:47:29.379510 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 23:47:29.379518 kernel: SMBIOS 3.0.0 present.
Jan 13 23:47:29.379525 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Jan 13 23:47:29.379534 kernel: DMI: Memory slots populated: 1/1
Jan 13 23:47:29.379542 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 23:47:29.379549 kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Jan 13 23:47:29.379557 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 23:47:29.379565 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 23:47:29.379572 kernel: audit: initializing netlink subsys (disabled)
Jan 13 23:47:29.379580 kernel: audit: type=2000 audit(0.038:1): state=initialized audit_enabled=0 res=1
Jan 13 23:47:29.379589 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 23:47:29.379596 kernel: cpuidle: using governor menu
Jan 13 23:47:29.379604 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 23:47:29.379612 kernel: ASID allocator initialised with 32768 entries
Jan 13 23:47:29.379619 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 23:47:29.379627 kernel: Serial: AMBA PL011 UART driver
Jan 13 23:47:29.379635 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 23:47:29.379644 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 23:47:29.379651 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 23:47:29.379659 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 23:47:29.379667 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 23:47:29.379674 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 23:47:29.379681 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 23:47:29.379689 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 23:47:29.379696 kernel: ACPI: Added _OSI(Module Device)
Jan 13 23:47:29.379705 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 23:47:29.379713 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 23:47:29.379721 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 23:47:29.379728 kernel: ACPI: Interpreter enabled
Jan 13 23:47:29.379736 kernel: ACPI: Using GIC for interrupt routing
Jan 13 23:47:29.379744 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 23:47:29.379752 kernel: ACPI: CPU0 has been hot-added
Jan 13 23:47:29.379760 kernel: ACPI: CPU1 has been hot-added
Jan 13 23:47:29.379768 kernel: ACPI: CPU2 has been hot-added
Jan 13 23:47:29.379775 kernel: ACPI: CPU3 has been hot-added
Jan 13 23:47:29.379783 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 23:47:29.379791 kernel: printk: legacy console [ttyAMA0] enabled
Jan 13 23:47:29.379798 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 23:47:29.379987 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 23:47:29.380088 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 23:47:29.380170 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 23:47:29.380252 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 23:47:29.380332 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 23:47:29.380341 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 23:47:29.380350
kernel: PCI host bridge to bus 0000:00 Jan 13 23:47:29.380454 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 13 23:47:29.380564 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 13 23:47:29.380641 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 13 23:47:29.380714 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 13 23:47:29.380818 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Jan 13 23:47:29.380913 kernel: pci 0000:00:01.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.381019 kernel: pci 0000:00:01.0: BAR 0 [mem 0x125a0000-0x125a0fff] Jan 13 23:47:29.381111 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 13 23:47:29.381205 kernel: pci 0000:00:01.0: bridge window [mem 0x12400000-0x124fffff] Jan 13 23:47:29.381285 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref] Jan 13 23:47:29.381381 kernel: pci 0000:00:01.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.381467 kernel: pci 0000:00:01.1: BAR 0 [mem 0x1259f000-0x1259ffff] Jan 13 23:47:29.381550 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 13 23:47:29.381632 kernel: pci 0000:00:01.1: bridge window [mem 0x12300000-0x123fffff] Jan 13 23:47:29.381721 kernel: pci 0000:00:01.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.381804 kernel: pci 0000:00:01.2: BAR 0 [mem 0x1259e000-0x1259efff] Jan 13 23:47:29.381888 kernel: pci 0000:00:01.2: PCI bridge to [bus 03] Jan 13 23:47:29.381976 kernel: pci 0000:00:01.2: bridge window [mem 0x12200000-0x122fffff] Jan 13 23:47:29.382058 kernel: pci 0000:00:01.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref] Jan 13 23:47:29.382146 kernel: pci 0000:00:01.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.382227 kernel: pci 0000:00:01.3: BAR 0 [mem 0x1259d000-0x1259dfff] Jan 13 23:47:29.382306 kernel: pci 0000:00:01.3: PCI bridge to [bus 04] Jan 13 23:47:29.382403 kernel: pci 0000:00:01.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref] Jan 13 23:47:29.382493 kernel: pci 0000:00:01.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.382573 kernel: pci 0000:00:01.4: BAR 0 [mem 0x1259c000-0x1259cfff] Jan 13 23:47:29.382652 kernel: pci 0000:00:01.4: PCI bridge to [bus 05] Jan 13 23:47:29.382731 kernel: pci 0000:00:01.4: bridge window [mem 0x12100000-0x121fffff] Jan 13 23:47:29.382811 kernel: pci 0000:00:01.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref] Jan 13 23:47:29.382901 kernel: pci 0000:00:01.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.382992 kernel: pci 0000:00:01.5: BAR 0 [mem 0x1259b000-0x1259bfff] Jan 13 23:47:29.383073 kernel: pci 0000:00:01.5: PCI bridge to [bus 06] Jan 13 23:47:29.383153 kernel: pci 0000:00:01.5: bridge window [mem 0x12000000-0x120fffff] Jan 13 23:47:29.383234 kernel: pci 0000:00:01.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref] Jan 13 23:47:29.383321 kernel: pci 0000:00:01.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.383405 kernel: pci 0000:00:01.6: BAR 0 [mem 0x1259a000-0x1259afff] Jan 13 23:47:29.383484 kernel: pci 0000:00:01.6: PCI bridge to [bus 07] Jan 13 23:47:29.383574 kernel: pci 0000:00:01.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.383654 kernel: pci 0000:00:01.7: BAR 0 [mem 0x12599000-0x12599fff] Jan 13 23:47:29.383735 kernel: pci 0000:00:01.7: PCI bridge to [bus 08] Jan 13 
23:47:29.383822 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.383906 kernel: pci 0000:00:02.0: BAR 0 [mem 0x12598000-0x12598fff] Jan 13 23:47:29.384001 kernel: pci 0000:00:02.0: PCI bridge to [bus 09] Jan 13 23:47:29.384093 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.384174 kernel: pci 0000:00:02.1: BAR 0 [mem 0x12597000-0x12597fff] Jan 13 23:47:29.384257 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a] Jan 13 23:47:29.384343 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.384424 kernel: pci 0000:00:02.2: BAR 0 [mem 0x12596000-0x12596fff] Jan 13 23:47:29.384505 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b] Jan 13 23:47:29.384593 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.384675 kernel: pci 0000:00:02.3: BAR 0 [mem 0x12595000-0x12595fff] Jan 13 23:47:29.384759 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c] Jan 13 23:47:29.384849 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.384935 kernel: pci 0000:00:02.4: BAR 0 [mem 0x12594000-0x12594fff] Jan 13 23:47:29.385043 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d] Jan 13 23:47:29.385139 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.385248 kernel: pci 0000:00:02.5: BAR 0 [mem 0x12593000-0x12593fff] Jan 13 23:47:29.385334 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e] Jan 13 23:47:29.385425 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.385506 kernel: pci 0000:00:02.6: BAR 0 [mem 0x12592000-0x12592fff] Jan 13 23:47:29.385585 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f] Jan 13 23:47:29.385671 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.385753 kernel: pci 0000:00:02.7: BAR 0 [mem 0x12591000-0x12591fff] Jan 13 23:47:29.385832 kernel: pci 0000:00:02.7: PCI bridge to [bus 10] Jan 13 23:47:29.385924 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.386015 kernel: pci 0000:00:03.0: BAR 0 [mem 0x12590000-0x12590fff] Jan 13 23:47:29.386097 kernel: pci 0000:00:03.0: PCI bridge to [bus 11] Jan 13 23:47:29.386183 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.386273 kernel: pci 0000:00:03.1: BAR 0 [mem 0x1258f000-0x1258ffff] Jan 13 23:47:29.386366 kernel: pci 0000:00:03.1: PCI bridge to [bus 12] Jan 13 23:47:29.386449 kernel: pci 0000:00:03.1: bridge window [io 0xf000-0xffff] Jan 13 23:47:29.386530 kernel: pci 0000:00:03.1: bridge window [mem 0x11e00000-0x11ffffff] Jan 13 23:47:29.386618 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.386699 kernel: pci 0000:00:03.2: BAR 0 [mem 0x1258e000-0x1258efff] Jan 13 23:47:29.386783 kernel: pci 0000:00:03.2: PCI bridge to [bus 13] Jan 13 23:47:29.386863 kernel: pci 0000:00:03.2: bridge window [io 0xe000-0xefff] Jan 13 23:47:29.386943 kernel: pci 0000:00:03.2: bridge window [mem 0x11c00000-0x11dfffff] Jan 13 23:47:29.387041 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.387124 kernel: pci 0000:00:03.3: BAR 0 [mem 0x1258d000-0x1258dfff] Jan 13 23:47:29.387204 kernel: pci 0000:00:03.3: PCI bridge to [bus 14] Jan 13 23:47:29.387286 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff] Jan 13 23:47:29.387367 kernel: pci 0000:00:03.3: bridge window [mem 
0x11a00000-0x11bfffff] Jan 13 23:47:29.387453 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.387535 kernel: pci 0000:00:03.4: BAR 0 [mem 0x1258c000-0x1258cfff] Jan 13 23:47:29.387615 kernel: pci 0000:00:03.4: PCI bridge to [bus 15] Jan 13 23:47:29.387695 kernel: pci 0000:00:03.4: bridge window [io 0xc000-0xcfff] Jan 13 23:47:29.387778 kernel: pci 0000:00:03.4: bridge window [mem 0x11800000-0x119fffff] Jan 13 23:47:29.387867 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.387947 kernel: pci 0000:00:03.5: BAR 0 [mem 0x1258b000-0x1258bfff] Jan 13 23:47:29.388042 kernel: pci 0000:00:03.5: PCI bridge to [bus 16] Jan 13 23:47:29.388124 kernel: pci 0000:00:03.5: bridge window [io 0xb000-0xbfff] Jan 13 23:47:29.388205 kernel: pci 0000:00:03.5: bridge window [mem 0x11600000-0x117fffff] Jan 13 23:47:29.388295 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.388377 kernel: pci 0000:00:03.6: BAR 0 [mem 0x1258a000-0x1258afff] Jan 13 23:47:29.388457 kernel: pci 0000:00:03.6: PCI bridge to [bus 17] Jan 13 23:47:29.388538 kernel: pci 0000:00:03.6: bridge window [io 0xa000-0xafff] Jan 13 23:47:29.388621 kernel: pci 0000:00:03.6: bridge window [mem 0x11400000-0x115fffff] Jan 13 23:47:29.388711 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.388798 kernel: pci 0000:00:03.7: BAR 0 [mem 0x12589000-0x12589fff] Jan 13 23:47:29.388883 kernel: pci 0000:00:03.7: PCI bridge to [bus 18] Jan 13 23:47:29.388995 kernel: pci 0000:00:03.7: bridge window [io 0x9000-0x9fff] Jan 13 23:47:29.389084 kernel: pci 0000:00:03.7: bridge window [mem 0x11200000-0x113fffff] Jan 13 23:47:29.389181 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.389266 kernel: pci 0000:00:04.0: BAR 0 [mem 0x12588000-0x12588fff] Jan 13 23:47:29.389349 kernel: pci 0000:00:04.0: PCI bridge to [bus 19] Jan 13 23:47:29.389428 kernel: pci 0000:00:04.0: bridge window [io 0x8000-0x8fff] Jan 13 23:47:29.389508 kernel: pci 0000:00:04.0: bridge window [mem 0x11000000-0x111fffff] Jan 13 23:47:29.389594 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.389674 kernel: pci 0000:00:04.1: BAR 0 [mem 0x12587000-0x12587fff] Jan 13 23:47:29.389753 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a] Jan 13 23:47:29.389834 kernel: pci 0000:00:04.1: bridge window [io 0x7000-0x7fff] Jan 13 23:47:29.389922 kernel: pci 0000:00:04.1: bridge window [mem 0x10e00000-0x10ffffff] Jan 13 23:47:29.390028 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.390117 kernel: pci 0000:00:04.2: BAR 0 [mem 0x12586000-0x12586fff] Jan 13 23:47:29.390198 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b] Jan 13 23:47:29.390276 kernel: pci 0000:00:04.2: bridge window [io 0x6000-0x6fff] Jan 13 23:47:29.390370 kernel: pci 0000:00:04.2: bridge window [mem 0x10c00000-0x10dfffff] Jan 13 23:47:29.390459 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.390540 kernel: pci 0000:00:04.3: BAR 0 [mem 0x12585000-0x12585fff] Jan 13 23:47:29.390639 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c] Jan 13 23:47:29.390721 kernel: pci 0000:00:04.3: bridge window [io 0x5000-0x5fff] Jan 13 23:47:29.390803 kernel: pci 0000:00:04.3: bridge window [mem 0x10a00000-0x10bfffff] Jan 13 23:47:29.390895 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 
23:47:29.390985 kernel: pci 0000:00:04.4: BAR 0 [mem 0x12584000-0x12584fff] Jan 13 23:47:29.391067 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d] Jan 13 23:47:29.391152 kernel: pci 0000:00:04.4: bridge window [io 0x4000-0x4fff] Jan 13 23:47:29.391235 kernel: pci 0000:00:04.4: bridge window [mem 0x10800000-0x109fffff] Jan 13 23:47:29.391325 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.391407 kernel: pci 0000:00:04.5: BAR 0 [mem 0x12583000-0x12583fff] Jan 13 23:47:29.391487 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e] Jan 13 23:47:29.391569 kernel: pci 0000:00:04.5: bridge window [io 0x3000-0x3fff] Jan 13 23:47:29.391651 kernel: pci 0000:00:04.5: bridge window [mem 0x10600000-0x107fffff] Jan 13 23:47:29.391738 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.391819 kernel: pci 0000:00:04.6: BAR 0 [mem 0x12582000-0x12582fff] Jan 13 23:47:29.391898 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f] Jan 13 23:47:29.392003 kernel: pci 0000:00:04.6: bridge window [io 0x2000-0x2fff] Jan 13 23:47:29.392103 kernel: pci 0000:00:04.6: bridge window [mem 0x10400000-0x105fffff] Jan 13 23:47:29.392198 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.392279 kernel: pci 0000:00:04.7: BAR 0 [mem 0x12581000-0x12581fff] Jan 13 23:47:29.392359 kernel: pci 0000:00:04.7: PCI bridge to [bus 20] Jan 13 23:47:29.392438 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x1fff] Jan 13 23:47:29.392517 kernel: pci 0000:00:04.7: bridge window [mem 0x10200000-0x103fffff] Jan 13 23:47:29.392606 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 13 23:47:29.392689 kernel: pci 0000:00:05.0: BAR 0 [mem 0x12580000-0x12580fff] Jan 13 23:47:29.392769 kernel: pci 0000:00:05.0: PCI bridge to [bus 21] Jan 13 23:47:29.392848 kernel: pci 0000:00:05.0: bridge window [io 0x0000-0x0fff] Jan 13 23:47:29.392927 kernel: pci 0000:00:05.0: bridge window [mem 0x10000000-0x101fffff] Jan 13 23:47:29.393041 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 13 23:47:29.393127 kernel: pci 0000:01:00.0: BAR 1 [mem 0x12400000-0x12400fff] Jan 13 23:47:29.393212 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Jan 13 23:47:29.393293 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 13 23:47:29.393383 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Jan 13 23:47:29.393465 kernel: pci 0000:02:00.0: BAR 0 [mem 0x12300000-0x12303fff 64bit] Jan 13 23:47:29.393555 kernel: pci 0000:03:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint Jan 13 23:47:29.393639 kernel: pci 0000:03:00.0: BAR 1 [mem 0x12200000-0x12200fff] Jan 13 23:47:29.393720 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref] Jan 13 23:47:29.393808 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 13 23:47:29.393891 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref] Jan 13 23:47:29.393992 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 13 23:47:29.394089 kernel: pci 0000:05:00.0: BAR 1 [mem 0x12100000-0x12100fff] Jan 13 23:47:29.394177 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref] Jan 13 23:47:29.394269 kernel: pci 0000:06:00.0: [1af4:1050] type 00 class 0x038000 PCIe Endpoint Jan 13 23:47:29.394364 kernel: pci 0000:06:00.0: BAR 1 [mem 0x12000000-0x12000fff] Jan 13 23:47:29.394449 kernel: pci 
0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref] Jan 13 23:47:29.394532 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jan 13 23:47:29.394617 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jan 13 23:47:29.394697 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jan 13 23:47:29.394784 kernel: pci 0000:00:01.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jan 13 23:47:29.394865 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jan 13 23:47:29.394947 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jan 13 23:47:29.395041 kernel: pci 0000:00:01.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 13 23:47:29.395122 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jan 13 23:47:29.395203 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jan 13 23:47:29.395285 kernel: pci 0000:00:01.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 13 23:47:29.395367 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jan 13 23:47:29.395447 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jan 13 23:47:29.395530 kernel: pci 0000:00:01.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 13 23:47:29.395610 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jan 13 23:47:29.395690 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Jan 13 23:47:29.395792 kernel: pci 0000:00:01.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 13 23:47:29.395881 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jan 13 23:47:29.395974 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jan 13 23:47:29.396061 kernel: pci 0000:00:01.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 13 23:47:29.396142 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 07] add_size 200000 add_align 100000 Jan 13 23:47:29.396222 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff] to [bus 07] add_size 200000 add_align 100000 Jan 13 23:47:29.396307 kernel: pci 0000:00:01.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 13 23:47:29.396391 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jan 13 23:47:29.396470 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jan 13 23:47:29.396554 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 13 23:47:29.396634 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jan 13 23:47:29.396714 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] 
add_size 200000 add_align 100000 Jan 13 23:47:29.396800 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 13 23:47:29.396880 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0a] add_size 200000 add_align 100000 Jan 13 23:47:29.396968 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff] to [bus 0a] add_size 200000 add_align 100000 Jan 13 23:47:29.397057 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000 Jan 13 23:47:29.397137 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jan 13 23:47:29.397216 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff] to [bus 0b] add_size 200000 add_align 100000 Jan 13 23:47:29.397305 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000 Jan 13 23:47:29.397385 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0c] add_size 200000 add_align 100000 Jan 13 23:47:29.397464 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 0c] add_size 200000 add_align 100000 Jan 13 23:47:29.397547 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000 Jan 13 23:47:29.397627 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0d] add_size 200000 add_align 100000 Jan 13 23:47:29.397706 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 0d] add_size 200000 add_align 100000 Jan 13 23:47:29.397793 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 13 23:47:29.397874 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0e] add_size 200000 add_align 100000 Jan 13 23:47:29.397953 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff] to [bus 0e] add_size 200000 add_align 100000 Jan 13 23:47:29.398057 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 13 23:47:29.398138 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0f] add_size 200000 add_align 100000 Jan 13 23:47:29.398222 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff] to [bus 0f] add_size 200000 add_align 100000 Jan 13 23:47:29.398308 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 13 23:47:29.398408 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 10] add_size 200000 add_align 100000 Jan 13 23:47:29.398489 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 10] add_size 200000 add_align 100000 Jan 13 23:47:29.398574 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 13 23:47:29.398655 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 11] add_size 200000 add_align 100000 Jan 13 23:47:29.398739 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 11] add_size 200000 add_align 100000 Jan 13 23:47:29.398823 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 13 23:47:29.398904 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 12] add_size 200000 add_align 100000 Jan 13 23:47:29.398994 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff] to [bus 12] add_size 200000 add_align 100000 
Jan 13 23:47:29.399079 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Jan 13 23:47:29.399162 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 13] add_size 200000 add_align 100000 Jan 13 23:47:29.399243 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff] to [bus 13] add_size 200000 add_align 100000 Jan 13 23:47:29.399327 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Jan 13 23:47:29.399408 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 14] add_size 200000 add_align 100000 Jan 13 23:47:29.399487 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff] to [bus 14] add_size 200000 add_align 100000 Jan 13 23:47:29.399572 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Jan 13 23:47:29.399655 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 15] add_size 200000 add_align 100000 Jan 13 23:47:29.399735 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff] to [bus 15] add_size 200000 add_align 100000 Jan 13 23:47:29.399820 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 13 23:47:29.399900 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 16] add_size 200000 add_align 100000 Jan 13 23:47:29.400005 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff] to [bus 16] add_size 200000 add_align 100000 Jan 13 23:47:29.400094 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 13 23:47:29.400179 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 17] add_size 200000 add_align 100000 Jan 13 23:47:29.400259 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff] to [bus 17] add_size 200000 add_align 100000 Jan 13 23:47:29.400341 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 13 23:47:29.400422 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 18] add_size 200000 add_align 100000 Jan 13 23:47:29.400503 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff] to [bus 18] add_size 200000 add_align 100000 Jan 13 23:47:29.400593 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 13 23:47:29.400675 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 19] add_size 200000 add_align 100000 Jan 13 23:47:29.400755 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 19] add_size 200000 add_align 100000 Jan 13 23:47:29.400840 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 13 23:47:29.400921 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1a] add_size 200000 add_align 100000 Jan 13 23:47:29.401019 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff] to [bus 1a] add_size 200000 add_align 100000 Jan 13 23:47:29.401110 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Jan 13 23:47:29.401191 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1b] add_size 200000 add_align 100000 Jan 13 23:47:29.401270 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff] to [bus 1b] add_size 200000 add_align 100000 Jan 13 23:47:29.401354 kernel: pci 
0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Jan 13 23:47:29.401435 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1c] add_size 200000 add_align 100000 Jan 13 23:47:29.401522 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff] to [bus 1c] add_size 200000 add_align 100000 Jan 13 23:47:29.401605 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 13 23:47:29.401688 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1d] add_size 200000 add_align 100000 Jan 13 23:47:29.401796 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff] to [bus 1d] add_size 200000 add_align 100000 Jan 13 23:47:29.401886 kernel: pci 0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 13 23:47:29.401976 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1e] add_size 200000 add_align 100000 Jan 13 23:47:29.402061 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff] to [bus 1e] add_size 200000 add_align 100000 Jan 13 23:47:29.402144 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jan 13 23:47:29.402224 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1f] add_size 200000 add_align 100000 Jan 13 23:47:29.402304 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff] to [bus 1f] add_size 200000 add_align 100000 Jan 13 23:47:29.402400 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jan 13 23:47:29.402483 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 20] add_size 200000 add_align 100000 Jan 13 23:47:29.402566 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff] to [bus 20] add_size 200000 add_align 100000 Jan 13 23:47:29.402648 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jan 13 23:47:29.402729 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 21] add_size 200000 add_align 100000 Jan 13 23:47:29.402808 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 21] add_size 200000 add_align 100000 Jan 13 23:47:29.402891 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff]: assigned Jan 13 23:47:29.402984 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned Jan 13 23:47:29.403078 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff]: assigned Jan 13 23:47:29.403161 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Jan 13 23:47:29.403246 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff]: assigned Jan 13 23:47:29.403326 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Jan 13 23:47:29.403415 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff]: assigned Jan 13 23:47:29.403495 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Jan 13 23:47:29.403580 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff]: assigned Jan 13 23:47:29.403661 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Jan 13 23:47:29.403742 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Jan 13 23:47:29.403822 kernel: pci 0000:00:01.5: bridge 
window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Jan 13 23:47:29.403905 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Jan 13 23:47:29.404005 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Jan 13 23:47:29.404093 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Jan 13 23:47:29.404174 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Jan 13 23:47:29.404257 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]: assigned Jan 13 23:47:29.404339 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Jan 13 23:47:29.404421 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff]: assigned Jan 13 23:47:29.404503 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref]: assigned Jan 13 23:47:29.404587 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff]: assigned Jan 13 23:47:29.404668 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref]: assigned Jan 13 23:47:29.404753 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff]: assigned Jan 13 23:47:29.404840 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref]: assigned Jan 13 23:47:29.404921 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff]: assigned Jan 13 23:47:29.405026 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref]: assigned Jan 13 23:47:29.405113 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff]: assigned Jan 13 23:47:29.405198 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref]: assigned Jan 13 23:47:29.405282 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff]: assigned Jan 13 23:47:29.405362 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref]: assigned Jan 13 23:47:29.405447 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff]: assigned Jan 13 23:47:29.405529 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref]: assigned Jan 13 23:47:29.405612 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff]: assigned Jan 13 23:47:29.405697 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref]: assigned Jan 13 23:47:29.405780 kernel: pci 0000:00:03.1: bridge window [mem 0x12200000-0x123fffff]: assigned Jan 13 23:47:29.405862 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref]: assigned Jan 13 23:47:29.405946 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff]: assigned Jan 13 23:47:29.406036 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref]: assigned Jan 13 23:47:29.406121 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff]: assigned Jan 13 23:47:29.406224 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref]: assigned Jan 13 23:47:29.406312 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff]: assigned Jan 13 23:47:29.406408 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref]: assigned Jan 13 23:47:29.406497 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff]: assigned Jan 13 23:47:29.406582 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref]: assigned Jan 13 23:47:29.406668 
kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff]: assigned Jan 13 23:47:29.406754 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref]: assigned Jan 13 23:47:29.406862 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff]: assigned Jan 13 23:47:29.406947 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref]: assigned Jan 13 23:47:29.407053 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff]: assigned Jan 13 23:47:29.407135 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref]: assigned Jan 13 23:47:29.407219 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff]: assigned Jan 13 23:47:29.407302 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref]: assigned Jan 13 23:47:29.407391 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff]: assigned Jan 13 23:47:29.407476 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref]: assigned Jan 13 23:47:29.407580 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff]: assigned Jan 13 23:47:29.407663 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref]: assigned Jan 13 23:47:29.407748 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff]: assigned Jan 13 23:47:29.407828 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref]: assigned Jan 13 23:47:29.407918 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff]: assigned Jan 13 23:47:29.408010 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref]: assigned Jan 13 23:47:29.408094 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff]: assigned Jan 13 23:47:29.408175 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref]: assigned Jan 13 23:47:29.408260 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff]: assigned Jan 13 23:47:29.408344 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref]: assigned Jan 13 23:47:29.408432 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff]: assigned Jan 13 23:47:29.408516 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref]: assigned Jan 13 23:47:29.408601 kernel: pci 0000:00:01.0: BAR 0 [mem 0x14200000-0x14200fff]: assigned Jan 13 23:47:29.408685 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x1fff]: assigned Jan 13 23:47:29.408770 kernel: pci 0000:00:01.1: BAR 0 [mem 0x14201000-0x14201fff]: assigned Jan 13 23:47:29.408853 kernel: pci 0000:00:01.1: bridge window [io 0x2000-0x2fff]: assigned Jan 13 23:47:29.408935 kernel: pci 0000:00:01.2: BAR 0 [mem 0x14202000-0x14202fff]: assigned Jan 13 23:47:29.409044 kernel: pci 0000:00:01.2: bridge window [io 0x3000-0x3fff]: assigned Jan 13 23:47:29.409132 kernel: pci 0000:00:01.3: BAR 0 [mem 0x14203000-0x14203fff]: assigned Jan 13 23:47:29.409214 kernel: pci 0000:00:01.3: bridge window [io 0x4000-0x4fff]: assigned Jan 13 23:47:29.409316 kernel: pci 0000:00:01.4: BAR 0 [mem 0x14204000-0x14204fff]: assigned Jan 13 23:47:29.409398 kernel: pci 0000:00:01.4: bridge window [io 0x5000-0x5fff]: assigned Jan 13 23:47:29.409484 kernel: pci 0000:00:01.5: BAR 0 [mem 0x14205000-0x14205fff]: assigned Jan 13 23:47:29.409568 kernel: pci 0000:00:01.5: bridge window [io 0x6000-0x6fff]: assigned Jan 13 23:47:29.409657 kernel: pci 0000:00:01.6: BAR 0 [mem 
0x14206000-0x14206fff]: assigned Jan 13 23:47:29.409742 kernel: pci 0000:00:01.6: bridge window [io 0x7000-0x7fff]: assigned Jan 13 23:47:29.409828 kernel: pci 0000:00:01.7: BAR 0 [mem 0x14207000-0x14207fff]: assigned Jan 13 23:47:29.409932 kernel: pci 0000:00:01.7: bridge window [io 0x8000-0x8fff]: assigned Jan 13 23:47:29.410033 kernel: pci 0000:00:02.0: BAR 0 [mem 0x14208000-0x14208fff]: assigned Jan 13 23:47:29.410116 kernel: pci 0000:00:02.0: bridge window [io 0x9000-0x9fff]: assigned Jan 13 23:47:29.410201 kernel: pci 0000:00:02.1: BAR 0 [mem 0x14209000-0x14209fff]: assigned Jan 13 23:47:29.410289 kernel: pci 0000:00:02.1: bridge window [io 0xa000-0xafff]: assigned Jan 13 23:47:29.410383 kernel: pci 0000:00:02.2: BAR 0 [mem 0x1420a000-0x1420afff]: assigned Jan 13 23:47:29.410467 kernel: pci 0000:00:02.2: bridge window [io 0xb000-0xbfff]: assigned Jan 13 23:47:29.410549 kernel: pci 0000:00:02.3: BAR 0 [mem 0x1420b000-0x1420bfff]: assigned Jan 13 23:47:29.410629 kernel: pci 0000:00:02.3: bridge window [io 0xc000-0xcfff]: assigned Jan 13 23:47:29.410711 kernel: pci 0000:00:02.4: BAR 0 [mem 0x1420c000-0x1420cfff]: assigned Jan 13 23:47:29.410793 kernel: pci 0000:00:02.4: bridge window [io 0xd000-0xdfff]: assigned Jan 13 23:47:29.410875 kernel: pci 0000:00:02.5: BAR 0 [mem 0x1420d000-0x1420dfff]: assigned Jan 13 23:47:29.410963 kernel: pci 0000:00:02.5: bridge window [io 0xe000-0xefff]: assigned Jan 13 23:47:29.411064 kernel: pci 0000:00:02.6: BAR 0 [mem 0x1420e000-0x1420efff]: assigned Jan 13 23:47:29.411156 kernel: pci 0000:00:02.6: bridge window [io 0xf000-0xffff]: assigned Jan 13 23:47:29.411237 kernel: pci 0000:00:02.7: BAR 0 [mem 0x1420f000-0x1420ffff]: assigned Jan 13 23:47:29.411318 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.411398 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.411479 kernel: pci 0000:00:03.0: BAR 0 [mem 0x14210000-0x14210fff]: assigned Jan 13 23:47:29.411558 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.411640 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.411722 kernel: pci 0000:00:03.1: BAR 0 [mem 0x14211000-0x14211fff]: assigned Jan 13 23:47:29.411801 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.411880 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.411984 kernel: pci 0000:00:03.2: BAR 0 [mem 0x14212000-0x14212fff]: assigned Jan 13 23:47:29.412071 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.412158 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.412244 kernel: pci 0000:00:03.3: BAR 0 [mem 0x14213000-0x14213fff]: assigned Jan 13 23:47:29.412324 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.412403 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.412485 kernel: pci 0000:00:03.4: BAR 0 [mem 0x14214000-0x14214fff]: assigned Jan 13 23:47:29.412565 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.412645 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.412750 kernel: pci 0000:00:03.5: BAR 0 [mem 0x14215000-0x14215fff]: assigned Jan 13 23:47:29.412830 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't 
assign; no space Jan 13 23:47:29.412911 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.413017 kernel: pci 0000:00:03.6: BAR 0 [mem 0x14216000-0x14216fff]: assigned Jan 13 23:47:29.413101 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.413181 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.413267 kernel: pci 0000:00:03.7: BAR 0 [mem 0x14217000-0x14217fff]: assigned Jan 13 23:47:29.413346 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.413426 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.413508 kernel: pci 0000:00:04.0: BAR 0 [mem 0x14218000-0x14218fff]: assigned Jan 13 23:47:29.413588 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.413668 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.413749 kernel: pci 0000:00:04.1: BAR 0 [mem 0x14219000-0x14219fff]: assigned Jan 13 23:47:29.413830 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.413909 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.414000 kernel: pci 0000:00:04.2: BAR 0 [mem 0x1421a000-0x1421afff]: assigned Jan 13 23:47:29.414082 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.414162 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.414244 kernel: pci 0000:00:04.3: BAR 0 [mem 0x1421b000-0x1421bfff]: assigned Jan 13 23:47:29.414326 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.414419 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.414501 kernel: pci 0000:00:04.4: BAR 0 [mem 0x1421c000-0x1421cfff]: assigned Jan 13 23:47:29.414580 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.414659 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.414741 kernel: pci 0000:00:04.5: BAR 0 [mem 0x1421d000-0x1421dfff]: assigned Jan 13 23:47:29.414824 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.414904 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.414993 kernel: pci 0000:00:04.6: BAR 0 [mem 0x1421e000-0x1421efff]: assigned Jan 13 23:47:29.415076 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.415156 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.415239 kernel: pci 0000:00:04.7: BAR 0 [mem 0x1421f000-0x1421ffff]: assigned Jan 13 23:47:29.415318 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.415400 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.415482 kernel: pci 0000:00:05.0: BAR 0 [mem 0x14220000-0x14220fff]: assigned Jan 13 23:47:29.415561 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.415640 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.415721 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff]: assigned Jan 13 23:47:29.415801 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff]: assigned Jan 13 
23:47:29.415882 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff]: assigned Jan 13 23:47:29.415978 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff]: assigned Jan 13 23:47:29.416062 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff]: assigned Jan 13 23:47:29.416144 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff]: assigned Jan 13 23:47:29.416228 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff]: assigned Jan 13 23:47:29.416309 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff]: assigned Jan 13 23:47:29.416389 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff]: assigned Jan 13 23:47:29.416472 kernel: pci 0000:00:03.7: bridge window [io 0xa000-0xafff]: assigned Jan 13 23:47:29.416553 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff]: assigned Jan 13 23:47:29.416634 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff]: assigned Jan 13 23:47:29.416715 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff]: assigned Jan 13 23:47:29.416799 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff]: assigned Jan 13 23:47:29.416880 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff]: assigned Jan 13 23:47:29.416971 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.417056 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.417141 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.417221 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.417301 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.417381 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.417460 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.417540 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.417622 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.417705 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.417786 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.417866 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.417948 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.418041 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.418124 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.418207 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.418289 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.418391 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.418477 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.418557 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.418639 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.418721 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.418804 kernel: pci 0000:00:01.6: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.418884 kernel: pci 
0000:00:01.6: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.418977 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.419061 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.419144 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.419228 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.419309 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.419390 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.419472 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.419553 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.419637 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.419720 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.419802 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jan 13 23:47:29.419882 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Jan 13 23:47:29.419995 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Jan 13 23:47:29.420082 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jan 13 23:47:29.420168 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Jan 13 23:47:29.420249 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 13 23:47:29.420329 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff] Jan 13 23:47:29.420409 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 13 23:47:29.420497 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Jan 13 23:47:29.420578 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 13 23:47:29.420661 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff] Jan 13 23:47:29.420742 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jan 13 23:47:29.420829 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned Jan 13 23:47:29.420912 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Jan 13 23:47:29.421022 kernel: pci 0000:00:01.2: PCI bridge to [bus 03] Jan 13 23:47:29.421106 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff] Jan 13 23:47:29.421189 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 13 23:47:29.421276 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Jan 13 23:47:29.421357 kernel: pci 0000:00:01.3: PCI bridge to [bus 04] Jan 13 23:47:29.421437 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff] Jan 13 23:47:29.421516 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 13 23:47:29.421603 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned Jan 13 23:47:29.421688 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Jan 13 23:47:29.421768 kernel: pci 0000:00:01.4: PCI bridge to [bus 05] Jan 13 23:47:29.421849 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff] Jan 13 23:47:29.421928 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 13 23:47:29.422069 
kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Jan 13 23:47:29.422176 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Jan 13 23:47:29.422263 kernel: pci 0000:00:01.5: PCI bridge to [bus 06] Jan 13 23:47:29.422355 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 13 23:47:29.422441 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 13 23:47:29.422523 kernel: pci 0000:00:01.6: PCI bridge to [bus 07] Jan 13 23:47:29.422605 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 13 23:47:29.422686 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 13 23:47:29.422770 kernel: pci 0000:00:01.7: PCI bridge to [bus 08] Jan 13 23:47:29.422851 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 13 23:47:29.422940 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 13 23:47:29.423050 kernel: pci 0000:00:02.0: PCI bridge to [bus 09] Jan 13 23:47:29.423133 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Jan 13 23:47:29.423214 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 13 23:47:29.423298 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a] Jan 13 23:47:29.423380 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff] Jan 13 23:47:29.423460 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref] Jan 13 23:47:29.423541 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b] Jan 13 23:47:29.423630 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff] Jan 13 23:47:29.423716 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref] Jan 13 23:47:29.423798 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c] Jan 13 23:47:29.423879 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff] Jan 13 23:47:29.423970 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref] Jan 13 23:47:29.424055 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d] Jan 13 23:47:29.424138 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff] Jan 13 23:47:29.424222 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref] Jan 13 23:47:29.424305 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e] Jan 13 23:47:29.424387 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff] Jan 13 23:47:29.424469 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref] Jan 13 23:47:29.424550 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f] Jan 13 23:47:29.424632 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff] Jan 13 23:47:29.424713 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref] Jan 13 23:47:29.424801 kernel: pci 0000:00:02.7: PCI bridge to [bus 10] Jan 13 23:47:29.424882 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff] Jan 13 23:47:29.424969 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref] Jan 13 23:47:29.425056 kernel: pci 0000:00:03.0: PCI bridge to [bus 11] Jan 13 23:47:29.425138 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff] Jan 13 23:47:29.425218 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref] Jan 13 23:47:29.425301 kernel: pci 0000:00:03.1: PCI bridge to [bus 12] Jan 13 23:47:29.425384 kernel: pci 0000:00:03.1: bridge window [mem 
0x12200000-0x123fffff] Jan 13 23:47:29.425466 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref] Jan 13 23:47:29.425555 kernel: pci 0000:00:03.2: PCI bridge to [bus 13] Jan 13 23:47:29.425638 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff] Jan 13 23:47:29.425720 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff] Jan 13 23:47:29.425820 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref] Jan 13 23:47:29.425905 kernel: pci 0000:00:03.3: PCI bridge to [bus 14] Jan 13 23:47:29.426018 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff] Jan 13 23:47:29.426106 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff] Jan 13 23:47:29.426186 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref] Jan 13 23:47:29.426270 kernel: pci 0000:00:03.4: PCI bridge to [bus 15] Jan 13 23:47:29.426362 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff] Jan 13 23:47:29.426447 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff] Jan 13 23:47:29.426527 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref] Jan 13 23:47:29.426610 kernel: pci 0000:00:03.5: PCI bridge to [bus 16] Jan 13 23:47:29.426693 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff] Jan 13 23:47:29.426772 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff] Jan 13 23:47:29.426858 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref] Jan 13 23:47:29.426940 kernel: pci 0000:00:03.6: PCI bridge to [bus 17] Jan 13 23:47:29.427030 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff] Jan 13 23:47:29.427112 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff] Jan 13 23:47:29.427192 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref] Jan 13 23:47:29.427278 kernel: pci 0000:00:03.7: PCI bridge to [bus 18] Jan 13 23:47:29.427359 kernel: pci 0000:00:03.7: bridge window [io 0xa000-0xafff] Jan 13 23:47:29.427454 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff] Jan 13 23:47:29.427538 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref] Jan 13 23:47:29.427621 kernel: pci 0000:00:04.0: PCI bridge to [bus 19] Jan 13 23:47:29.427703 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff] Jan 13 23:47:29.427787 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff] Jan 13 23:47:29.427869 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref] Jan 13 23:47:29.427966 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a] Jan 13 23:47:29.428060 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff] Jan 13 23:47:29.428147 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff] Jan 13 23:47:29.428231 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref] Jan 13 23:47:29.428316 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b] Jan 13 23:47:29.428403 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff] Jan 13 23:47:29.428482 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff] Jan 13 23:47:29.428564 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref] Jan 13 23:47:29.428649 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c] Jan 13 23:47:29.428731 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff] Jan 13 23:47:29.428813 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff] Jan 13 
23:47:29.428913 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref] Jan 13 23:47:29.429009 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d] Jan 13 23:47:29.429091 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff] Jan 13 23:47:29.429171 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff] Jan 13 23:47:29.429251 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref] Jan 13 23:47:29.429333 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e] Jan 13 23:47:29.429414 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff] Jan 13 23:47:29.429496 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff] Jan 13 23:47:29.429575 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref] Jan 13 23:47:29.429657 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f] Jan 13 23:47:29.429738 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff] Jan 13 23:47:29.429821 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff] Jan 13 23:47:29.429902 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref] Jan 13 23:47:29.429994 kernel: pci 0000:00:04.7: PCI bridge to [bus 20] Jan 13 23:47:29.430082 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff] Jan 13 23:47:29.430164 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff] Jan 13 23:47:29.430246 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref] Jan 13 23:47:29.430329 kernel: pci 0000:00:05.0: PCI bridge to [bus 21] Jan 13 23:47:29.430428 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff] Jan 13 23:47:29.430510 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff] Jan 13 23:47:29.430591 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref] Jan 13 23:47:29.430676 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 13 23:47:29.430749 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 13 23:47:29.430822 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 13 23:47:29.430907 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 13 23:47:29.431024 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 13 23:47:29.431119 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 13 23:47:29.431197 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 13 23:47:29.431279 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 13 23:47:29.431354 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 13 23:47:29.431443 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 13 23:47:29.431518 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 13 23:47:29.431602 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 13 23:47:29.431691 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 13 23:47:29.431773 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 13 23:47:29.431850 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 13 23:47:29.431933 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 13 23:47:29.432028 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 13 23:47:29.432113 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 13 
23:47:29.432189 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 13 23:47:29.432273 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 13 23:47:29.432350 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 13 23:47:29.432437 kernel: pci_bus 0000:0a: resource 1 [mem 0x11200000-0x113fffff] Jan 13 23:47:29.432512 kernel: pci_bus 0000:0a: resource 2 [mem 0x8001200000-0x80013fffff 64bit pref] Jan 13 23:47:29.432594 kernel: pci_bus 0000:0b: resource 1 [mem 0x11400000-0x115fffff] Jan 13 23:47:29.432669 kernel: pci_bus 0000:0b: resource 2 [mem 0x8001400000-0x80015fffff 64bit pref] Jan 13 23:47:29.432757 kernel: pci_bus 0000:0c: resource 1 [mem 0x11600000-0x117fffff] Jan 13 23:47:29.432835 kernel: pci_bus 0000:0c: resource 2 [mem 0x8001600000-0x80017fffff 64bit pref] Jan 13 23:47:29.432918 kernel: pci_bus 0000:0d: resource 1 [mem 0x11800000-0x119fffff] Jan 13 23:47:29.433019 kernel: pci_bus 0000:0d: resource 2 [mem 0x8001800000-0x80019fffff 64bit pref] Jan 13 23:47:29.433107 kernel: pci_bus 0000:0e: resource 1 [mem 0x11a00000-0x11bfffff] Jan 13 23:47:29.433183 kernel: pci_bus 0000:0e: resource 2 [mem 0x8001a00000-0x8001bfffff 64bit pref] Jan 13 23:47:29.433268 kernel: pci_bus 0000:0f: resource 1 [mem 0x11c00000-0x11dfffff] Jan 13 23:47:29.433343 kernel: pci_bus 0000:0f: resource 2 [mem 0x8001c00000-0x8001dfffff 64bit pref] Jan 13 23:47:29.433425 kernel: pci_bus 0000:10: resource 1 [mem 0x11e00000-0x11ffffff] Jan 13 23:47:29.433501 kernel: pci_bus 0000:10: resource 2 [mem 0x8001e00000-0x8001ffffff 64bit pref] Jan 13 23:47:29.433585 kernel: pci_bus 0000:11: resource 1 [mem 0x12000000-0x121fffff] Jan 13 23:47:29.433661 kernel: pci_bus 0000:11: resource 2 [mem 0x8002000000-0x80021fffff 64bit pref] Jan 13 23:47:29.433748 kernel: pci_bus 0000:12: resource 1 [mem 0x12200000-0x123fffff] Jan 13 23:47:29.433828 kernel: pci_bus 0000:12: resource 2 [mem 0x8002200000-0x80023fffff 64bit pref] Jan 13 23:47:29.433909 kernel: pci_bus 0000:13: resource 0 [io 0xf000-0xffff] Jan 13 23:47:29.434002 kernel: pci_bus 0000:13: resource 1 [mem 0x12400000-0x125fffff] Jan 13 23:47:29.434082 kernel: pci_bus 0000:13: resource 2 [mem 0x8002400000-0x80025fffff 64bit pref] Jan 13 23:47:29.434164 kernel: pci_bus 0000:14: resource 0 [io 0xe000-0xefff] Jan 13 23:47:29.434239 kernel: pci_bus 0000:14: resource 1 [mem 0x12600000-0x127fffff] Jan 13 23:47:29.434313 kernel: pci_bus 0000:14: resource 2 [mem 0x8002600000-0x80027fffff 64bit pref] Jan 13 23:47:29.434419 kernel: pci_bus 0000:15: resource 0 [io 0xd000-0xdfff] Jan 13 23:47:29.434498 kernel: pci_bus 0000:15: resource 1 [mem 0x12800000-0x129fffff] Jan 13 23:47:29.434576 kernel: pci_bus 0000:15: resource 2 [mem 0x8002800000-0x80029fffff 64bit pref] Jan 13 23:47:29.434656 kernel: pci_bus 0000:16: resource 0 [io 0xc000-0xcfff] Jan 13 23:47:29.434731 kernel: pci_bus 0000:16: resource 1 [mem 0x12a00000-0x12bfffff] Jan 13 23:47:29.434805 kernel: pci_bus 0000:16: resource 2 [mem 0x8002a00000-0x8002bfffff 64bit pref] Jan 13 23:47:29.434888 kernel: pci_bus 0000:17: resource 0 [io 0xb000-0xbfff] Jan 13 23:47:29.434978 kernel: pci_bus 0000:17: resource 1 [mem 0x12c00000-0x12dfffff] Jan 13 23:47:29.435055 kernel: pci_bus 0000:17: resource 2 [mem 0x8002c00000-0x8002dfffff 64bit pref] Jan 13 23:47:29.435137 kernel: pci_bus 0000:18: resource 0 [io 0xa000-0xafff] Jan 13 23:47:29.435213 kernel: pci_bus 0000:18: resource 1 [mem 0x12e00000-0x12ffffff] Jan 13 23:47:29.435287 kernel: pci_bus 0000:18: resource 2 [mem 
0x8002e00000-0x8002ffffff 64bit pref] Jan 13 23:47:29.435369 kernel: pci_bus 0000:19: resource 0 [io 0x9000-0x9fff] Jan 13 23:47:29.435449 kernel: pci_bus 0000:19: resource 1 [mem 0x13000000-0x131fffff] Jan 13 23:47:29.435524 kernel: pci_bus 0000:19: resource 2 [mem 0x8003000000-0x80031fffff 64bit pref] Jan 13 23:47:29.435605 kernel: pci_bus 0000:1a: resource 0 [io 0x8000-0x8fff] Jan 13 23:47:29.435681 kernel: pci_bus 0000:1a: resource 1 [mem 0x13200000-0x133fffff] Jan 13 23:47:29.435756 kernel: pci_bus 0000:1a: resource 2 [mem 0x8003200000-0x80033fffff 64bit pref] Jan 13 23:47:29.435839 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 13 23:47:29.435916 kernel: pci_bus 0000:1b: resource 1 [mem 0x13400000-0x135fffff] Jan 13 23:47:29.436013 kernel: pci_bus 0000:1b: resource 2 [mem 0x8003400000-0x80035fffff 64bit pref] Jan 13 23:47:29.436101 kernel: pci_bus 0000:1c: resource 0 [io 0x6000-0x6fff] Jan 13 23:47:29.436177 kernel: pci_bus 0000:1c: resource 1 [mem 0x13600000-0x137fffff] Jan 13 23:47:29.436251 kernel: pci_bus 0000:1c: resource 2 [mem 0x8003600000-0x80037fffff 64bit pref] Jan 13 23:47:29.436339 kernel: pci_bus 0000:1d: resource 0 [io 0x5000-0x5fff] Jan 13 23:47:29.436416 kernel: pci_bus 0000:1d: resource 1 [mem 0x13800000-0x139fffff] Jan 13 23:47:29.436490 kernel: pci_bus 0000:1d: resource 2 [mem 0x8003800000-0x80039fffff 64bit pref] Jan 13 23:47:29.436579 kernel: pci_bus 0000:1e: resource 0 [io 0x4000-0x4fff] Jan 13 23:47:29.436655 kernel: pci_bus 0000:1e: resource 1 [mem 0x13a00000-0x13bfffff] Jan 13 23:47:29.436730 kernel: pci_bus 0000:1e: resource 2 [mem 0x8003a00000-0x8003bfffff 64bit pref] Jan 13 23:47:29.436814 kernel: pci_bus 0000:1f: resource 0 [io 0x3000-0x3fff] Jan 13 23:47:29.436890 kernel: pci_bus 0000:1f: resource 1 [mem 0x13c00000-0x13dfffff] Jan 13 23:47:29.436983 kernel: pci_bus 0000:1f: resource 2 [mem 0x8003c00000-0x8003dfffff 64bit pref] Jan 13 23:47:29.437068 kernel: pci_bus 0000:20: resource 0 [io 0x2000-0x2fff] Jan 13 23:47:29.437144 kernel: pci_bus 0000:20: resource 1 [mem 0x13e00000-0x13ffffff] Jan 13 23:47:29.437221 kernel: pci_bus 0000:20: resource 2 [mem 0x8003e00000-0x8003ffffff 64bit pref] Jan 13 23:47:29.437306 kernel: pci_bus 0000:21: resource 0 [io 0x1000-0x1fff] Jan 13 23:47:29.437381 kernel: pci_bus 0000:21: resource 1 [mem 0x14000000-0x141fffff] Jan 13 23:47:29.437455 kernel: pci_bus 0000:21: resource 2 [mem 0x8004000000-0x80041fffff 64bit pref] Jan 13 23:47:29.437466 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 13 23:47:29.437475 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 13 23:47:29.437483 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 13 23:47:29.437493 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 13 23:47:29.437502 kernel: iommu: Default domain type: Translated Jan 13 23:47:29.437510 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 13 23:47:29.437519 kernel: efivars: Registered efivars operations Jan 13 23:47:29.437527 kernel: vgaarb: loaded Jan 13 23:47:29.437536 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 13 23:47:29.437544 kernel: VFS: Disk quotas dquot_6.6.0 Jan 13 23:47:29.437553 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 13 23:47:29.437562 kernel: pnp: PnP ACPI init Jan 13 23:47:29.437657 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 13 23:47:29.437668 kernel: pnp: PnP ACPI: found 1 devices Jan 13 23:47:29.437676 kernel: NET: Registered 
PF_INET protocol family Jan 13 23:47:29.437684 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 13 23:47:29.437693 kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear) Jan 13 23:47:29.437703 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 13 23:47:29.437711 kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 13 23:47:29.437720 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 13 23:47:29.437728 kernel: TCP: Hash tables configured (established 131072 bind 65536) Jan 13 23:47:29.437736 kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear) Jan 13 23:47:29.437744 kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear) Jan 13 23:47:29.437753 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 13 23:47:29.437846 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 13 23:47:29.437858 kernel: PCI: CLS 0 bytes, default 64 Jan 13 23:47:29.437866 kernel: kvm [1]: HYP mode not available Jan 13 23:47:29.437874 kernel: Initialise system trusted keyrings Jan 13 23:47:29.437883 kernel: workingset: timestamp_bits=39 max_order=22 bucket_order=0 Jan 13 23:47:29.437891 kernel: Key type asymmetric registered Jan 13 23:47:29.437899 kernel: Asymmetric key parser 'x509' registered Jan 13 23:47:29.437908 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 13 23:47:29.437916 kernel: io scheduler mq-deadline registered Jan 13 23:47:29.437924 kernel: io scheduler kyber registered Jan 13 23:47:29.437933 kernel: io scheduler bfq registered Jan 13 23:47:29.437942 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 13 23:47:29.438051 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 50 Jan 13 23:47:29.438137 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 50 Jan 13 23:47:29.438222 kernel: pcieport 0000:00:01.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.438308 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 51 Jan 13 23:47:29.438405 kernel: pcieport 0000:00:01.1: AER: enabled with IRQ 51 Jan 13 23:47:29.438487 kernel: pcieport 0000:00:01.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.438571 kernel: pcieport 0000:00:01.2: PME: Signaling with IRQ 52 Jan 13 23:47:29.438655 kernel: pcieport 0000:00:01.2: AER: enabled with IRQ 52 Jan 13 23:47:29.438737 kernel: pcieport 0000:00:01.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.438828 kernel: pcieport 0000:00:01.3: PME: Signaling with IRQ 53 Jan 13 23:47:29.438913 kernel: pcieport 0000:00:01.3: AER: enabled with IRQ 53 Jan 13 23:47:29.439007 kernel: pcieport 0000:00:01.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.439096 kernel: pcieport 0000:00:01.4: PME: Signaling with IRQ 54 Jan 13 23:47:29.439179 kernel: pcieport 0000:00:01.4: AER: enabled with IRQ 54 Jan 13 23:47:29.439261 kernel: pcieport 0000:00:01.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.439359 kernel: pcieport 0000:00:01.5: PME: Signaling with IRQ 55 Jan 13 23:47:29.439444 kernel: pcieport 0000:00:01.5: AER: 
enabled with IRQ 55 Jan 13 23:47:29.439526 kernel: pcieport 0000:00:01.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.439620 kernel: pcieport 0000:00:01.6: PME: Signaling with IRQ 56 Jan 13 23:47:29.439705 kernel: pcieport 0000:00:01.6: AER: enabled with IRQ 56 Jan 13 23:47:29.439785 kernel: pcieport 0000:00:01.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.439874 kernel: pcieport 0000:00:01.7: PME: Signaling with IRQ 57 Jan 13 23:47:29.439964 kernel: pcieport 0000:00:01.7: AER: enabled with IRQ 57 Jan 13 23:47:29.440062 kernel: pcieport 0000:00:01.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.440074 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 13 23:47:29.440157 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 58 Jan 13 23:47:29.440240 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 58 Jan 13 23:47:29.440323 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.440407 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 59 Jan 13 23:47:29.440488 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 59 Jan 13 23:47:29.440568 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.440651 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 60 Jan 13 23:47:29.440732 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 60 Jan 13 23:47:29.440813 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.440896 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 61 Jan 13 23:47:29.440994 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 61 Jan 13 23:47:29.441077 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.441159 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 62 Jan 13 23:47:29.441239 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 62 Jan 13 23:47:29.441318 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.441404 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 63 Jan 13 23:47:29.441484 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 63 Jan 13 23:47:29.441564 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.441645 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 64 Jan 13 23:47:29.441726 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 64 Jan 13 23:47:29.441805 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.441889 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 65 Jan 13 23:47:29.441978 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 65 Jan 13 23:47:29.442069 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- 
LLActRep+ Jan 13 23:47:29.442081 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 13 23:47:29.442163 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 66 Jan 13 23:47:29.442249 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 66 Jan 13 23:47:29.442332 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.442431 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 67 Jan 13 23:47:29.442514 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 67 Jan 13 23:47:29.442595 kernel: pcieport 0000:00:03.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.442677 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 68 Jan 13 23:47:29.442760 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 68 Jan 13 23:47:29.442841 kernel: pcieport 0000:00:03.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.442928 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 69 Jan 13 23:47:29.443023 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 69 Jan 13 23:47:29.443104 kernel: pcieport 0000:00:03.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.443189 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 70 Jan 13 23:47:29.443270 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 70 Jan 13 23:47:29.443353 kernel: pcieport 0000:00:03.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.443441 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 71 Jan 13 23:47:29.443522 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 71 Jan 13 23:47:29.443603 kernel: pcieport 0000:00:03.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.443686 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 72 Jan 13 23:47:29.443768 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 72 Jan 13 23:47:29.443848 kernel: pcieport 0000:00:03.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.443935 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 73 Jan 13 23:47:29.444025 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 73 Jan 13 23:47:29.444107 kernel: pcieport 0000:00:03.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.444118 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 13 23:47:29.444200 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 74 Jan 13 23:47:29.444284 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 74 Jan 13 23:47:29.444368 kernel: pcieport 0000:00:04.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.444457 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 75 Jan 13 23:47:29.444559 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 75 Jan 13 23:47:29.444664 kernel: pcieport 0000:00:04.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.444750 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 76 Jan 13 
23:47:29.444834 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 76 Jan 13 23:47:29.444913 kernel: pcieport 0000:00:04.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.445013 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 77 Jan 13 23:47:29.445097 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 77 Jan 13 23:47:29.445177 kernel: pcieport 0000:00:04.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.445265 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 78 Jan 13 23:47:29.445350 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 78 Jan 13 23:47:29.445433 kernel: pcieport 0000:00:04.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.445520 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 79 Jan 13 23:47:29.445603 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 79 Jan 13 23:47:29.445685 kernel: pcieport 0000:00:04.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.445777 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 80 Jan 13 23:47:29.445862 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 80 Jan 13 23:47:29.445948 kernel: pcieport 0000:00:04.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.446052 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 81 Jan 13 23:47:29.446139 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 81 Jan 13 23:47:29.446219 kernel: pcieport 0000:00:04.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.446306 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 82 Jan 13 23:47:29.446419 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 82 Jan 13 23:47:29.446503 kernel: pcieport 0000:00:05.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 13 23:47:29.446514 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 13 23:47:29.446525 kernel: ACPI: button: Power Button [PWRB] Jan 13 23:47:29.446612 kernel: virtio-pci 0000:01:00.0: enabling device (0000 -> 0002) Jan 13 23:47:29.446702 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 13 23:47:29.446713 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 13 23:47:29.446721 kernel: thunder_xcv, ver 1.0 Jan 13 23:47:29.446730 kernel: thunder_bgx, ver 1.0 Jan 13 23:47:29.446738 kernel: nicpf, ver 1.0 Jan 13 23:47:29.446748 kernel: nicvf, ver 1.0 Jan 13 23:47:29.446846 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 13 23:47:29.446926 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-13T23:47:28 UTC (1768348048) Jan 13 23:47:29.446936 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 13 23:47:29.446945 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 13 23:47:29.446953 kernel: NET: Registered PF_INET6 protocol family Jan 13 23:47:29.446978 kernel: watchdog: NMI not fully supported Jan 13 23:47:29.446987 kernel: watchdog: Hard watchdog permanently disabled Jan 13 23:47:29.446995 kernel: Segment Routing with IPv6 Jan 13 23:47:29.447003 kernel: In-situ OAM (IOAM) with 
IPv6 Jan 13 23:47:29.447011 kernel: NET: Registered PF_PACKET protocol family Jan 13 23:47:29.447020 kernel: Key type dns_resolver registered Jan 13 23:47:29.447028 kernel: registered taskstats version 1 Jan 13 23:47:29.447036 kernel: Loading compiled-in X.509 certificates Jan 13 23:47:29.447045 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 61f104a5e4017e43c6bf0c9744e6a522053d7383' Jan 13 23:47:29.447053 kernel: Demotion targets for Node 0: null Jan 13 23:47:29.447062 kernel: Key type .fscrypt registered Jan 13 23:47:29.447069 kernel: Key type fscrypt-provisioning registered Jan 13 23:47:29.447077 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 13 23:47:29.447086 kernel: ima: Allocated hash algorithm: sha1 Jan 13 23:47:29.447094 kernel: ima: No architecture policies found Jan 13 23:47:29.447103 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 13 23:47:29.447111 kernel: clk: Disabling unused clocks Jan 13 23:47:29.447120 kernel: PM: genpd: Disabling unused power domains Jan 13 23:47:29.447128 kernel: Freeing unused kernel memory: 12480K Jan 13 23:47:29.447136 kernel: Run /init as init process Jan 13 23:47:29.447144 kernel: with arguments: Jan 13 23:47:29.447153 kernel: /init Jan 13 23:47:29.447162 kernel: with environment: Jan 13 23:47:29.447170 kernel: HOME=/ Jan 13 23:47:29.447178 kernel: TERM=linux Jan 13 23:47:29.447186 kernel: ACPI: bus type USB registered Jan 13 23:47:29.447195 kernel: usbcore: registered new interface driver usbfs Jan 13 23:47:29.447203 kernel: usbcore: registered new interface driver hub Jan 13 23:47:29.447211 kernel: usbcore: registered new device driver usb Jan 13 23:47:29.447309 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 13 23:47:29.447400 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 13 23:47:29.447486 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 13 23:47:29.447571 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 13 23:47:29.447657 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 13 23:47:29.447739 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 13 23:47:29.447857 kernel: hub 1-0:1.0: USB hub found Jan 13 23:47:29.447991 kernel: hub 1-0:1.0: 4 ports detected Jan 13 23:47:29.448101 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 13 23:47:29.448202 kernel: hub 2-0:1.0: USB hub found Jan 13 23:47:29.448292 kernel: hub 2-0:1.0: 4 ports detected Jan 13 23:47:29.448385 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 13 23:47:29.448471 kernel: virtio_blk virtio1: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB) Jan 13 23:47:29.448482 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 23:47:29.448491 kernel: GPT:25804799 != 104857599 Jan 13 23:47:29.448499 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 23:47:29.448508 kernel: GPT:25804799 != 104857599 Jan 13 23:47:29.448516 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 23:47:29.448526 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 23:47:29.448534 kernel: SCSI subsystem initialized Jan 13 23:47:29.448543 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 13 23:47:29.448551 kernel: device-mapper: uevent: version 1.0.3 Jan 13 23:47:29.448560 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 13 23:47:29.448569 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 13 23:47:29.448579 kernel: raid6: neonx8 gen() 15795 MB/s Jan 13 23:47:29.448587 kernel: raid6: neonx4 gen() 15736 MB/s Jan 13 23:47:29.448596 kernel: raid6: neonx2 gen() 13291 MB/s Jan 13 23:47:29.448604 kernel: raid6: neonx1 gen() 10485 MB/s Jan 13 23:47:29.448613 kernel: raid6: int64x8 gen() 6837 MB/s Jan 13 23:47:29.448621 kernel: raid6: int64x4 gen() 7360 MB/s Jan 13 23:47:29.448629 kernel: raid6: int64x2 gen() 6123 MB/s Jan 13 23:47:29.448638 kernel: raid6: int64x1 gen() 5063 MB/s Jan 13 23:47:29.448648 kernel: raid6: using algorithm neonx8 gen() 15795 MB/s Jan 13 23:47:29.448656 kernel: raid6: .... xor() 12069 MB/s, rmw enabled Jan 13 23:47:29.448665 kernel: raid6: using neon recovery algorithm Jan 13 23:47:29.448673 kernel: xor: measuring software checksum speed Jan 13 23:47:29.448684 kernel: 8regs : 21618 MB/sec Jan 13 23:47:29.448692 kernel: 32regs : 21704 MB/sec Jan 13 23:47:29.448702 kernel: arm64_neon : 27908 MB/sec Jan 13 23:47:29.448711 kernel: xor: using function: arm64_neon (27908 MB/sec) Jan 13 23:47:29.448719 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 23:47:29.448899 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 13 23:47:29.448917 kernel: BTRFS: device fsid 96ce121f-260d-446f-a0e2-a59fdf56d58c devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (274) Jan 13 23:47:29.448926 kernel: BTRFS info (device dm-0): first mount of filesystem 96ce121f-260d-446f-a0e2-a59fdf56d58c Jan 13 23:47:29.448935 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:47:29.448948 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 23:47:29.448975 kernel: BTRFS info (device dm-0): enabling free space tree Jan 13 23:47:29.448985 kernel: loop: module loaded Jan 13 23:47:29.448993 kernel: loop0: detected capacity change from 0 to 91840 Jan 13 23:47:29.449002 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 23:47:29.449120 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 13 23:47:29.449135 systemd[1]: Successfully made /usr/ read-only. Jan 13 23:47:29.449147 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 13 23:47:29.449156 systemd[1]: Detected virtualization kvm. Jan 13 23:47:29.449165 systemd[1]: Detected architecture arm64. Jan 13 23:47:29.449174 systemd[1]: Running in initrd. Jan 13 23:47:29.449183 systemd[1]: No hostname configured, using default hostname. Jan 13 23:47:29.449193 systemd[1]: Hostname set to . Jan 13 23:47:29.449202 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 13 23:47:29.449211 systemd[1]: Queued start job for default target initrd.target. Jan 13 23:47:29.449219 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 13 23:47:29.449228 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 13 23:47:29.449237 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 23:47:29.449248 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 13 23:47:29.449257 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 23:47:29.449267 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 13 23:47:29.449276 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 13 23:47:29.449285 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 23:47:29.449294 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 23:47:29.449304 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 13 23:47:29.449313 systemd[1]: Reached target paths.target - Path Units. Jan 13 23:47:29.449322 systemd[1]: Reached target slices.target - Slice Units. Jan 13 23:47:29.449331 systemd[1]: Reached target swap.target - Swaps. Jan 13 23:47:29.449339 systemd[1]: Reached target timers.target - Timer Units. Jan 13 23:47:29.449348 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 23:47:29.449357 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 23:47:29.449367 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 13 23:47:29.449376 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 13 23:47:29.449385 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 13 23:47:29.449394 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 23:47:29.449403 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 23:47:29.449412 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 23:47:29.449423 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 23:47:29.449432 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 23:47:29.449441 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 23:47:29.449450 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 23:47:29.449459 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 23:47:29.449468 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 13 23:47:29.449477 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 23:47:29.449487 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 23:47:29.449497 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 23:47:29.449506 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 23:47:29.449515 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 23:47:29.449526 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 23:47:29.449535 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 23:47:29.449544 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jan 13 23:47:29.449575 systemd-journald[417]: Collecting audit messages is enabled. Jan 13 23:47:29.449598 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 23:47:29.449607 kernel: Bridge firewalling registered Jan 13 23:47:29.449616 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 23:47:29.449625 kernel: audit: type=1130 audit(1768348049.382:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.449634 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 23:47:29.449644 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:47:29.449654 kernel: audit: type=1130 audit(1768348049.393:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.449663 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 23:47:29.449672 kernel: audit: type=1130 audit(1768348049.403:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.449682 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 23:47:29.449691 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 23:47:29.449701 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 23:47:29.449710 kernel: audit: type=1130 audit(1768348049.423:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.449719 kernel: audit: type=1334 audit(1768348049.424:6): prog-id=6 op=LOAD Jan 13 23:47:29.449729 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 23:47:29.449738 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 23:47:29.449747 kernel: audit: type=1130 audit(1768348049.431:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.449757 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 23:47:29.449767 kernel: audit: type=1130 audit(1768348049.441:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.449776 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 23:47:29.449785 systemd-journald[417]: Journal started Jan 13 23:47:29.449804 systemd-journald[417]: Runtime Journal (/run/log/journal/750a967036e540d7b52431338ceb3add) is 8M, max 319.5M, 311.5M free. Jan 13 23:47:29.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:47:29.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.403000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.424000 audit: BPF prog-id=6 op=LOAD Jan 13 23:47:29.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.381058 systemd-modules-load[418]: Inserted module 'br_netfilter' Jan 13 23:47:29.458557 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 23:47:29.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.462988 kernel: audit: type=1130 audit(1768348049.458:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.463160 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 23:47:29.478331 dracut-cmdline[446]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=a2e92265a189403c21ae2a2ae9e6d4fed0782e0e430fbcb369a7bb0db156274f Jan 13 23:47:29.479643 systemd-resolved[434]: Positive Trust Anchors: Jan 13 23:47:29.479653 systemd-resolved[434]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 23:47:29.479657 systemd-resolved[434]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 13 23:47:29.479687 systemd-resolved[434]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 23:47:29.489270 systemd-tmpfiles[458]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Jan 13 23:47:29.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.493661 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 23:47:29.499304 kernel: audit: type=1130 audit(1768348049.493:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.508932 systemd-resolved[434]: Defaulting to hostname 'linux'. Jan 13 23:47:29.509763 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 23:47:29.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.510780 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 23:47:29.565983 kernel: Loading iSCSI transport class v2.0-870. Jan 13 23:47:29.575993 kernel: iscsi: registered transport (tcp) Jan 13 23:47:29.590278 kernel: iscsi: registered transport (qla4xxx) Jan 13 23:47:29.590300 kernel: QLogic iSCSI HBA Driver Jan 13 23:47:29.612080 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 23:47:29.631477 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 23:47:29.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.633500 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 23:47:29.678309 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 23:47:29.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.680570 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 23:47:29.681988 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 23:47:29.721282 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 23:47:29.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.722000 audit: BPF prog-id=7 op=LOAD Jan 13 23:47:29.722000 audit: BPF prog-id=8 op=LOAD Jan 13 23:47:29.724239 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 23:47:29.757572 systemd-udevd[694]: Using default interface naming scheme 'v257'. Jan 13 23:47:29.765596 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 23:47:29.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.768583 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Jan 13 23:47:29.790232 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 23:47:29.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.791000 audit: BPF prog-id=9 op=LOAD Jan 13 23:47:29.792465 dracut-pre-trigger[772]: rd.md=0: removing MD RAID activation Jan 13 23:47:29.792642 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 23:47:29.817665 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 23:47:29.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.819846 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 23:47:29.838144 systemd-networkd[812]: lo: Link UP Jan 13 23:47:29.838152 systemd-networkd[812]: lo: Gained carrier Jan 13 23:47:29.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.838748 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 23:47:29.839784 systemd[1]: Reached target network.target - Network. Jan 13 23:47:29.903208 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 23:47:29.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:29.905844 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 23:47:29.980290 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 13 23:47:29.980363 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 13 23:47:29.982974 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 13 23:47:29.991089 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 23:47:29.999842 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 23:47:30.007067 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 23:47:30.014723 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 23:47:30.016571 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 23:47:30.032641 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 23:47:30.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:30.032719 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:47:30.033895 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 13 23:47:30.040819 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 13 23:47:30.041054 kernel: usbcore: registered new interface driver usbhid Jan 13 23:47:30.041067 kernel: usbhid: USB HID core driver Jan 13 23:47:30.041084 disk-uuid[881]: Primary Header is updated. Jan 13 23:47:30.041084 disk-uuid[881]: Secondary Entries is updated. Jan 13 23:47:30.041084 disk-uuid[881]: Secondary Header is updated. Jan 13 23:47:30.034394 systemd-networkd[812]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:47:30.034398 systemd-networkd[812]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 23:47:30.034916 systemd-networkd[812]: eth0: Link UP Jan 13 23:47:30.035317 systemd-networkd[812]: eth0: Gained carrier Jan 13 23:47:30.035328 systemd-networkd[812]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:47:30.036635 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 23:47:30.067914 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:47:30.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:30.110028 systemd-networkd[812]: eth0: DHCPv4 address 10.0.15.225/25, gateway 10.0.15.129 acquired from 10.0.15.129 Jan 13 23:47:30.124025 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 23:47:30.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:30.125083 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 23:47:30.126624 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 23:47:30.128312 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 23:47:30.131049 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 23:47:30.167608 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 23:47:30.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:31.072201 disk-uuid[883]: Warning: The kernel is still using the old partition table. Jan 13 23:47:31.072201 disk-uuid[883]: The new table will be used at the next reboot or after you Jan 13 23:47:31.072201 disk-uuid[883]: run partprobe(8) or kpartx(8) Jan 13 23:47:31.072201 disk-uuid[883]: The operation has completed successfully. Jan 13 23:47:31.077558 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 23:47:31.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:31.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:47:31.077666 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 23:47:31.079652 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 23:47:31.122004 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (914) Jan 13 23:47:31.124125 kernel: BTRFS info (device vda6): first mount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:47:31.124184 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:47:31.129568 kernel: BTRFS info (device vda6): turning on async discard Jan 13 23:47:31.129654 kernel: BTRFS info (device vda6): enabling free space tree Jan 13 23:47:31.134984 kernel: BTRFS info (device vda6): last unmount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:47:31.135721 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 23:47:31.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:31.137821 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 23:47:31.276268 ignition[933]: Ignition 2.24.0 Jan 13 23:47:31.276286 ignition[933]: Stage: fetch-offline Jan 13 23:47:31.276322 ignition[933]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:47:31.278203 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 23:47:31.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:31.276331 ignition[933]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 23:47:31.276488 ignition[933]: parsed url from cmdline: "" Jan 13 23:47:31.281241 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 13 23:47:31.276491 ignition[933]: no config URL provided Jan 13 23:47:31.276496 ignition[933]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 23:47:31.276504 ignition[933]: no config at "/usr/lib/ignition/user.ign" Jan 13 23:47:31.276509 ignition[933]: failed to fetch config: resource requires networking Jan 13 23:47:31.276654 ignition[933]: Ignition finished successfully Jan 13 23:47:31.313589 ignition[944]: Ignition 2.24.0 Jan 13 23:47:31.313612 ignition[944]: Stage: fetch Jan 13 23:47:31.313762 ignition[944]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:47:31.313770 ignition[944]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 23:47:31.313853 ignition[944]: parsed url from cmdline: "" Jan 13 23:47:31.313856 ignition[944]: no config URL provided Jan 13 23:47:31.313861 ignition[944]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 23:47:31.313866 ignition[944]: no config at "/usr/lib/ignition/user.ign" Jan 13 23:47:31.314273 ignition[944]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 13 23:47:31.314291 ignition[944]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 13 23:47:31.314535 ignition[944]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 13 23:47:32.045250 systemd-networkd[812]: eth0: Gained IPv6LL Jan 13 23:47:32.314792 ignition[944]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... 
Jan 13 23:47:32.314903 ignition[944]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 13 23:47:32.804052 ignition[944]: GET result: OK Jan 13 23:47:32.804169 ignition[944]: parsing config with SHA512: 045e9915f907972a8e8acca5031a8e55bb0ad4ed7b466ff37b3eba1f06257504cf6194a86a0588aa66bdee11c00adeb826658de44870f7415f29ea10bfe33e1b Jan 13 23:47:32.809571 unknown[944]: fetched base config from "system" Jan 13 23:47:32.809581 unknown[944]: fetched base config from "system" Jan 13 23:47:32.809901 ignition[944]: fetch: fetch complete Jan 13 23:47:32.809587 unknown[944]: fetched user config from "openstack" Jan 13 23:47:32.815875 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 13 23:47:32.815902 kernel: audit: type=1130 audit(1768348052.812:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:32.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:32.809906 ignition[944]: fetch: fetch passed Jan 13 23:47:32.811805 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 13 23:47:32.809946 ignition[944]: Ignition finished successfully Jan 13 23:47:32.814159 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 13 23:47:32.856263 ignition[952]: Ignition 2.24.0 Jan 13 23:47:32.856283 ignition[952]: Stage: kargs Jan 13 23:47:32.856430 ignition[952]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:47:32.856439 ignition[952]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 23:47:32.857169 ignition[952]: kargs: kargs passed Jan 13 23:47:32.857214 ignition[952]: Ignition finished successfully Jan 13 23:47:32.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:32.859450 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 23:47:32.863950 kernel: audit: type=1130 audit(1768348052.860:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:32.861186 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 23:47:32.896012 ignition[959]: Ignition 2.24.0 Jan 13 23:47:32.896028 ignition[959]: Stage: disks Jan 13 23:47:32.896176 ignition[959]: no configs at "/usr/lib/ignition/base.d" Jan 13 23:47:32.896184 ignition[959]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 23:47:32.898897 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 23:47:32.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:32.903011 kernel: audit: type=1130 audit(1768348052.900:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:32.897025 ignition[959]: disks: disks passed Jan 13 23:47:32.900526 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Jan 13 23:47:32.897074 ignition[959]: Ignition finished successfully Jan 13 23:47:32.904143 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 23:47:32.905801 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 23:47:32.907352 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 23:47:32.908583 systemd[1]: Reached target basic.target - Basic System. Jan 13 23:47:32.911196 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 23:47:32.965003 systemd-fsck[968]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 13 23:47:32.967212 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 23:47:32.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:32.970622 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 23:47:32.972355 kernel: audit: type=1130 audit(1768348052.967:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:33.082985 kernel: EXT4-fs (vda9): mounted filesystem b1eb7e1a-01a1-41b0-9b3c-5a37b4853d4d r/w with ordered data mode. Quota mode: none. Jan 13 23:47:33.083513 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 23:47:33.084607 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 23:47:33.088082 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 23:47:33.089675 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 23:47:33.090537 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 23:47:33.091148 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 13 23:47:33.093839 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 23:47:33.093870 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 23:47:33.104003 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 23:47:33.107013 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 23:47:33.113976 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (976) Jan 13 23:47:33.117029 kernel: BTRFS info (device vda6): first mount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:47:33.117071 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:47:33.125378 kernel: BTRFS info (device vda6): turning on async discard Jan 13 23:47:33.125432 kernel: BTRFS info (device vda6): enabling free space tree Jan 13 23:47:33.126509 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 23:47:33.159162 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 13 23:47:33.276405 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 23:47:33.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:47:33.280995 kernel: audit: type=1130 audit(1768348053.277:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:33.281126 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 23:47:33.282514 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 23:47:33.304798 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 23:47:33.308968 kernel: BTRFS info (device vda6): last unmount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:47:33.327373 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 23:47:33.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:33.330791 ignition[1077]: INFO : Ignition 2.24.0 Jan 13 23:47:33.330791 ignition[1077]: INFO : Stage: mount Jan 13 23:47:33.330791 ignition[1077]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 23:47:33.330791 ignition[1077]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 23:47:33.330791 ignition[1077]: INFO : mount: mount passed Jan 13 23:47:33.337847 kernel: audit: type=1130 audit(1768348053.327:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:33.337876 kernel: audit: type=1130 audit(1768348053.332:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:33.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:33.332410 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 23:47:33.338697 ignition[1077]: INFO : Ignition finished successfully Jan 13 23:47:34.194021 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 13 23:47:36.203012 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 13 23:47:40.208036 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 13 23:47:40.212655 coreos-metadata[978]: Jan 13 23:47:40.212 WARN failed to locate config-drive, using the metadata service API instead Jan 13 23:47:40.231287 coreos-metadata[978]: Jan 13 23:47:40.231 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 13 23:47:40.372451 coreos-metadata[978]: Jan 13 23:47:40.372 INFO Fetch successful Jan 13 23:47:40.373438 coreos-metadata[978]: Jan 13 23:47:40.373 INFO wrote hostname ci-4578-0-0-p-89582bef9b to /sysroot/etc/hostname Jan 13 23:47:40.375271 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 13 23:47:40.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:40.376429 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. 
Jan 13 23:47:40.383646 kernel: audit: type=1130 audit(1768348060.376:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:40.383671 kernel: audit: type=1131 audit(1768348060.376:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:40.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:40.378608 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 23:47:40.408148 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 23:47:40.450976 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1094) Jan 13 23:47:40.454978 kernel: BTRFS info (device vda6): first mount of filesystem 43f26778-0bac-4551-a250-d0042cfe708e Jan 13 23:47:40.455010 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 23:47:40.459205 kernel: BTRFS info (device vda6): turning on async discard Jan 13 23:47:40.459269 kernel: BTRFS info (device vda6): enabling free space tree Jan 13 23:47:40.460714 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 23:47:40.490942 ignition[1112]: INFO : Ignition 2.24.0 Jan 13 23:47:40.490942 ignition[1112]: INFO : Stage: files Jan 13 23:47:40.492583 ignition[1112]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 23:47:40.492583 ignition[1112]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 23:47:40.492583 ignition[1112]: DEBUG : files: compiled without relabeling support, skipping Jan 13 23:47:40.495843 ignition[1112]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 23:47:40.495843 ignition[1112]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 23:47:40.498817 ignition[1112]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 23:47:40.500269 ignition[1112]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 23:47:40.500269 ignition[1112]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 23:47:40.499822 unknown[1112]: wrote ssh authorized keys file for user: core Jan 13 23:47:40.503877 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 13 23:47:40.503877 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 13 23:47:40.559817 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 13 23:47:40.666548 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 13 23:47:40.666548 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 13 23:47:40.669615 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 23:47:40.669615 ignition[1112]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 23:47:40.669615 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 23:47:40.669615 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 23:47:40.669615 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 23:47:40.669615 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 23:47:40.669615 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 23:47:40.679166 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 23:47:40.679166 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 23:47:40.679166 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 13 23:47:40.679166 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 13 23:47:40.679166 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 13 23:47:40.679166 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 13 23:47:40.939240 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 13 23:47:41.709276 ignition[1112]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 13 23:47:41.709276 ignition[1112]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 13 23:47:41.712565 ignition[1112]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 23:47:41.714217 ignition[1112]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 13 23:47:41.714217 ignition[1112]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 13 23:47:41.714217 ignition[1112]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 13 23:47:41.714217 ignition[1112]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 13 23:47:41.714217 ignition[1112]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 23:47:41.726260 kernel: audit: type=1130 audit(1768348061.717:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:47:41.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.726353 ignition[1112]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 23:47:41.726353 ignition[1112]: INFO : files: files passed Jan 13 23:47:41.726353 ignition[1112]: INFO : Ignition finished successfully Jan 13 23:47:41.717387 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 23:47:41.719721 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 23:47:41.734626 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 23:47:41.737667 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 23:47:41.737792 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 13 23:47:41.744656 kernel: audit: type=1130 audit(1768348061.739:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.744682 kernel: audit: type=1131 audit(1768348061.739:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.771522 initrd-setup-root-after-ignition[1146]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 23:47:41.771522 initrd-setup-root-after-ignition[1146]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 23:47:41.774060 initrd-setup-root-after-ignition[1150]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 23:47:41.774098 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 23:47:41.779241 kernel: audit: type=1130 audit(1768348061.775:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.776273 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 23:47:41.780869 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 23:47:41.837133 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 23:47:41.837257 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 13 23:47:41.843664 kernel: audit: type=1130 audit(1768348061.838:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.843690 kernel: audit: type=1131 audit(1768348061.838:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.839148 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 23:47:41.844449 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 23:47:41.846057 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 23:47:41.847026 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 23:47:41.873920 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 23:47:41.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.876117 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 23:47:41.879426 kernel: audit: type=1130 audit(1768348061.873:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.898342 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 13 23:47:41.898557 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 23:47:41.900350 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 23:47:41.901993 systemd[1]: Stopped target timers.target - Timer Units. Jan 13 23:47:41.903952 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 23:47:41.908032 kernel: audit: type=1131 audit(1768348061.905:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.904096 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 23:47:41.908342 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 23:47:41.909781 systemd[1]: Stopped target basic.target - Basic System. Jan 13 23:47:41.911106 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 23:47:41.912434 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 23:47:41.913889 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Jan 13 23:47:41.915596 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 13 23:47:41.917061 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 23:47:41.918530 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 23:47:41.920021 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 23:47:41.921744 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 23:47:41.923355 systemd[1]: Stopped target swap.target - Swaps. Jan 13 23:47:41.924588 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 23:47:41.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.924716 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 23:47:41.926581 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 23:47:41.928100 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 23:47:41.929622 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 23:47:41.929701 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 23:47:41.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.931316 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 23:47:41.931437 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 23:47:41.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.933773 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 23:47:41.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.933900 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 23:47:41.935492 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 23:47:41.935597 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 23:47:41.937679 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 23:47:41.939925 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 23:47:41.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.941248 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 23:47:41.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.941361 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 13 23:47:41.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.942998 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 23:47:41.943107 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 23:47:41.944489 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 23:47:41.944592 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 23:47:41.949707 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 13 23:47:41.954126 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 23:47:41.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.965038 ignition[1170]: INFO : Ignition 2.24.0 Jan 13 23:47:41.965038 ignition[1170]: INFO : Stage: umount Jan 13 23:47:41.967702 ignition[1170]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 23:47:41.967702 ignition[1170]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 13 23:47:41.967702 ignition[1170]: INFO : umount: umount passed Jan 13 23:47:41.967702 ignition[1170]: INFO : Ignition finished successfully Jan 13 23:47:41.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.967385 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 23:47:41.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.967511 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 23:47:41.969815 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 23:47:41.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.970264 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 23:47:41.970326 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 23:47:41.972088 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 23:47:41.972131 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 23:47:41.973519 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 13 23:47:41.973561 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Jan 13 23:47:41.974800 systemd[1]: Stopped target network.target - Network. Jan 13 23:47:41.975993 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 13 23:47:41.976038 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 23:47:41.977979 systemd[1]: Stopped target paths.target - Path Units. Jan 13 23:47:41.979208 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 23:47:41.985038 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 23:47:41.986603 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 23:47:41.988651 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 23:47:41.989902 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 23:47:41.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.989943 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 23:47:41.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:41.991694 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 23:47:41.991726 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 23:47:41.993432 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 13 23:47:41.993452 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 13 23:47:41.994840 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 23:47:41.994895 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 23:47:41.996631 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 23:47:41.996673 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 23:47:41.998071 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 23:47:42.000010 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 23:47:42.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.009785 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 23:47:42.009906 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 23:47:42.012000 audit: BPF prog-id=6 op=UNLOAD Jan 13 23:47:42.012897 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 23:47:42.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.013064 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 23:47:42.016725 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 13 23:47:42.017000 audit: BPF prog-id=9 op=UNLOAD Jan 13 23:47:42.018223 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 23:47:42.018265 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 23:47:42.020601 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jan 13 23:47:42.021906 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 23:47:42.022025 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 23:47:42.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.024320 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 23:47:42.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.024376 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 23:47:42.025805 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 23:47:42.025845 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 23:47:42.028137 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 23:47:42.042384 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 23:47:42.042533 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 23:47:42.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.044443 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 23:47:42.044479 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 23:47:42.046155 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 23:47:42.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.046182 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 23:47:42.047741 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 23:47:42.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.047793 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 23:47:42.050985 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 23:47:42.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.051032 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 23:47:42.053718 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 23:47:42.053771 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 23:47:42.056798 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Jan 13 23:47:42.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.058200 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 13 23:47:42.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.058252 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 23:47:42.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.059942 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 23:47:42.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.059998 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 23:47:42.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.062222 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 23:47:42.062268 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 23:47:42.063811 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 23:47:42.063850 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 23:47:42.065817 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 23:47:42.065858 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:47:42.077831 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 23:47:42.077948 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 13 23:47:42.080150 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 23:47:42.080254 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 23:47:42.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.083272 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 23:47:42.083398 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 23:47:42.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:47:42.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.085061 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 23:47:42.086532 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 13 23:47:42.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.087810 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 23:47:42.089549 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 23:47:42.107753 systemd[1]: Switching root. Jan 13 23:47:42.136856 systemd-journald[417]: Journal stopped Jan 13 23:47:42.979002 systemd-journald[417]: Received SIGTERM from PID 1 (systemd). Jan 13 23:47:42.979091 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 23:47:42.979109 kernel: SELinux: policy capability open_perms=1 Jan 13 23:47:42.979127 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 23:47:42.979140 kernel: SELinux: policy capability always_check_network=0 Jan 13 23:47:42.979153 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 23:47:42.979163 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 23:47:42.979172 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 23:47:42.979182 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 23:47:42.979195 kernel: SELinux: policy capability userspace_initial_context=0 Jan 13 23:47:42.979205 systemd[1]: Successfully loaded SELinux policy in 67.198ms. Jan 13 23:47:42.979228 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.719ms. Jan 13 23:47:42.979242 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 13 23:47:42.979253 systemd[1]: Detected virtualization kvm. Jan 13 23:47:42.979264 systemd[1]: Detected architecture arm64. Jan 13 23:47:42.979275 systemd[1]: Detected first boot. Jan 13 23:47:42.979286 systemd[1]: Hostname set to . Jan 13 23:47:42.979296 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 13 23:47:42.979308 zram_generator::config[1214]: No configuration found. Jan 13 23:47:42.979325 kernel: NET: Registered PF_VSOCK protocol family Jan 13 23:47:42.979338 systemd[1]: Populated /etc with preset unit settings. Jan 13 23:47:42.979349 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 23:47:42.979359 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 23:47:42.979370 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 23:47:42.979384 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 23:47:42.979395 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 23:47:42.979406 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 23:47:42.979416 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Jan 13 23:47:42.979429 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 23:47:42.979440 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 23:47:42.979451 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 23:47:42.979463 systemd[1]: Created slice user.slice - User and Session Slice. Jan 13 23:47:42.979473 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 23:47:42.979487 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 23:47:42.979499 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 23:47:42.979510 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 23:47:42.979521 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 23:47:42.979532 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 23:47:42.979544 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 23:47:42.979554 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 23:47:42.979566 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 23:47:42.979576 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 23:47:42.979587 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 23:47:42.979599 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 23:47:42.979610 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 23:47:42.979621 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 23:47:42.979632 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 23:47:42.979646 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 13 23:47:42.979657 systemd[1]: Reached target slices.target - Slice Units. Jan 13 23:47:42.979668 systemd[1]: Reached target swap.target - Swaps. Jan 13 23:47:42.979691 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 23:47:42.979702 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 23:47:42.979713 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 13 23:47:42.979724 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 13 23:47:42.979737 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 13 23:47:42.979748 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 13 23:47:42.979760 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 13 23:47:42.979772 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 13 23:47:42.979783 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 23:47:42.979794 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 23:47:42.979805 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 23:47:42.979817 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 13 23:47:42.979828 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 23:47:42.979839 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 23:47:42.979851 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 23:47:42.979862 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 23:47:42.979873 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 23:47:42.979883 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 23:47:42.979894 systemd[1]: Reached target machines.target - Containers. Jan 13 23:47:42.979905 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 23:47:42.979915 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 23:47:42.979928 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 23:47:42.979939 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 23:47:42.979950 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 23:47:42.979980 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 23:47:42.979994 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 23:47:42.980005 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 23:47:42.980016 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 23:47:42.980027 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 23:47:42.980037 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 23:47:42.980048 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 23:47:42.980059 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 23:47:42.980070 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 23:47:42.980082 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 13 23:47:42.980094 kernel: fuse: init (API version 7.41) Jan 13 23:47:42.980105 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 23:47:42.980116 kernel: ACPI: bus type drm_connector registered Jan 13 23:47:42.980126 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 23:47:42.980137 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 23:47:42.980149 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 23:47:42.980160 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 13 23:47:42.980171 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 23:47:42.980182 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 23:47:42.980219 systemd-journald[1283]: Collecting audit messages is enabled. Jan 13 23:47:42.980243 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 13 23:47:42.980255 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 23:47:42.980266 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 13 23:47:42.980277 systemd-journald[1283]: Journal started Jan 13 23:47:42.980300 systemd-journald[1283]: Runtime Journal (/run/log/journal/750a967036e540d7b52431338ceb3add) is 8M, max 319.5M, 311.5M free. Jan 13 23:47:42.849000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 13 23:47:42.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.940000 audit: BPF prog-id=14 op=UNLOAD Jan 13 23:47:42.940000 audit: BPF prog-id=13 op=UNLOAD Jan 13 23:47:42.940000 audit: BPF prog-id=15 op=LOAD Jan 13 23:47:42.941000 audit: BPF prog-id=16 op=LOAD Jan 13 23:47:42.941000 audit: BPF prog-id=17 op=LOAD Jan 13 23:47:42.974000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 13 23:47:42.974000 audit[1283]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffe4aff5e0 a2=4000 a3=0 items=0 ppid=1 pid=1283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:42.974000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 13 23:47:42.764661 systemd[1]: Queued start job for default target multi-user.target. Jan 13 23:47:42.787303 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 23:47:42.787718 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 23:47:42.983988 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 23:47:42.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.984709 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 23:47:42.985808 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 23:47:42.989137 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 23:47:42.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.990642 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 23:47:42.990800 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 23:47:42.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:47:42.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.992275 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 23:47:42.992435 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 23:47:42.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.993725 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 23:47:42.993881 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 23:47:42.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.995356 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 23:47:42.995508 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 23:47:42.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.996943 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 23:47:42.998044 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 13 23:47:42.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:42.999861 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 23:47:42.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.001187 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 23:47:43.001418 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 13 23:47:43.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.002772 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 23:47:43.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.005264 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 23:47:43.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.007372 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 23:47:43.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.008783 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 13 23:47:43.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.021046 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 23:47:43.022838 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 13 23:47:43.025087 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 23:47:43.026897 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 23:47:43.027906 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 23:47:43.027942 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 23:47:43.029550 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 13 23:47:43.030792 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 23:47:43.030899 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 13 23:47:43.035687 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 23:47:43.039047 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 23:47:43.039936 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 23:47:43.040940 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 13 23:47:43.041826 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 23:47:43.043108 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 23:47:43.046773 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 23:47:43.049253 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 23:47:43.051599 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 23:47:43.053269 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 23:47:43.055136 systemd-journald[1283]: Time spent on flushing to /var/log/journal/750a967036e540d7b52431338ceb3add is 36.843ms for 1818 entries. Jan 13 23:47:43.055136 systemd-journald[1283]: System Journal (/var/log/journal/750a967036e540d7b52431338ceb3add) is 8M, max 588.1M, 580.1M free. Jan 13 23:47:43.097788 systemd-journald[1283]: Received client request to flush runtime journal. Jan 13 23:47:43.097837 kernel: loop1: detected capacity change from 0 to 45344 Jan 13 23:47:43.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.061863 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 23:47:43.066457 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 13 23:47:43.068901 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 13 23:47:43.078745 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 23:47:43.089991 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 23:47:43.092765 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Jan 13 23:47:43.092778 systemd-tmpfiles[1335]: ACLs are not supported, ignoring. Jan 13 23:47:43.099331 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 23:47:43.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.101413 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 23:47:43.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.107661 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Jan 13 23:47:43.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.123202 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 13 23:47:43.132983 kernel: loop2: detected capacity change from 0 to 207008 Jan 13 23:47:43.147858 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 23:47:43.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.149000 audit: BPF prog-id=18 op=LOAD Jan 13 23:47:43.149000 audit: BPF prog-id=19 op=LOAD Jan 13 23:47:43.149000 audit: BPF prog-id=20 op=LOAD Jan 13 23:47:43.152000 audit: BPF prog-id=21 op=LOAD Jan 13 23:47:43.151435 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 13 23:47:43.155177 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 23:47:43.156934 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 23:47:43.159000 audit: BPF prog-id=22 op=LOAD Jan 13 23:47:43.159000 audit: BPF prog-id=23 op=LOAD Jan 13 23:47:43.159000 audit: BPF prog-id=24 op=LOAD Jan 13 23:47:43.162126 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 13 23:47:43.164000 audit: BPF prog-id=25 op=LOAD Jan 13 23:47:43.164000 audit: BPF prog-id=26 op=LOAD Jan 13 23:47:43.164000 audit: BPF prog-id=27 op=LOAD Jan 13 23:47:43.167204 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 23:47:43.174070 kernel: loop3: detected capacity change from 0 to 1648 Jan 13 23:47:43.179601 systemd-tmpfiles[1359]: ACLs are not supported, ignoring. Jan 13 23:47:43.179626 systemd-tmpfiles[1359]: ACLs are not supported, ignoring. Jan 13 23:47:43.195142 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 23:47:43.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.197887 systemd-nsresourced[1360]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 13 23:47:43.198875 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 13 23:47:43.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.201979 kernel: loop4: detected capacity change from 0 to 100192 Jan 13 23:47:43.222295 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 23:47:43.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:47:43.237998 kernel: loop5: detected capacity change from 0 to 45344 Jan 13 23:47:43.250018 kernel: loop6: detected capacity change from 0 to 207008 Jan 13 23:47:43.263677 systemd-oomd[1356]: No swap; memory pressure usage will be degraded Jan 13 23:47:43.264166 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 13 23:47:43.268061 kernel: loop7: detected capacity change from 0 to 1648 Jan 13 23:47:43.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.271520 systemd-resolved[1358]: Positive Trust Anchors: Jan 13 23:47:43.271540 systemd-resolved[1358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 23:47:43.271543 systemd-resolved[1358]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 13 23:47:43.271574 systemd-resolved[1358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 23:47:43.273987 kernel: loop1: detected capacity change from 0 to 100192 Jan 13 23:47:43.279304 systemd-resolved[1358]: Using system hostname 'ci-4578-0-0-p-89582bef9b'. Jan 13 23:47:43.280624 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 23:47:43.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.281903 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 23:47:43.284531 (sd-merge)[1381]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-stackit.raw'. Jan 13 23:47:43.287813 (sd-merge)[1381]: Merged extensions into '/usr'. Jan 13 23:47:43.291701 systemd[1]: Reload requested from client PID 1334 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 23:47:43.291723 systemd[1]: Reloading... Jan 13 23:47:43.348990 zram_generator::config[1412]: No configuration found. Jan 13 23:47:43.492256 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 23:47:43.492670 systemd[1]: Reloading finished in 200 ms. Jan 13 23:47:43.535204 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 23:47:43.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.537096 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 23:47:43.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:47:43.551409 systemd[1]: Starting ensure-sysext.service... Jan 13 23:47:43.552000 audit: BPF prog-id=8 op=UNLOAD Jan 13 23:47:43.552000 audit: BPF prog-id=7 op=UNLOAD Jan 13 23:47:43.552972 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 23:47:43.553000 audit: BPF prog-id=28 op=LOAD Jan 13 23:47:43.553000 audit: BPF prog-id=29 op=LOAD Jan 13 23:47:43.555295 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 23:47:43.556000 audit: BPF prog-id=30 op=LOAD Jan 13 23:47:43.556000 audit: BPF prog-id=18 op=UNLOAD Jan 13 23:47:43.556000 audit: BPF prog-id=31 op=LOAD Jan 13 23:47:43.556000 audit: BPF prog-id=32 op=LOAD Jan 13 23:47:43.556000 audit: BPF prog-id=19 op=UNLOAD Jan 13 23:47:43.556000 audit: BPF prog-id=20 op=UNLOAD Jan 13 23:47:43.556000 audit: BPF prog-id=33 op=LOAD Jan 13 23:47:43.556000 audit: BPF prog-id=21 op=UNLOAD Jan 13 23:47:43.557000 audit: BPF prog-id=34 op=LOAD Jan 13 23:47:43.557000 audit: BPF prog-id=25 op=UNLOAD Jan 13 23:47:43.557000 audit: BPF prog-id=35 op=LOAD Jan 13 23:47:43.557000 audit: BPF prog-id=36 op=LOAD Jan 13 23:47:43.557000 audit: BPF prog-id=26 op=UNLOAD Jan 13 23:47:43.557000 audit: BPF prog-id=27 op=UNLOAD Jan 13 23:47:43.557000 audit: BPF prog-id=37 op=LOAD Jan 13 23:47:43.557000 audit: BPF prog-id=22 op=UNLOAD Jan 13 23:47:43.558000 audit: BPF prog-id=38 op=LOAD Jan 13 23:47:43.558000 audit: BPF prog-id=39 op=LOAD Jan 13 23:47:43.558000 audit: BPF prog-id=23 op=UNLOAD Jan 13 23:47:43.558000 audit: BPF prog-id=24 op=UNLOAD Jan 13 23:47:43.559000 audit: BPF prog-id=40 op=LOAD Jan 13 23:47:43.559000 audit: BPF prog-id=15 op=UNLOAD Jan 13 23:47:43.559000 audit: BPF prog-id=41 op=LOAD Jan 13 23:47:43.559000 audit: BPF prog-id=42 op=LOAD Jan 13 23:47:43.559000 audit: BPF prog-id=16 op=UNLOAD Jan 13 23:47:43.559000 audit: BPF prog-id=17 op=UNLOAD Jan 13 23:47:43.564225 systemd[1]: Reload requested from client PID 1448 ('systemctl') (unit ensure-sysext.service)... Jan 13 23:47:43.564240 systemd[1]: Reloading... Jan 13 23:47:43.569361 systemd-tmpfiles[1449]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 13 23:47:43.569388 systemd-tmpfiles[1449]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 13 23:47:43.569593 systemd-tmpfiles[1449]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 23:47:43.570527 systemd-tmpfiles[1449]: ACLs are not supported, ignoring. Jan 13 23:47:43.570579 systemd-tmpfiles[1449]: ACLs are not supported, ignoring. Jan 13 23:47:43.578337 systemd-tmpfiles[1449]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 23:47:43.578352 systemd-tmpfiles[1449]: Skipping /boot Jan 13 23:47:43.581143 systemd-udevd[1450]: Using default interface naming scheme 'v257'. Jan 13 23:47:43.585949 systemd-tmpfiles[1449]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 23:47:43.586002 systemd-tmpfiles[1449]: Skipping /boot Jan 13 23:47:43.623999 zram_generator::config[1482]: No configuration found. 
Jan 13 23:47:43.731994 kernel: mousedev: PS/2 mouse device common for all mice Jan 13 23:47:43.805056 kernel: [drm] pci: virtio-gpu-pci detected at 0000:06:00.0 Jan 13 23:47:43.805156 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 13 23:47:43.805174 kernel: [drm] features: -context_init Jan 13 23:47:43.809978 kernel: [drm] number of scanouts: 1 Jan 13 23:47:43.810059 kernel: [drm] number of cap sets: 0 Jan 13 23:47:43.814986 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:06:00.0 on minor 0 Jan 13 23:47:43.818700 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 23:47:43.818976 kernel: Console: switching to colour frame buffer device 160x50 Jan 13 23:47:43.841984 kernel: virtio-pci 0000:06:00.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 13 23:47:43.843596 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 13 23:47:43.843799 systemd[1]: Reloading finished in 279 ms. Jan 13 23:47:43.854041 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 23:47:43.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.856000 audit: BPF prog-id=43 op=LOAD Jan 13 23:47:43.856000 audit: BPF prog-id=30 op=UNLOAD Jan 13 23:47:43.856000 audit: BPF prog-id=44 op=LOAD Jan 13 23:47:43.856000 audit: BPF prog-id=45 op=LOAD Jan 13 23:47:43.856000 audit: BPF prog-id=31 op=UNLOAD Jan 13 23:47:43.856000 audit: BPF prog-id=32 op=UNLOAD Jan 13 23:47:43.857000 audit: BPF prog-id=46 op=LOAD Jan 13 23:47:43.857000 audit: BPF prog-id=37 op=UNLOAD Jan 13 23:47:43.857000 audit: BPF prog-id=47 op=LOAD Jan 13 23:47:43.857000 audit: BPF prog-id=48 op=LOAD Jan 13 23:47:43.857000 audit: BPF prog-id=38 op=UNLOAD Jan 13 23:47:43.857000 audit: BPF prog-id=39 op=UNLOAD Jan 13 23:47:43.858000 audit: BPF prog-id=49 op=LOAD Jan 13 23:47:43.858000 audit: BPF prog-id=34 op=UNLOAD Jan 13 23:47:43.858000 audit: BPF prog-id=50 op=LOAD Jan 13 23:47:43.858000 audit: BPF prog-id=51 op=LOAD Jan 13 23:47:43.858000 audit: BPF prog-id=35 op=UNLOAD Jan 13 23:47:43.858000 audit: BPF prog-id=36 op=UNLOAD Jan 13 23:47:43.859000 audit: BPF prog-id=52 op=LOAD Jan 13 23:47:43.859000 audit: BPF prog-id=40 op=UNLOAD Jan 13 23:47:43.859000 audit: BPF prog-id=53 op=LOAD Jan 13 23:47:43.859000 audit: BPF prog-id=54 op=LOAD Jan 13 23:47:43.859000 audit: BPF prog-id=41 op=UNLOAD Jan 13 23:47:43.859000 audit: BPF prog-id=42 op=UNLOAD Jan 13 23:47:43.860000 audit: BPF prog-id=55 op=LOAD Jan 13 23:47:43.860000 audit: BPF prog-id=33 op=UNLOAD Jan 13 23:47:43.860000 audit: BPF prog-id=56 op=LOAD Jan 13 23:47:43.860000 audit: BPF prog-id=57 op=LOAD Jan 13 23:47:43.860000 audit: BPF prog-id=28 op=UNLOAD Jan 13 23:47:43.860000 audit: BPF prog-id=29 op=UNLOAD Jan 13 23:47:43.866620 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 23:47:43.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.892993 systemd[1]: Finished ensure-sysext.service. 
Jan 13 23:47:43.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.908414 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 23:47:43.910157 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 23:47:43.911159 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 23:47:43.912124 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 23:47:43.919135 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 23:47:43.920850 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 23:47:43.923215 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 23:47:43.928190 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Jan 13 23:47:43.929279 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 23:47:43.929389 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 13 23:47:43.931013 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 23:47:43.933193 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 13 23:47:43.934118 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 13 23:47:43.935661 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 23:47:43.938000 audit: BPF prog-id=58 op=LOAD Jan 13 23:47:43.941202 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 23:47:43.942346 systemd[1]: Reached target time-set.target - System Time Set. Jan 13 23:47:43.945579 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 13 23:47:43.945639 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 13 23:47:43.948125 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 13 23:47:43.951011 kernel: PTP clock support registered Jan 13 23:47:43.952209 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 23:47:43.954743 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 23:47:43.960327 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 23:47:43.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.961684 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 23:47:43.963018 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 13 23:47:43.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.964156 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 23:47:43.964329 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 23:47:43.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.967292 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 23:47:43.967641 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 23:47:43.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.969301 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Jan 13 23:47:43.969787 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Jan 13 23:47:43.969000 audit[1588]: SYSTEM_BOOT pid=1588 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.973027 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 23:47:43.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.981461 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 23:47:43.981591 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 23:47:43.990280 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Jan 13 23:47:43.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:47:43.993010 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 23:47:44.002000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 13 23:47:44.002000 audit[1615]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc2c7e290 a2=420 a3=0 items=0 ppid=1570 pid=1615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:47:44.002000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 13 23:47:44.004591 augenrules[1615]: No rules Jan 13 23:47:44.008373 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 23:47:44.008617 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 23:47:44.032264 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 23:47:44.033557 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 23:47:44.036180 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 23:47:44.041029 systemd-networkd[1586]: lo: Link UP Jan 13 23:47:44.041038 systemd-networkd[1586]: lo: Gained carrier Jan 13 23:47:44.042435 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 23:47:44.042802 systemd-networkd[1586]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:47:44.042814 systemd-networkd[1586]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 23:47:44.043550 systemd[1]: Reached target network.target - Network. Jan 13 23:47:44.044113 systemd-networkd[1586]: eth0: Link UP Jan 13 23:47:44.044590 systemd-networkd[1586]: eth0: Gained carrier Jan 13 23:47:44.044614 systemd-networkd[1586]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 13 23:47:44.045539 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 13 23:47:44.047648 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 23:47:44.061126 systemd-networkd[1586]: eth0: DHCPv4 address 10.0.15.225/25, gateway 10.0.15.129 acquired from 10.0.15.129 Jan 13 23:47:44.065315 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 13 23:47:44.420807 ldconfig[1578]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 23:47:44.426230 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Jan 13 23:47:44.429973 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 23:47:44.454438 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 23:47:44.455603 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 23:47:44.457333 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 23:47:44.458622 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 23:47:44.459807 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 23:47:44.460828 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 23:47:44.461967 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 13 23:47:44.463062 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 13 23:47:44.463915 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 23:47:44.465010 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 23:47:44.465046 systemd[1]: Reached target paths.target - Path Units. Jan 13 23:47:44.465711 systemd[1]: Reached target timers.target - Timer Units. Jan 13 23:47:44.467936 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 23:47:44.470096 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 23:47:44.472657 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 13 23:47:44.473900 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 13 23:47:44.475022 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 13 23:47:44.477857 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 23:47:44.479082 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 13 23:47:44.480564 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 23:47:44.481522 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 23:47:44.482299 systemd[1]: Reached target basic.target - Basic System. Jan 13 23:47:44.483035 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 13 23:47:44.483069 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 13 23:47:44.485467 systemd[1]: Starting chronyd.service - NTP client/server... Jan 13 23:47:44.487045 systemd[1]: Starting containerd.service - containerd container runtime... Jan 13 23:47:44.488930 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 13 23:47:44.492102 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 13 23:47:44.493709 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 23:47:44.496988 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 13 23:47:44.497137 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 13 23:47:44.498891 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 13 23:47:44.499764 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 13 23:47:44.501292 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 13 23:47:44.504306 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 13 23:47:44.510299 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 13 23:47:44.513135 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 13 23:47:44.513473 jq[1642]: false Jan 13 23:47:44.521338 extend-filesystems[1645]: Found /dev/vda6 Jan 13 23:47:44.524090 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 13 23:47:44.523390 chronyd[1637]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 13 23:47:44.525004 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 13 23:47:44.525521 chronyd[1637]: Loaded seccomp filter (level 2) Jan 13 23:47:44.525627 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 13 23:47:44.526498 systemd[1]: Starting update-engine.service - Update Engine... Jan 13 23:47:44.527316 extend-filesystems[1645]: Found /dev/vda9 Jan 13 23:47:44.530082 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 13 23:47:44.531843 systemd[1]: Started chronyd.service - NTP client/server. Jan 13 23:47:44.533604 extend-filesystems[1645]: Checking size of /dev/vda9 Jan 13 23:47:44.541537 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 23:47:44.543094 jq[1661]: true Jan 13 23:47:44.544433 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 13 23:47:44.544664 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 13 23:47:44.545046 systemd[1]: motdgen.service: Deactivated successfully. Jan 13 23:47:44.545250 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 13 23:47:44.547685 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 13 23:47:44.548063 extend-filesystems[1645]: Resized partition /dev/vda9 Jan 13 23:47:44.548138 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 13 23:47:44.553134 extend-filesystems[1674]: resize2fs 1.47.3 (8-Jul-2025) Jan 13 23:47:44.563996 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 11516923 blocks Jan 13 23:47:44.583801 update_engine[1659]: I20260113 23:47:44.581401 1659 main.cc:92] Flatcar Update Engine starting Jan 13 23:47:44.591766 tar[1673]: linux-arm64/LICENSE Jan 13 23:47:44.592046 tar[1673]: linux-arm64/helm Jan 13 23:47:44.600162 jq[1675]: true Jan 13 23:47:44.605368 dbus-daemon[1640]: [system] SELinux support is enabled Jan 13 23:47:44.607341 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 13 23:47:44.611164 update_engine[1659]: I20260113 23:47:44.611101 1659 update_check_scheduler.cc:74] Next update check in 10m32s Jan 13 23:47:44.611875 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 13 23:47:44.611907 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 13 23:47:44.613240 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 13 23:47:44.613266 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 13 23:47:44.614337 systemd[1]: Started update-engine.service - Update Engine. Jan 13 23:47:44.618379 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 13 23:47:44.660017 systemd-logind[1653]: New seat seat0. Jan 13 23:47:44.663038 locksmithd[1695]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 13 23:47:44.664226 systemd-logind[1653]: Watching system buttons on /dev/input/event0 (Power Button) Jan 13 23:47:44.664252 systemd-logind[1653]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 13 23:47:44.667305 systemd[1]: Started systemd-logind.service - User Login Management. Jan 13 23:47:44.670869 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 13 23:47:44.731017 bash[1722]: Updated "/home/core/.ssh/authorized_keys" Jan 13 23:47:44.737444 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 23:47:44.743025 containerd[1676]: time="2026-01-13T23:47:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 13 23:47:44.743025 containerd[1676]: time="2026-01-13T23:47:44.742241440Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 13 23:47:44.741300 systemd[1]: Starting sshkeys.service... 
Jan 13 23:47:44.754626 containerd[1676]: time="2026-01-13T23:47:44.754426120Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.56µs" Jan 13 23:47:44.754626 containerd[1676]: time="2026-01-13T23:47:44.754461480Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 13 23:47:44.754626 containerd[1676]: time="2026-01-13T23:47:44.754505560Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 13 23:47:44.754626 containerd[1676]: time="2026-01-13T23:47:44.754518120Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 13 23:47:44.754757 containerd[1676]: time="2026-01-13T23:47:44.754655160Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 13 23:47:44.754757 containerd[1676]: time="2026-01-13T23:47:44.754670920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 13 23:47:44.754757 containerd[1676]: time="2026-01-13T23:47:44.754717680Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 13 23:47:44.754757 containerd[1676]: time="2026-01-13T23:47:44.754728840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 13 23:47:44.755326 containerd[1676]: time="2026-01-13T23:47:44.755049240Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 13 23:47:44.755326 containerd[1676]: time="2026-01-13T23:47:44.755072560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 13 23:47:44.755326 containerd[1676]: time="2026-01-13T23:47:44.755084080Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 13 23:47:44.755326 containerd[1676]: time="2026-01-13T23:47:44.755092000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 13 23:47:44.755661 containerd[1676]: time="2026-01-13T23:47:44.755500000Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 13 23:47:44.755661 containerd[1676]: time="2026-01-13T23:47:44.755524720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 13 23:47:44.755661 containerd[1676]: time="2026-01-13T23:47:44.755611200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 13 23:47:44.755879 containerd[1676]: time="2026-01-13T23:47:44.755765600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 13 23:47:44.755879 containerd[1676]: time="2026-01-13T23:47:44.755789520Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 13 23:47:44.755879 containerd[1676]: time="2026-01-13T23:47:44.755799640Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 13 23:47:44.755879 containerd[1676]: time="2026-01-13T23:47:44.755852360Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 13 23:47:44.756116 containerd[1676]: time="2026-01-13T23:47:44.756095680Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 13 23:47:44.756196 containerd[1676]: time="2026-01-13T23:47:44.756175640Z" level=info msg="metadata content store policy set" policy=shared Jan 13 23:47:44.763894 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 13 23:47:44.766935 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 13 23:47:44.786999 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 13 23:47:44.795266 containerd[1676]: time="2026-01-13T23:47:44.795222240Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 13 23:47:44.795574 containerd[1676]: time="2026-01-13T23:47:44.795543880Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 13 23:47:44.795716 containerd[1676]: time="2026-01-13T23:47:44.795697680Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 13 23:47:44.795782 containerd[1676]: time="2026-01-13T23:47:44.795769880Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 13 23:47:44.795857 containerd[1676]: time="2026-01-13T23:47:44.795834960Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 13 23:47:44.797420 containerd[1676]: time="2026-01-13T23:47:44.796291680Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 13 23:47:44.797420 containerd[1676]: time="2026-01-13T23:47:44.796319960Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 13 23:47:44.797420 containerd[1676]: time="2026-01-13T23:47:44.796330840Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 13 23:47:44.797420 containerd[1676]: time="2026-01-13T23:47:44.796343880Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 13 23:47:44.797420 containerd[1676]: time="2026-01-13T23:47:44.796355360Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 13 23:47:44.797420 containerd[1676]: time="2026-01-13T23:47:44.796368840Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 13 23:47:44.797420 containerd[1676]: time="2026-01-13T23:47:44.796380400Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 13 23:47:44.797420 containerd[1676]: time="2026-01-13T23:47:44.796391920Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 13 23:47:44.797420 containerd[1676]: 
time="2026-01-13T23:47:44.796479160Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 13 23:47:44.797420 containerd[1676]: time="2026-01-13T23:47:44.796625520Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 13 23:47:44.797420 containerd[1676]: time="2026-01-13T23:47:44.796646240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 13 23:47:44.797420 containerd[1676]: time="2026-01-13T23:47:44.796662160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 13 23:47:44.797420 containerd[1676]: time="2026-01-13T23:47:44.796673160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 13 23:47:44.797420 containerd[1676]: time="2026-01-13T23:47:44.796683440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 13 23:47:44.797829 containerd[1676]: time="2026-01-13T23:47:44.796692800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 13 23:47:44.797829 containerd[1676]: time="2026-01-13T23:47:44.796704360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 13 23:47:44.797829 containerd[1676]: time="2026-01-13T23:47:44.796724680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 13 23:47:44.797829 containerd[1676]: time="2026-01-13T23:47:44.796735600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 13 23:47:44.797829 containerd[1676]: time="2026-01-13T23:47:44.796746200Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 13 23:47:44.797829 containerd[1676]: time="2026-01-13T23:47:44.796755600Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 13 23:47:44.797829 containerd[1676]: time="2026-01-13T23:47:44.796782120Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 13 23:47:44.797829 containerd[1676]: time="2026-01-13T23:47:44.796820440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 13 23:47:44.797829 containerd[1676]: time="2026-01-13T23:47:44.796833920Z" level=info msg="Start snapshots syncer" Jan 13 23:47:44.797829 containerd[1676]: time="2026-01-13T23:47:44.796871880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 13 23:47:44.798158 containerd[1676]: time="2026-01-13T23:47:44.797146160Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 13 23:47:44.798158 containerd[1676]: time="2026-01-13T23:47:44.797195720Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 13 23:47:44.798283 containerd[1676]: time="2026-01-13T23:47:44.797248760Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 13 23:47:44.798283 containerd[1676]: time="2026-01-13T23:47:44.797342440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 13 23:47:44.798283 containerd[1676]: time="2026-01-13T23:47:44.797368240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 13 23:47:44.798283 containerd[1676]: time="2026-01-13T23:47:44.797381360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 13 23:47:44.798283 containerd[1676]: time="2026-01-13T23:47:44.797391440Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 13 23:47:44.798283 containerd[1676]: time="2026-01-13T23:47:44.797402160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 13 23:47:44.798283 containerd[1676]: time="2026-01-13T23:47:44.797553080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 13 23:47:44.798283 containerd[1676]: time="2026-01-13T23:47:44.797576040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 13 23:47:44.798283 containerd[1676]: time="2026-01-13T23:47:44.797586680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 13 
23:47:44.798283 containerd[1676]: time="2026-01-13T23:47:44.797598320Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 13 23:47:44.798283 containerd[1676]: time="2026-01-13T23:47:44.797631240Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 13 23:47:44.798283 containerd[1676]: time="2026-01-13T23:47:44.797644680Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 13 23:47:44.798283 containerd[1676]: time="2026-01-13T23:47:44.797652920Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 13 23:47:44.798490 containerd[1676]: time="2026-01-13T23:47:44.797661840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 13 23:47:44.798490 containerd[1676]: time="2026-01-13T23:47:44.797669760Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 13 23:47:44.798490 containerd[1676]: time="2026-01-13T23:47:44.797678960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 13 23:47:44.798490 containerd[1676]: time="2026-01-13T23:47:44.797830560Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 13 23:47:44.798490 containerd[1676]: time="2026-01-13T23:47:44.797945560Z" level=info msg="runtime interface created" Jan 13 23:47:44.798490 containerd[1676]: time="2026-01-13T23:47:44.797952960Z" level=info msg="created NRI interface" Jan 13 23:47:44.798490 containerd[1676]: time="2026-01-13T23:47:44.798104400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 13 23:47:44.798490 containerd[1676]: time="2026-01-13T23:47:44.798121320Z" level=info msg="Connect containerd service" Jan 13 23:47:44.798490 containerd[1676]: time="2026-01-13T23:47:44.798147480Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 13 23:47:44.799568 containerd[1676]: time="2026-01-13T23:47:44.799534800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 23:47:44.894640 containerd[1676]: time="2026-01-13T23:47:44.894511120Z" level=info msg="Start subscribing containerd event" Jan 13 23:47:44.894640 containerd[1676]: time="2026-01-13T23:47:44.894588280Z" level=info msg="Start recovering state" Jan 13 23:47:44.894750 containerd[1676]: time="2026-01-13T23:47:44.894691840Z" level=info msg="Start event monitor" Jan 13 23:47:44.894750 containerd[1676]: time="2026-01-13T23:47:44.894704240Z" level=info msg="Start cni network conf syncer for default" Jan 13 23:47:44.894750 containerd[1676]: time="2026-01-13T23:47:44.894716480Z" level=info msg="Start streaming server" Jan 13 23:47:44.894750 containerd[1676]: time="2026-01-13T23:47:44.894727520Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 13 23:47:44.894750 containerd[1676]: time="2026-01-13T23:47:44.894735960Z" level=info msg="runtime interface starting up..." 
Jan 13 23:47:44.894750 containerd[1676]: time="2026-01-13T23:47:44.894744800Z" level=info msg="starting plugins..." Jan 13 23:47:44.894863 containerd[1676]: time="2026-01-13T23:47:44.894757440Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 13 23:47:44.897303 containerd[1676]: time="2026-01-13T23:47:44.895893400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 13 23:47:44.897303 containerd[1676]: time="2026-01-13T23:47:44.896060160Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 13 23:47:44.897303 containerd[1676]: time="2026-01-13T23:47:44.896127240Z" level=info msg="containerd successfully booted in 0.155519s" Jan 13 23:47:44.896299 systemd[1]: Started containerd.service - containerd container runtime. Jan 13 23:47:44.903981 kernel: EXT4-fs (vda9): resized filesystem to 11516923 Jan 13 23:47:44.933173 extend-filesystems[1674]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 23:47:44.933173 extend-filesystems[1674]: old_desc_blocks = 1, new_desc_blocks = 6 Jan 13 23:47:44.933173 extend-filesystems[1674]: The filesystem on /dev/vda9 is now 11516923 (4k) blocks long. Jan 13 23:47:44.935911 extend-filesystems[1645]: Resized filesystem in /dev/vda9 Jan 13 23:47:44.936399 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 23:47:44.936782 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 23:47:44.981911 sshd_keygen[1668]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 13 23:47:44.983188 tar[1673]: linux-arm64/README.md Jan 13 23:47:45.007142 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 13 23:47:45.010147 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 23:47:45.011940 systemd[1]: Started sshd@0-10.0.15.225:22-4.153.228.146:46412.service - OpenSSH per-connection server daemon (4.153.228.146:46412). Jan 13 23:47:45.014983 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 13 23:47:45.030764 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 23:47:45.031049 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 23:47:45.033667 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 23:47:45.059371 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 23:47:45.062690 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 23:47:45.066358 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 13 23:47:45.067603 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 23:47:45.485117 systemd-networkd[1586]: eth0: Gained IPv6LL Jan 13 23:47:45.488000 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 13 23:47:45.489510 systemd[1]: Reached target network-online.target - Network is Online. Jan 13 23:47:45.491781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:47:45.493900 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 13 23:47:45.514005 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 13 23:47:45.529100 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 13 23:47:45.570101 sshd[1758]: Accepted publickey for core from 4.153.228.146 port 46412 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:47:45.571675 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:47:45.585602 systemd-logind[1653]: New session 1 of user core. Jan 13 23:47:45.587644 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 13 23:47:45.590195 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 13 23:47:45.619027 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 13 23:47:45.622750 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 13 23:47:45.641433 (systemd)[1786]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:47:45.644121 systemd-logind[1653]: New session 2 of user core. Jan 13 23:47:45.758808 systemd[1786]: Queued start job for default target default.target. Jan 13 23:47:45.769889 systemd[1786]: Created slice app.slice - User Application Slice. Jan 13 23:47:45.769931 systemd[1786]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 13 23:47:45.769943 systemd[1786]: Reached target paths.target - Paths. Jan 13 23:47:45.770017 systemd[1786]: Reached target timers.target - Timers. Jan 13 23:47:45.771289 systemd[1786]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 13 23:47:45.772076 systemd[1786]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 13 23:47:45.782735 systemd[1786]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 13 23:47:45.784167 systemd[1786]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 13 23:47:45.784279 systemd[1786]: Reached target sockets.target - Sockets. Jan 13 23:47:45.784328 systemd[1786]: Reached target basic.target - Basic System. Jan 13 23:47:45.784360 systemd[1786]: Reached target default.target - Main User Target. Jan 13 23:47:45.784386 systemd[1786]: Startup finished in 134ms. Jan 13 23:47:45.784626 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 13 23:47:45.787117 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 13 23:47:45.800036 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 13 23:47:46.103404 systemd[1]: Started sshd@1-10.0.15.225:22-4.153.228.146:46408.service - OpenSSH per-connection server daemon (4.153.228.146:46408). Jan 13 23:47:46.395799 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:47:46.400195 (kubelet)[1809]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:47:46.621919 sshd[1801]: Accepted publickey for core from 4.153.228.146 port 46408 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:47:46.623320 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:47:46.628410 systemd-logind[1653]: New session 3 of user core. Jan 13 23:47:46.638265 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 13 23:47:46.917006 sshd[1816]: Connection closed by 4.153.228.146 port 46408 Jan 13 23:47:46.916511 sshd-session[1801]: pam_unix(sshd:session): session closed for user core Jan 13 23:47:46.920747 systemd[1]: sshd@1-10.0.15.225:22-4.153.228.146:46408.service: Deactivated successfully. 
Jan 13 23:47:46.925337 systemd[1]: session-3.scope: Deactivated successfully. Jan 13 23:47:46.926208 systemd-logind[1653]: Session 3 logged out. Waiting for processes to exit. Jan 13 23:47:46.929390 systemd-logind[1653]: Removed session 3. Jan 13 23:47:46.950301 kubelet[1809]: E0113 23:47:46.950217 1809 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:47:46.952591 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:47:46.952736 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 23:47:46.953323 systemd[1]: kubelet.service: Consumed 787ms CPU time, 257.9M memory peak. Jan 13 23:47:47.027073 systemd[1]: Started sshd@2-10.0.15.225:22-4.153.228.146:46418.service - OpenSSH per-connection server daemon (4.153.228.146:46418). Jan 13 23:47:47.529001 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 13 23:47:47.546351 sshd[1824]: Accepted publickey for core from 4.153.228.146 port 46418 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:47:47.547643 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:47:47.552457 systemd-logind[1653]: New session 4 of user core. Jan 13 23:47:47.563184 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 13 23:47:47.811026 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 13 23:47:47.841000 sshd[1829]: Connection closed by 4.153.228.146 port 46418 Jan 13 23:47:47.840528 sshd-session[1824]: pam_unix(sshd:session): session closed for user core Jan 13 23:47:47.844900 systemd[1]: sshd@2-10.0.15.225:22-4.153.228.146:46418.service: Deactivated successfully. Jan 13 23:47:47.846612 systemd[1]: session-4.scope: Deactivated successfully. Jan 13 23:47:47.847382 systemd-logind[1653]: Session 4 logged out. Waiting for processes to exit. Jan 13 23:47:47.848649 systemd-logind[1653]: Removed session 4. Jan 13 23:47:51.540004 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 13 23:47:51.546285 coreos-metadata[1639]: Jan 13 23:47:51.546 WARN failed to locate config-drive, using the metadata service API instead Jan 13 23:47:51.562208 coreos-metadata[1639]: Jan 13 23:47:51.562 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 13 23:47:51.822998 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 13 23:47:51.828335 coreos-metadata[1726]: Jan 13 23:47:51.828 WARN failed to locate config-drive, using the metadata service API instead Jan 13 23:47:51.841243 coreos-metadata[1726]: Jan 13 23:47:51.841 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 13 23:47:52.399921 coreos-metadata[1726]: Jan 13 23:47:52.399 INFO Fetch successful Jan 13 23:47:52.399921 coreos-metadata[1726]: Jan 13 23:47:52.399 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 13 23:47:52.540168 coreos-metadata[1726]: Jan 13 23:47:52.540 INFO Fetch successful Jan 13 23:47:52.542463 unknown[1726]: wrote ssh authorized keys file for user: core Jan 13 23:47:52.569659 update-ssh-keys[1844]: Updated "/home/core/.ssh/authorized_keys" Jan 13 23:47:52.570220 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Jan 13 23:47:52.572361 systemd[1]: Finished sshkeys.service. Jan 13 23:47:52.611632 coreos-metadata[1639]: Jan 13 23:47:52.611 INFO Fetch successful Jan 13 23:47:52.612050 coreos-metadata[1639]: Jan 13 23:47:52.611 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 13 23:47:52.752201 coreos-metadata[1639]: Jan 13 23:47:52.751 INFO Fetch successful Jan 13 23:47:52.752201 coreos-metadata[1639]: Jan 13 23:47:52.752 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 13 23:47:52.891105 coreos-metadata[1639]: Jan 13 23:47:52.891 INFO Fetch successful Jan 13 23:47:52.891105 coreos-metadata[1639]: Jan 13 23:47:52.891 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 13 23:47:53.031449 coreos-metadata[1639]: Jan 13 23:47:53.031 INFO Fetch successful Jan 13 23:47:53.031449 coreos-metadata[1639]: Jan 13 23:47:53.031 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 13 23:47:53.172832 coreos-metadata[1639]: Jan 13 23:47:53.172 INFO Fetch successful Jan 13 23:47:53.172832 coreos-metadata[1639]: Jan 13 23:47:53.172 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 13 23:47:53.310077 coreos-metadata[1639]: Jan 13 23:47:53.309 INFO Fetch successful Jan 13 23:47:53.337072 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 13 23:47:53.337496 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 13 23:47:53.337629 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 13 23:47:53.341089 systemd[1]: Startup finished in 2.556s (kernel) + 13.120s (initrd) + 11.154s (userspace) = 26.832s. Jan 13 23:47:57.203409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 13 23:47:57.204881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:47:57.347986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:47:57.352023 (kubelet)[1860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:47:57.389307 kubelet[1860]: E0113 23:47:57.389246 1860 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:47:57.392285 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:47:57.392416 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 23:47:57.392752 systemd[1]: kubelet.service: Consumed 146ms CPU time, 107.5M memory peak. Jan 13 23:47:57.951326 systemd[1]: Started sshd@3-10.0.15.225:22-4.153.228.146:42682.service - OpenSSH per-connection server daemon (4.153.228.146:42682). Jan 13 23:47:58.479741 sshd[1869]: Accepted publickey for core from 4.153.228.146 port 42682 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:47:58.481070 sshd-session[1869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:47:58.486003 systemd-logind[1653]: New session 5 of user core. Jan 13 23:47:58.500330 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 13 23:47:58.773063 sshd[1873]: Connection closed by 4.153.228.146 port 42682 Jan 13 23:47:58.773328 sshd-session[1869]: pam_unix(sshd:session): session closed for user core Jan 13 23:47:58.776733 systemd[1]: sshd@3-10.0.15.225:22-4.153.228.146:42682.service: Deactivated successfully. Jan 13 23:47:58.778373 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 23:47:58.779670 systemd-logind[1653]: Session 5 logged out. Waiting for processes to exit. Jan 13 23:47:58.781005 systemd-logind[1653]: Removed session 5. Jan 13 23:47:58.887533 systemd[1]: Started sshd@4-10.0.15.225:22-4.153.228.146:42698.service - OpenSSH per-connection server daemon (4.153.228.146:42698). Jan 13 23:47:59.443923 sshd[1879]: Accepted publickey for core from 4.153.228.146 port 42698 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:47:59.445341 sshd-session[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:47:59.450309 systemd-logind[1653]: New session 6 of user core. Jan 13 23:47:59.464350 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 23:47:59.742443 sshd[1883]: Connection closed by 4.153.228.146 port 42698 Jan 13 23:47:59.742684 sshd-session[1879]: pam_unix(sshd:session): session closed for user core Jan 13 23:47:59.746673 systemd[1]: sshd@4-10.0.15.225:22-4.153.228.146:42698.service: Deactivated successfully. Jan 13 23:47:59.748365 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 23:47:59.749625 systemd-logind[1653]: Session 6 logged out. Waiting for processes to exit. Jan 13 23:47:59.750746 systemd-logind[1653]: Removed session 6. Jan 13 23:47:59.853800 systemd[1]: Started sshd@5-10.0.15.225:22-4.153.228.146:42712.service - OpenSSH per-connection server daemon (4.153.228.146:42712). Jan 13 23:48:00.375006 sshd[1889]: Accepted publickey for core from 4.153.228.146 port 42712 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:48:00.376359 sshd-session[1889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:48:00.380477 systemd-logind[1653]: New session 7 of user core. Jan 13 23:48:00.391182 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 23:48:00.669377 sshd[1893]: Connection closed by 4.153.228.146 port 42712 Jan 13 23:48:00.670137 sshd-session[1889]: pam_unix(sshd:session): session closed for user core Jan 13 23:48:00.674308 systemd[1]: sshd@5-10.0.15.225:22-4.153.228.146:42712.service: Deactivated successfully. Jan 13 23:48:00.675866 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 23:48:00.676542 systemd-logind[1653]: Session 7 logged out. Waiting for processes to exit. Jan 13 23:48:00.677430 systemd-logind[1653]: Removed session 7. Jan 13 23:48:00.780433 systemd[1]: Started sshd@6-10.0.15.225:22-4.153.228.146:42722.service - OpenSSH per-connection server daemon (4.153.228.146:42722). Jan 13 23:48:01.299925 sshd[1899]: Accepted publickey for core from 4.153.228.146 port 42722 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:48:01.300775 sshd-session[1899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:48:01.304568 systemd-logind[1653]: New session 8 of user core. Jan 13 23:48:01.311109 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 13 23:48:01.507230 sudo[1904]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 13 23:48:01.507496 sudo[1904]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 23:48:01.520154 sudo[1904]: pam_unix(sudo:session): session closed for user root Jan 13 23:48:01.618719 sshd[1903]: Connection closed by 4.153.228.146 port 42722 Jan 13 23:48:01.617427 sshd-session[1899]: pam_unix(sshd:session): session closed for user core Jan 13 23:48:01.621972 systemd-logind[1653]: Session 8 logged out. Waiting for processes to exit. Jan 13 23:48:01.622245 systemd[1]: sshd@6-10.0.15.225:22-4.153.228.146:42722.service: Deactivated successfully. Jan 13 23:48:01.623755 systemd[1]: session-8.scope: Deactivated successfully. Jan 13 23:48:01.627554 systemd-logind[1653]: Removed session 8. Jan 13 23:48:01.734818 systemd[1]: Started sshd@7-10.0.15.225:22-4.153.228.146:42726.service - OpenSSH per-connection server daemon (4.153.228.146:42726). Jan 13 23:48:02.255016 sshd[1911]: Accepted publickey for core from 4.153.228.146 port 42726 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:48:02.256263 sshd-session[1911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:48:02.260214 systemd-logind[1653]: New session 9 of user core. Jan 13 23:48:02.271150 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 13 23:48:02.454081 sudo[1917]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 23:48:02.454366 sudo[1917]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 23:48:02.456872 sudo[1917]: pam_unix(sudo:session): session closed for user root Jan 13 23:48:02.462783 sudo[1916]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 13 23:48:02.463053 sudo[1916]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 23:48:02.470776 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 23:48:02.502000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 13 23:48:02.504030 kernel: kauditd_printk_skb: 188 callbacks suppressed Jan 13 23:48:02.504084 kernel: audit: type=1305 audit(1768348082.502:232): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 13 23:48:02.504104 augenrules[1941]: No rules Jan 13 23:48:02.502000 audit[1941]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffda739910 a2=420 a3=0 items=0 ppid=1922 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:02.506847 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 23:48:02.508363 kernel: audit: type=1300 audit(1768348082.502:232): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffda739910 a2=420 a3=0 items=0 ppid=1922 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:02.508360 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 13 23:48:02.502000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 13 23:48:02.509637 sudo[1916]: pam_unix(sudo:session): session closed for user root Jan 13 23:48:02.510117 kernel: audit: type=1327 audit(1768348082.502:232): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 13 23:48:02.510156 kernel: audit: type=1130 audit(1768348082.508:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:02.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:02.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:02.514174 kernel: audit: type=1131 audit(1768348082.508:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:02.514288 kernel: audit: type=1106 audit(1768348082.508:235): pid=1916 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:48:02.508000 audit[1916]: USER_END pid=1916 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:48:02.516382 kernel: audit: type=1104 audit(1768348082.508:236): pid=1916 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:48:02.508000 audit[1916]: CRED_DISP pid=1916 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:48:02.606668 sshd[1915]: Connection closed by 4.153.228.146 port 42726 Jan 13 23:48:02.606434 sshd-session[1911]: pam_unix(sshd:session): session closed for user core Jan 13 23:48:02.608000 audit[1911]: USER_END pid=1911 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:48:02.612100 systemd[1]: sshd@7-10.0.15.225:22-4.153.228.146:42726.service: Deactivated successfully. Jan 13 23:48:02.608000 audit[1911]: CRED_DISP pid=1911 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:48:02.613729 systemd[1]: session-9.scope: Deactivated successfully. 
Jan 13 23:48:02.615087 kernel: audit: type=1106 audit(1768348082.608:237): pid=1911 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:48:02.615136 kernel: audit: type=1104 audit(1768348082.608:238): pid=1911 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:48:02.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.15.225:22-4.153.228.146:42726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:02.617866 kernel: audit: type=1131 audit(1768348082.611:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.15.225:22-4.153.228.146:42726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:02.618096 systemd-logind[1653]: Session 9 logged out. Waiting for processes to exit. Jan 13 23:48:02.619277 systemd-logind[1653]: Removed session 9. Jan 13 23:48:02.716231 systemd[1]: Started sshd@8-10.0.15.225:22-4.153.228.146:42730.service - OpenSSH per-connection server daemon (4.153.228.146:42730). Jan 13 23:48:02.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.15.225:22-4.153.228.146:42730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:03.251000 audit[1950]: USER_ACCT pid=1950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:48:03.252680 sshd[1950]: Accepted publickey for core from 4.153.228.146 port 42730 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:48:03.253000 audit[1950]: CRED_ACQ pid=1950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:48:03.253000 audit[1950]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeea53660 a2=3 a3=0 items=0 ppid=1 pid=1950 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:03.253000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:48:03.254480 sshd-session[1950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:48:03.258202 systemd-logind[1653]: New session 10 of user core. Jan 13 23:48:03.265322 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 13 23:48:03.266000 audit[1950]: USER_START pid=1950 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:48:03.268000 audit[1954]: CRED_ACQ pid=1954 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:48:03.451000 audit[1955]: USER_ACCT pid=1955 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:48:03.453037 sudo[1955]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 23:48:03.452000 audit[1955]: CRED_REFR pid=1955 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:48:03.452000 audit[1955]: USER_START pid=1955 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:48:03.453304 sudo[1955]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 23:48:03.771749 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 13 23:48:03.784520 (dockerd)[1977]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 13 23:48:04.014773 dockerd[1977]: time="2026-01-13T23:48:04.014714000Z" level=info msg="Starting up" Jan 13 23:48:04.015591 dockerd[1977]: time="2026-01-13T23:48:04.015546120Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 13 23:48:04.025915 dockerd[1977]: time="2026-01-13T23:48:04.025806960Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 13 23:48:04.083667 dockerd[1977]: time="2026-01-13T23:48:04.083628920Z" level=info msg="Loading containers: start." 
Jan 13 23:48:04.093996 kernel: Initializing XFRM netlink socket Jan 13 23:48:04.139000 audit[2028]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.139000 audit[2028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc4e36400 a2=0 a3=0 items=0 ppid=1977 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.139000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 13 23:48:04.141000 audit[2030]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.141000 audit[2030]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffcb7a4de0 a2=0 a3=0 items=0 ppid=1977 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.141000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 13 23:48:04.143000 audit[2032]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.143000 audit[2032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee2a4180 a2=0 a3=0 items=0 ppid=1977 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.143000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 13 23:48:04.144000 audit[2034]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.144000 audit[2034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb01f670 a2=0 a3=0 items=0 ppid=1977 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.144000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 13 23:48:04.146000 audit[2036]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.146000 audit[2036]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffee94fb00 a2=0 a3=0 items=0 ppid=1977 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.146000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 13 23:48:04.148000 audit[2038]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.148000 audit[2038]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=112 a0=3 a1=ffffdce539e0 a2=0 a3=0 items=0 ppid=1977 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.148000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 13 23:48:04.150000 audit[2040]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.150000 audit[2040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffce99a6e0 a2=0 a3=0 items=0 ppid=1977 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.150000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 13 23:48:04.152000 audit[2042]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.152000 audit[2042]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd77fe3f0 a2=0 a3=0 items=0 ppid=1977 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.152000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 13 23:48:04.184000 audit[2045]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.184000 audit[2045]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffe0645b80 a2=0 a3=0 items=0 ppid=1977 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.184000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 13 23:48:04.186000 audit[2047]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.186000 audit[2047]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffdebb9d70 a2=0 a3=0 items=0 ppid=1977 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.186000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 13 23:48:04.189000 audit[2049]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.189000 audit[2049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 
a1=ffffefda0cf0 a2=0 a3=0 items=0 ppid=1977 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.189000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 13 23:48:04.191000 audit[2051]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.191000 audit[2051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffff8db8e60 a2=0 a3=0 items=0 ppid=1977 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.191000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 13 23:48:04.193000 audit[2053]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.193000 audit[2053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffff2312c70 a2=0 a3=0 items=0 ppid=1977 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.193000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 13 23:48:04.232000 audit[2083]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:04.232000 audit[2083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe24b29b0 a2=0 a3=0 items=0 ppid=1977 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.232000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 13 23:48:04.234000 audit[2085]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:04.234000 audit[2085]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffca509d00 a2=0 a3=0 items=0 ppid=1977 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.234000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 13 23:48:04.237000 audit[2087]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:04.237000 audit[2087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffddd00b70 a2=0 a3=0 items=0 ppid=1977 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.237000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 13 23:48:04.239000 audit[2089]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:04.239000 audit[2089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffba345a0 a2=0 a3=0 items=0 ppid=1977 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.239000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 13 23:48:04.241000 audit[2091]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:04.241000 audit[2091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcfaa8180 a2=0 a3=0 items=0 ppid=1977 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.241000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 13 23:48:04.243000 audit[2093]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:04.243000 audit[2093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe1c11380 a2=0 a3=0 items=0 ppid=1977 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.243000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 13 23:48:04.244000 audit[2095]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:04.244000 audit[2095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffcc8ae60 a2=0 a3=0 items=0 ppid=1977 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.244000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 13 23:48:04.250000 audit[2097]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:04.250000 audit[2097]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffffc398020 a2=0 a3=0 items=0 ppid=1977 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.250000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 13 23:48:04.252000 audit[2099]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:04.252000 audit[2099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=fffff049c4d0 a2=0 a3=0 items=0 ppid=1977 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.252000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 13 23:48:04.254000 audit[2101]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:04.254000 audit[2101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffda871590 a2=0 a3=0 items=0 ppid=1977 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.254000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 13 23:48:04.256000 audit[2103]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:04.256000 audit[2103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffe98f83e0 a2=0 a3=0 items=0 ppid=1977 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.256000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 13 23:48:04.258000 audit[2105]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:04.258000 audit[2105]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffcefe96f0 a2=0 a3=0 items=0 ppid=1977 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.258000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 13 23:48:04.260000 audit[2107]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:04.260000 audit[2107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffca52fdf0 a2=0 a3=0 items=0 ppid=1977 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.260000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 13 23:48:04.265000 audit[2112]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.265000 audit[2112]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffefb5dd70 a2=0 a3=0 items=0 ppid=1977 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.265000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 13 23:48:04.268000 audit[2114]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.268000 audit[2114]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff70415b0 a2=0 a3=0 items=0 ppid=1977 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.268000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 13 23:48:04.270000 audit[2116]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.270000 audit[2116]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffce1f8ec0 a2=0 a3=0 items=0 ppid=1977 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.270000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 13 23:48:04.272000 audit[2118]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:04.272000 audit[2118]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdcfe4bc0 a2=0 a3=0 items=0 ppid=1977 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.272000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 13 23:48:04.280000 audit[2120]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:04.280000 audit[2120]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc4dfea00 a2=0 a3=0 items=0 ppid=1977 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.280000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 13 23:48:04.282000 audit[2122]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2122 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:04.282000 audit[2122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffeedb22c0 a2=0 a3=0 items=0 ppid=1977 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.282000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 13 23:48:04.300000 audit[2127]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.300000 audit[2127]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=fffff1fcc990 a2=0 a3=0 items=0 ppid=1977 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.300000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 13 23:48:04.302000 audit[2129]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.302000 audit[2129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd0a776b0 a2=0 a3=0 items=0 ppid=1977 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.302000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 13 23:48:04.310000 audit[2137]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.310000 audit[2137]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffe2decfb0 a2=0 a3=0 items=0 ppid=1977 pid=2137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.310000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 13 23:48:04.320000 audit[2143]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.320000 audit[2143]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd40a4aa0 a2=0 a3=0 items=0 ppid=1977 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.320000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 13 23:48:04.322000 audit[2145]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 
23:48:04.322000 audit[2145]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=fffffaf74cd0 a2=0 a3=0 items=0 ppid=1977 pid=2145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.322000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 13 23:48:04.324000 audit[2147]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.324000 audit[2147]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc9ef8940 a2=0 a3=0 items=0 ppid=1977 pid=2147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.324000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 13 23:48:04.326000 audit[2149]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.326000 audit[2149]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=fffff6e16800 a2=0 a3=0 items=0 ppid=1977 pid=2149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.326000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 13 23:48:04.327000 audit[2151]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2151 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:04.327000 audit[2151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc6f03360 a2=0 a3=0 items=0 ppid=1977 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:04.327000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 13 23:48:04.329405 systemd-networkd[1586]: docker0: Link UP Jan 13 23:48:04.333342 dockerd[1977]: time="2026-01-13T23:48:04.333283520Z" level=info msg="Loading containers: done." Jan 13 23:48:04.345623 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1478471318-merged.mount: Deactivated successfully. 
Jan 13 23:48:04.356278 dockerd[1977]: time="2026-01-13T23:48:04.356197880Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 13 23:48:04.356435 dockerd[1977]: time="2026-01-13T23:48:04.356287680Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 13 23:48:04.356480 dockerd[1977]: time="2026-01-13T23:48:04.356458840Z" level=info msg="Initializing buildkit" Jan 13 23:48:04.383470 dockerd[1977]: time="2026-01-13T23:48:04.383400920Z" level=info msg="Completed buildkit initialization" Jan 13 23:48:04.389406 dockerd[1977]: time="2026-01-13T23:48:04.389331160Z" level=info msg="Daemon has completed initialization" Jan 13 23:48:04.389551 dockerd[1977]: time="2026-01-13T23:48:04.389405840Z" level=info msg="API listen on /run/docker.sock" Jan 13 23:48:04.389749 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 13 23:48:04.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:05.464550 containerd[1676]: time="2026-01-13T23:48:05.464491960Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 13 23:48:06.060857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1754917111.mount: Deactivated successfully. Jan 13 23:48:06.829555 containerd[1676]: time="2026-01-13T23:48:06.829469480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:06.831792 containerd[1676]: time="2026-01-13T23:48:06.831741560Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=24845792" Jan 13 23:48:06.832640 containerd[1676]: time="2026-01-13T23:48:06.832596200Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:06.835871 containerd[1676]: time="2026-01-13T23:48:06.835827160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:06.836775 containerd[1676]: time="2026-01-13T23:48:06.836740600Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.37219928s" Jan 13 23:48:06.836839 containerd[1676]: time="2026-01-13T23:48:06.836779680Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 13 23:48:06.837444 containerd[1676]: time="2026-01-13T23:48:06.837403560Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 13 23:48:07.642821 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 13 23:48:07.644467 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:48:07.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:07.780044 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:48:07.782856 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 13 23:48:07.782934 kernel: audit: type=1130 audit(1768348087.779:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:07.784534 (kubelet)[2257]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:48:08.310524 chronyd[1637]: Selected source PHC0 Jan 13 23:48:08.374567 kubelet[2257]: E0113 23:48:08.374491 2257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:48:08.376525 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:48:08.376646 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 23:48:08.376985 systemd[1]: kubelet.service: Consumed 139ms CPU time, 107.1M memory peak. Jan 13 23:48:08.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:48:08.379969 kernel: audit: type=1131 audit(1768348088.375:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 13 23:48:09.006416 containerd[1676]: time="2026-01-13T23:48:09.006371138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:09.007819 containerd[1676]: time="2026-01-13T23:48:09.007780208Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22613932" Jan 13 23:48:09.009416 containerd[1676]: time="2026-01-13T23:48:09.009365022Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:09.011769 containerd[1676]: time="2026-01-13T23:48:09.011741731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:09.012825 containerd[1676]: time="2026-01-13T23:48:09.012613008Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 2.175174075s" Jan 13 23:48:09.012825 containerd[1676]: time="2026-01-13T23:48:09.012650665Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 13 23:48:09.013128 containerd[1676]: time="2026-01-13T23:48:09.013107974Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 13 23:48:10.012699 containerd[1676]: time="2026-01-13T23:48:10.012658571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:10.014103 containerd[1676]: time="2026-01-13T23:48:10.014057778Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17608611" Jan 13 23:48:10.015381 containerd[1676]: time="2026-01-13T23:48:10.014968142Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:10.017283 containerd[1676]: time="2026-01-13T23:48:10.017241707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:10.018088 containerd[1676]: time="2026-01-13T23:48:10.018059451Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.004922363s" Jan 13 23:48:10.018128 containerd[1676]: time="2026-01-13T23:48:10.018092231Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 13 
23:48:10.018797 containerd[1676]: time="2026-01-13T23:48:10.018753481Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 13 23:48:10.822490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2886076422.mount: Deactivated successfully. Jan 13 23:48:11.029865 containerd[1676]: time="2026-01-13T23:48:11.029805660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:11.031140 containerd[1676]: time="2026-01-13T23:48:11.031094023Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=17713718" Jan 13 23:48:11.032110 containerd[1676]: time="2026-01-13T23:48:11.032086411Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:11.034767 containerd[1676]: time="2026-01-13T23:48:11.034721877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:11.035129 containerd[1676]: time="2026-01-13T23:48:11.035095439Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.016303862s" Jan 13 23:48:11.035164 containerd[1676]: time="2026-01-13T23:48:11.035127743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 13 23:48:11.035631 containerd[1676]: time="2026-01-13T23:48:11.035607528Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 13 23:48:11.497754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1315726661.mount: Deactivated successfully. 
Jan 13 23:48:12.027187 containerd[1676]: time="2026-01-13T23:48:12.027140333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:12.027849 containerd[1676]: time="2026-01-13T23:48:12.027805360Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0" Jan 13 23:48:12.028795 containerd[1676]: time="2026-01-13T23:48:12.028749788Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:12.031861 containerd[1676]: time="2026-01-13T23:48:12.031810553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:12.033140 containerd[1676]: time="2026-01-13T23:48:12.033014362Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 997.37145ms" Jan 13 23:48:12.033140 containerd[1676]: time="2026-01-13T23:48:12.033058839Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 13 23:48:12.033707 containerd[1676]: time="2026-01-13T23:48:12.033608291Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 13 23:48:12.499544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1607214449.mount: Deactivated successfully. 
Jan 13 23:48:12.505968 containerd[1676]: time="2026-01-13T23:48:12.505903175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 23:48:12.508179 containerd[1676]: time="2026-01-13T23:48:12.508045235Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 13 23:48:12.509139 containerd[1676]: time="2026-01-13T23:48:12.509104285Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 23:48:12.511575 containerd[1676]: time="2026-01-13T23:48:12.511515988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 23:48:12.512328 containerd[1676]: time="2026-01-13T23:48:12.512168074Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 478.489389ms" Jan 13 23:48:12.512328 containerd[1676]: time="2026-01-13T23:48:12.512201674Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 13 23:48:12.512600 containerd[1676]: time="2026-01-13T23:48:12.512573238Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 13 23:48:13.050132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1399464663.mount: Deactivated successfully. 
Jan 13 23:48:14.723583 containerd[1676]: time="2026-01-13T23:48:14.723490943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:14.724735 containerd[1676]: time="2026-01-13T23:48:14.724670229Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=66060366" Jan 13 23:48:14.725818 containerd[1676]: time="2026-01-13T23:48:14.725772114Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:14.729313 containerd[1676]: time="2026-01-13T23:48:14.729275611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:14.730500 containerd[1676]: time="2026-01-13T23:48:14.730332096Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.217720018s" Jan 13 23:48:14.730500 containerd[1676]: time="2026-01-13T23:48:14.730363936Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 13 23:48:18.627484 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 13 23:48:18.628875 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:48:18.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:18.791747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:48:18.794986 kernel: audit: type=1130 audit(1768348098.791:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:18.800749 (kubelet)[2423]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 13 23:48:18.834834 kubelet[2423]: E0113 23:48:18.834759 2423 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 13 23:48:18.836789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 13 23:48:18.836933 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 13 23:48:18.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:48:18.839716 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.2M memory peak. 
Jan 13 23:48:18.840998 kernel: audit: type=1131 audit(1768348098.836:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 13 23:48:19.388327 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:48:19.388483 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.2M memory peak. Jan 13 23:48:19.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:19.390522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:48:19.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:19.395008 kernel: audit: type=1130 audit(1768348099.387:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:19.395086 kernel: audit: type=1131 audit(1768348099.387:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:19.415277 systemd[1]: Reload requested from client PID 2438 ('systemctl') (unit session-10.scope)... Jan 13 23:48:19.415296 systemd[1]: Reloading... Jan 13 23:48:19.502007 zram_generator::config[2493]: No configuration found. Jan 13 23:48:19.662605 systemd[1]: Reloading finished in 247 ms. 
Jan 13 23:48:19.689000 audit: BPF prog-id=63 op=LOAD Jan 13 23:48:19.689000 audit: BPF prog-id=52 op=UNLOAD Jan 13 23:48:19.693610 kernel: audit: type=1334 audit(1768348099.689:296): prog-id=63 op=LOAD Jan 13 23:48:19.693661 kernel: audit: type=1334 audit(1768348099.689:297): prog-id=52 op=UNLOAD Jan 13 23:48:19.693678 kernel: audit: type=1334 audit(1768348099.689:298): prog-id=64 op=LOAD Jan 13 23:48:19.693695 kernel: audit: type=1334 audit(1768348099.689:299): prog-id=65 op=LOAD Jan 13 23:48:19.693718 kernel: audit: type=1334 audit(1768348099.689:300): prog-id=53 op=UNLOAD Jan 13 23:48:19.693732 kernel: audit: type=1334 audit(1768348099.689:301): prog-id=54 op=UNLOAD Jan 13 23:48:19.689000 audit: BPF prog-id=64 op=LOAD Jan 13 23:48:19.689000 audit: BPF prog-id=65 op=LOAD Jan 13 23:48:19.689000 audit: BPF prog-id=53 op=UNLOAD Jan 13 23:48:19.689000 audit: BPF prog-id=54 op=UNLOAD Jan 13 23:48:19.690000 audit: BPF prog-id=66 op=LOAD Jan 13 23:48:19.690000 audit: BPF prog-id=67 op=LOAD Jan 13 23:48:19.690000 audit: BPF prog-id=56 op=UNLOAD Jan 13 23:48:19.690000 audit: BPF prog-id=57 op=UNLOAD Jan 13 23:48:19.691000 audit: BPF prog-id=68 op=LOAD Jan 13 23:48:19.691000 audit: BPF prog-id=49 op=UNLOAD Jan 13 23:48:19.693000 audit: BPF prog-id=69 op=LOAD Jan 13 23:48:19.694000 audit: BPF prog-id=70 op=LOAD Jan 13 23:48:19.694000 audit: BPF prog-id=50 op=UNLOAD Jan 13 23:48:19.694000 audit: BPF prog-id=51 op=UNLOAD Jan 13 23:48:19.695000 audit: BPF prog-id=71 op=LOAD Jan 13 23:48:19.695000 audit: BPF prog-id=55 op=UNLOAD Jan 13 23:48:19.695000 audit: BPF prog-id=72 op=LOAD Jan 13 23:48:19.695000 audit: BPF prog-id=43 op=UNLOAD Jan 13 23:48:19.695000 audit: BPF prog-id=73 op=LOAD Jan 13 23:48:19.695000 audit: BPF prog-id=74 op=LOAD Jan 13 23:48:19.695000 audit: BPF prog-id=44 op=UNLOAD Jan 13 23:48:19.695000 audit: BPF prog-id=45 op=UNLOAD Jan 13 23:48:19.696000 audit: BPF prog-id=75 op=LOAD Jan 13 23:48:19.713000 audit: BPF prog-id=59 op=UNLOAD Jan 13 23:48:19.715000 audit: BPF prog-id=76 op=LOAD Jan 13 23:48:19.715000 audit: BPF prog-id=60 op=UNLOAD Jan 13 23:48:19.715000 audit: BPF prog-id=77 op=LOAD Jan 13 23:48:19.715000 audit: BPF prog-id=78 op=LOAD Jan 13 23:48:19.715000 audit: BPF prog-id=61 op=UNLOAD Jan 13 23:48:19.715000 audit: BPF prog-id=62 op=UNLOAD Jan 13 23:48:19.715000 audit: BPF prog-id=79 op=LOAD Jan 13 23:48:19.715000 audit: BPF prog-id=46 op=UNLOAD Jan 13 23:48:19.715000 audit: BPF prog-id=80 op=LOAD Jan 13 23:48:19.715000 audit: BPF prog-id=81 op=LOAD Jan 13 23:48:19.715000 audit: BPF prog-id=47 op=UNLOAD Jan 13 23:48:19.715000 audit: BPF prog-id=48 op=UNLOAD Jan 13 23:48:19.716000 audit: BPF prog-id=82 op=LOAD Jan 13 23:48:19.716000 audit: BPF prog-id=58 op=UNLOAD Jan 13 23:48:19.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:19.728740 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:48:19.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:19.732494 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 23:48:19.732739 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 23:48:19.732793 systemd[1]: kubelet.service: Consumed 96ms CPU time, 95.3M memory peak. Jan 13 23:48:19.734353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:48:20.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:20.648088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:48:20.665282 (kubelet)[2534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 23:48:20.862866 kubelet[2534]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 23:48:20.862866 kubelet[2534]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 13 23:48:20.862866 kubelet[2534]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 23:48:20.865217 kubelet[2534]: I0113 23:48:20.865146 2534 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 23:48:21.727556 kubelet[2534]: I0113 23:48:21.727492 2534 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 13 23:48:21.727556 kubelet[2534]: I0113 23:48:21.727530 2534 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 23:48:21.727822 kubelet[2534]: I0113 23:48:21.727792 2534 server.go:954] "Client rotation is on, will bootstrap in background" Jan 13 23:48:21.757446 kubelet[2534]: E0113 23:48:21.757397 2534 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.15.225:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.15.225:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:48:21.758384 kubelet[2534]: I0113 23:48:21.758274 2534 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 23:48:21.765351 kubelet[2534]: I0113 23:48:21.765328 2534 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 13 23:48:21.768056 kubelet[2534]: I0113 23:48:21.768033 2534 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 23:48:21.769855 kubelet[2534]: I0113 23:48:21.769791 2534 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 23:48:21.770018 kubelet[2534]: I0113 23:48:21.769842 2534 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4578-0-0-p-89582bef9b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 23:48:21.770127 kubelet[2534]: I0113 23:48:21.770109 2534 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 23:48:21.770127 kubelet[2534]: I0113 23:48:21.770119 2534 container_manager_linux.go:304] "Creating device plugin manager" Jan 13 23:48:21.770358 kubelet[2534]: I0113 23:48:21.770321 2534 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:48:21.775321 kubelet[2534]: I0113 23:48:21.775282 2534 kubelet.go:446] "Attempting to sync node with API server" Jan 13 23:48:21.775321 kubelet[2534]: I0113 23:48:21.775314 2534 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 23:48:21.775429 kubelet[2534]: I0113 23:48:21.775423 2534 kubelet.go:352] "Adding apiserver pod source" Jan 13 23:48:21.775460 kubelet[2534]: I0113 23:48:21.775437 2534 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 23:48:21.780100 kubelet[2534]: W0113 23:48:21.780036 2534 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.15.225:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.15.225:6443: connect: connection refused Jan 13 23:48:21.780173 kubelet[2534]: E0113 23:48:21.780103 2534 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.15.225:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.15.225:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:48:21.780173 kubelet[2534]: W0113 
23:48:21.780051 2534 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.15.225:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4578-0-0-p-89582bef9b&limit=500&resourceVersion=0": dial tcp 10.0.15.225:6443: connect: connection refused Jan 13 23:48:21.780173 kubelet[2534]: E0113 23:48:21.780138 2534 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.15.225:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4578-0-0-p-89582bef9b&limit=500&resourceVersion=0\": dial tcp 10.0.15.225:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:48:21.783884 kubelet[2534]: I0113 23:48:21.783861 2534 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 13 23:48:21.785991 kubelet[2534]: I0113 23:48:21.784832 2534 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 23:48:21.785991 kubelet[2534]: W0113 23:48:21.785050 2534 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 23:48:21.786427 kubelet[2534]: I0113 23:48:21.786402 2534 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 13 23:48:21.786460 kubelet[2534]: I0113 23:48:21.786449 2534 server.go:1287] "Started kubelet" Jan 13 23:48:21.786561 kubelet[2534]: I0113 23:48:21.786526 2534 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 23:48:21.786827 kubelet[2534]: I0113 23:48:21.786773 2534 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 23:48:21.787152 kubelet[2534]: I0113 23:48:21.787118 2534 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 23:48:21.788272 kubelet[2534]: I0113 23:48:21.788246 2534 server.go:479] "Adding debug handlers to kubelet server" Jan 13 23:48:21.788900 kubelet[2534]: I0113 23:48:21.788858 2534 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 23:48:21.789819 kubelet[2534]: I0113 23:48:21.789789 2534 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 23:48:21.792412 kubelet[2534]: E0113 23:48:21.792372 2534 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4578-0-0-p-89582bef9b\" not found" Jan 13 23:48:21.793019 kubelet[2534]: I0113 23:48:21.792518 2534 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 13 23:48:21.793019 kubelet[2534]: I0113 23:48:21.792706 2534 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 13 23:48:21.793019 kubelet[2534]: I0113 23:48:21.792760 2534 reconciler.go:26] "Reconciler: start to sync state" Jan 13 23:48:21.793118 kubelet[2534]: E0113 23:48:21.793062 2534 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 23:48:21.792000 audit[2547]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:21.792000 audit[2547]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe8a52f20 a2=0 a3=0 items=0 ppid=2534 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:21.792000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 13 23:48:21.793747 kubelet[2534]: W0113 23:48:21.793443 2534 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.15.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.15.225:6443: connect: connection refused Jan 13 23:48:21.793855 kubelet[2534]: E0113 23:48:21.793831 2534 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.15.225:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.15.225:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:48:21.793928 kubelet[2534]: E0113 23:48:21.793658 2534 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.15.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578-0-0-p-89582bef9b?timeout=10s\": dial tcp 10.0.15.225:6443: connect: connection refused" interval="200ms" Jan 13 23:48:21.794007 kubelet[2534]: I0113 23:48:21.793913 2534 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 23:48:21.793000 audit[2548]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:21.793000 audit[2548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1c41090 a2=0 a3=0 items=0 ppid=2534 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:21.793000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 13 23:48:21.795162 kubelet[2534]: I0113 23:48:21.794951 2534 factory.go:221] Registration of the containerd container factory successfully Jan 13 23:48:21.795162 kubelet[2534]: I0113 23:48:21.794977 2534 factory.go:221] Registration of the systemd container factory successfully Jan 13 23:48:21.795162 kubelet[2534]: E0113 23:48:21.794861 2534 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.15.225:6443/api/v1/namespaces/default/events\": dial tcp 10.0.15.225:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4578-0-0-p-89582bef9b.188a6f3abe375511 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4578-0-0-p-89582bef9b,UID:ci-4578-0-0-p-89582bef9b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4578-0-0-p-89582bef9b,},FirstTimestamp:2026-01-13 23:48:21.786424593 +0000 UTC m=+1.118069088,LastTimestamp:2026-01-13 23:48:21.786424593 +0000 UTC m=+1.118069088,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4578-0-0-p-89582bef9b,}" Jan 13 23:48:21.795000 audit[2550]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:21.795000 audit[2550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc89e1820 a2=0 a3=0 items=0 ppid=2534 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:21.795000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 13 23:48:21.797000 audit[2552]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:21.797000 audit[2552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc6ef0a90 a2=0 a3=0 items=0 ppid=2534 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:21.797000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 13 23:48:21.807424 kubelet[2534]: I0113 23:48:21.807398 2534 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 13 23:48:21.807424 kubelet[2534]: I0113 23:48:21.807416 2534 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 13 23:48:21.807548 kubelet[2534]: I0113 23:48:21.807434 2534 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:48:21.806000 audit[2555]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:21.806000 audit[2555]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe7b134b0 a2=0 a3=0 items=0 ppid=2534 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:21.806000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 13 23:48:21.808511 kubelet[2534]: I0113 23:48:21.808475 2534 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 13 23:48:21.808000 audit[2557]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:21.808000 audit[2557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe4c182f0 a2=0 a3=0 items=0 ppid=2534 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:21.808000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 13 23:48:21.810199 kubelet[2534]: I0113 23:48:21.809584 2534 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 23:48:21.810199 kubelet[2534]: I0113 23:48:21.809604 2534 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 13 23:48:21.810199 kubelet[2534]: I0113 23:48:21.809621 2534 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 13 23:48:21.810199 kubelet[2534]: I0113 23:48:21.809627 2534 kubelet.go:2382] "Starting kubelet main sync loop" Jan 13 23:48:21.810199 kubelet[2534]: E0113 23:48:21.809668 2534 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 23:48:21.809000 audit[2558]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:21.809000 audit[2558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe65f5450 a2=0 a3=0 items=0 ppid=2534 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:21.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 13 23:48:21.810683 kubelet[2534]: I0113 23:48:21.810590 2534 policy_none.go:49] "None policy: Start" Jan 13 23:48:21.810683 kubelet[2534]: I0113 23:48:21.810611 2534 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 13 23:48:21.810683 kubelet[2534]: I0113 23:48:21.810622 2534 state_mem.go:35] "Initializing new in-memory state store" Jan 13 23:48:21.810000 audit[2559]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2559 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:21.810000 audit[2559]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff7a2ddc0 a2=0 a3=0 items=0 ppid=2534 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:21.810000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 13 23:48:21.811000 audit[2560]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:21.811000 audit[2560]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe9b7d8b0 a2=0 a3=0 items=0 ppid=2534 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:21.811000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 13 23:48:21.812000 audit[2561]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2561 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:21.812000 audit[2561]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcd2e9af0 a2=0 a3=0 items=0 ppid=2534 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:21.812000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 13 23:48:21.813000 audit[2562]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2562 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:21.813000 audit[2562]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe966dd80 a2=0 a3=0 items=0 ppid=2534 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:21.813000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 13 23:48:21.814000 audit[2565]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2565 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:21.814000 audit[2565]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff6f6b3a0 a2=0 a3=0 items=0 ppid=2534 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:21.814000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 13 23:48:21.816971 kubelet[2534]: W0113 23:48:21.816825 2534 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.15.225:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.15.225:6443: connect: connection refused Jan 13 23:48:21.816971 kubelet[2534]: E0113 23:48:21.816883 2534 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.15.225:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.15.225:6443: connect: connection refused" logger="UnhandledError" Jan 13 23:48:21.819058 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 23:48:21.832375 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 23:48:21.835605 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 13 23:48:21.846522 kubelet[2534]: I0113 23:48:21.846283 2534 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 23:48:21.846522 kubelet[2534]: I0113 23:48:21.846484 2534 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 23:48:21.846522 kubelet[2534]: I0113 23:48:21.846496 2534 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 23:48:21.847032 kubelet[2534]: I0113 23:48:21.846972 2534 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 23:48:21.848359 kubelet[2534]: E0113 23:48:21.848314 2534 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 13 23:48:21.848735 kubelet[2534]: E0113 23:48:21.848712 2534 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4578-0-0-p-89582bef9b\" not found" Jan 13 23:48:21.920627 systemd[1]: Created slice kubepods-burstable-pod80239fa2e2fc7b587eb25a66418e6524.slice - libcontainer container kubepods-burstable-pod80239fa2e2fc7b587eb25a66418e6524.slice. Jan 13 23:48:21.948174 kubelet[2534]: I0113 23:48:21.948134 2534 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:21.949097 kubelet[2534]: E0113 23:48:21.949067 2534 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.15.225:6443/api/v1/nodes\": dial tcp 10.0.15.225:6443: connect: connection refused" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:21.959795 kubelet[2534]: E0113 23:48:21.959760 2534 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578-0-0-p-89582bef9b\" not found" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:21.963015 systemd[1]: Created slice kubepods-burstable-pode96fd19676b59064fa40286a2554c9dc.slice - libcontainer container kubepods-burstable-pode96fd19676b59064fa40286a2554c9dc.slice. Jan 13 23:48:21.983692 kubelet[2534]: E0113 23:48:21.983580 2534 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578-0-0-p-89582bef9b\" not found" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:21.988607 systemd[1]: Created slice kubepods-burstable-podba8cfed24e375d72e9165bcfbbfd5a95.slice - libcontainer container kubepods-burstable-podba8cfed24e375d72e9165bcfbbfd5a95.slice. 
Jan 13 23:48:21.990733 kubelet[2534]: E0113 23:48:21.990674 2534 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578-0-0-p-89582bef9b\" not found" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:21.994100 kubelet[2534]: I0113 23:48:21.994065 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ba8cfed24e375d72e9165bcfbbfd5a95-kubeconfig\") pod \"kube-scheduler-ci-4578-0-0-p-89582bef9b\" (UID: \"ba8cfed24e375d72e9165bcfbbfd5a95\") " pod="kube-system/kube-scheduler-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:21.994171 kubelet[2534]: I0113 23:48:21.994101 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80239fa2e2fc7b587eb25a66418e6524-ca-certs\") pod \"kube-apiserver-ci-4578-0-0-p-89582bef9b\" (UID: \"80239fa2e2fc7b587eb25a66418e6524\") " pod="kube-system/kube-apiserver-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:21.994171 kubelet[2534]: I0113 23:48:21.994123 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80239fa2e2fc7b587eb25a66418e6524-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4578-0-0-p-89582bef9b\" (UID: \"80239fa2e2fc7b587eb25a66418e6524\") " pod="kube-system/kube-apiserver-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:21.994171 kubelet[2534]: I0113 23:48:21.994144 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e96fd19676b59064fa40286a2554c9dc-ca-certs\") pod \"kube-controller-manager-ci-4578-0-0-p-89582bef9b\" (UID: \"e96fd19676b59064fa40286a2554c9dc\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:21.994171 kubelet[2534]: I0113 23:48:21.994161 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e96fd19676b59064fa40286a2554c9dc-flexvolume-dir\") pod \"kube-controller-manager-ci-4578-0-0-p-89582bef9b\" (UID: \"e96fd19676b59064fa40286a2554c9dc\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:21.994257 kubelet[2534]: I0113 23:48:21.994176 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80239fa2e2fc7b587eb25a66418e6524-k8s-certs\") pod \"kube-apiserver-ci-4578-0-0-p-89582bef9b\" (UID: \"80239fa2e2fc7b587eb25a66418e6524\") " pod="kube-system/kube-apiserver-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:21.994257 kubelet[2534]: I0113 23:48:21.994191 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e96fd19676b59064fa40286a2554c9dc-k8s-certs\") pod \"kube-controller-manager-ci-4578-0-0-p-89582bef9b\" (UID: \"e96fd19676b59064fa40286a2554c9dc\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:21.994257 kubelet[2534]: I0113 23:48:21.994206 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e96fd19676b59064fa40286a2554c9dc-kubeconfig\") pod \"kube-controller-manager-ci-4578-0-0-p-89582bef9b\" (UID: 
\"e96fd19676b59064fa40286a2554c9dc\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:21.994257 kubelet[2534]: I0113 23:48:21.994222 2534 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e96fd19676b59064fa40286a2554c9dc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4578-0-0-p-89582bef9b\" (UID: \"e96fd19676b59064fa40286a2554c9dc\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:21.994616 kubelet[2534]: E0113 23:48:21.994569 2534 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.15.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578-0-0-p-89582bef9b?timeout=10s\": dial tcp 10.0.15.225:6443: connect: connection refused" interval="400ms" Jan 13 23:48:22.151229 kubelet[2534]: I0113 23:48:22.151190 2534 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:22.151511 kubelet[2534]: E0113 23:48:22.151486 2534 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.15.225:6443/api/v1/nodes\": dial tcp 10.0.15.225:6443: connect: connection refused" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:22.261563 containerd[1676]: time="2026-01-13T23:48:22.261464291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4578-0-0-p-89582bef9b,Uid:80239fa2e2fc7b587eb25a66418e6524,Namespace:kube-system,Attempt:0,}" Jan 13 23:48:22.284669 containerd[1676]: time="2026-01-13T23:48:22.284621243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4578-0-0-p-89582bef9b,Uid:e96fd19676b59064fa40286a2554c9dc,Namespace:kube-system,Attempt:0,}" Jan 13 23:48:22.292536 containerd[1676]: time="2026-01-13T23:48:22.292480441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4578-0-0-p-89582bef9b,Uid:ba8cfed24e375d72e9165bcfbbfd5a95,Namespace:kube-system,Attempt:0,}" Jan 13 23:48:22.363769 containerd[1676]: time="2026-01-13T23:48:22.363694185Z" level=info msg="connecting to shim e9f555a633f4e82f9ca0486b5b15e8b77b6d4013152564717ed7c9e2e5e3e5f1" address="unix:///run/containerd/s/a07d737c4fb3a9d2ac391063e882f55c84efaa98a24e9f2f584e8d67a9b578b3" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:48:22.389154 systemd[1]: Started cri-containerd-e9f555a633f4e82f9ca0486b5b15e8b77b6d4013152564717ed7c9e2e5e3e5f1.scope - libcontainer container e9f555a633f4e82f9ca0486b5b15e8b77b6d4013152564717ed7c9e2e5e3e5f1. 
Jan 13 23:48:22.397271 kubelet[2534]: E0113 23:48:22.397167 2534 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.15.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578-0-0-p-89582bef9b?timeout=10s\": dial tcp 10.0.15.225:6443: connect: connection refused" interval="800ms" Jan 13 23:48:22.406106 containerd[1676]: time="2026-01-13T23:48:22.406046190Z" level=info msg="connecting to shim ee56e12b21dc84ab6a13076ba9891b2e3ec3ab29cb5e42fcff58aa9b67d8580f" address="unix:///run/containerd/s/2ddec7f63de07423cb81d90512e77f77c5ef399975cb698e78704cf09571c11a" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:48:22.407070 containerd[1676]: time="2026-01-13T23:48:22.407030155Z" level=info msg="connecting to shim 588361ee845bde67d8c702f874872ef58f753317e02d8ad26ce66f934e706656" address="unix:///run/containerd/s/c0fd7e154dabf4b4a53f97611200e5f1ca3eeb82f6570b984ca8e8dc409e32a9" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:48:22.407000 audit: BPF prog-id=83 op=LOAD Jan 13 23:48:22.410000 audit: BPF prog-id=84 op=LOAD Jan 13 23:48:22.410000 audit[2586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2575 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539663535356136333366346538326639636130343836623562313565 Jan 13 23:48:22.410000 audit: BPF prog-id=84 op=UNLOAD Jan 13 23:48:22.410000 audit[2586]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539663535356136333366346538326639636130343836623562313565 Jan 13 23:48:22.410000 audit: BPF prog-id=85 op=LOAD Jan 13 23:48:22.410000 audit[2586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2575 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539663535356136333366346538326639636130343836623562313565 Jan 13 23:48:22.410000 audit: BPF prog-id=86 op=LOAD Jan 13 23:48:22.410000 audit[2586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2575 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.410000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539663535356136333366346538326639636130343836623562313565 Jan 13 23:48:22.410000 audit: BPF prog-id=86 op=UNLOAD Jan 13 23:48:22.410000 audit[2586]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539663535356136333366346538326639636130343836623562313565 Jan 13 23:48:22.410000 audit: BPF prog-id=85 op=UNLOAD Jan 13 23:48:22.410000 audit[2586]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539663535356136333366346538326639636130343836623562313565 Jan 13 23:48:22.410000 audit: BPF prog-id=87 op=LOAD Jan 13 23:48:22.410000 audit[2586]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2575 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539663535356136333366346538326639636130343836623562313565 Jan 13 23:48:22.430391 systemd[1]: Started cri-containerd-588361ee845bde67d8c702f874872ef58f753317e02d8ad26ce66f934e706656.scope - libcontainer container 588361ee845bde67d8c702f874872ef58f753317e02d8ad26ce66f934e706656. Jan 13 23:48:22.431666 systemd[1]: Started cri-containerd-ee56e12b21dc84ab6a13076ba9891b2e3ec3ab29cb5e42fcff58aa9b67d8580f.scope - libcontainer container ee56e12b21dc84ab6a13076ba9891b2e3ec3ab29cb5e42fcff58aa9b67d8580f. 
Jan 13 23:48:22.441000 audit: BPF prog-id=88 op=LOAD Jan 13 23:48:22.442000 audit: BPF prog-id=89 op=LOAD Jan 13 23:48:22.442000 audit[2643]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2622 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565353665313262323164633834616236613133303736626139383931 Jan 13 23:48:22.442000 audit: BPF prog-id=89 op=UNLOAD Jan 13 23:48:22.442000 audit[2643]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2622 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.442000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565353665313262323164633834616236613133303736626139383931 Jan 13 23:48:22.442000 audit: BPF prog-id=90 op=LOAD Jan 13 23:48:22.443000 audit: BPF prog-id=91 op=LOAD Jan 13 23:48:22.443000 audit[2643]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2622 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565353665313262323164633834616236613133303736626139383931 Jan 13 23:48:22.443000 audit: BPF prog-id=92 op=LOAD Jan 13 23:48:22.443000 audit[2639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2620 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538383336316565383435626465363764386337303266383734383732 Jan 13 23:48:22.443000 audit: BPF prog-id=92 op=UNLOAD Jan 13 23:48:22.443000 audit[2639]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2620 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538383336316565383435626465363764386337303266383734383732 Jan 13 23:48:22.443000 audit: BPF prog-id=93 
op=LOAD Jan 13 23:48:22.443000 audit[2643]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2622 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565353665313262323164633834616236613133303736626139383931 Jan 13 23:48:22.443000 audit: BPF prog-id=93 op=UNLOAD Jan 13 23:48:22.443000 audit[2643]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2622 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565353665313262323164633834616236613133303736626139383931 Jan 13 23:48:22.443000 audit: BPF prog-id=91 op=UNLOAD Jan 13 23:48:22.443000 audit[2643]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2622 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565353665313262323164633834616236613133303736626139383931 Jan 13 23:48:22.443000 audit: BPF prog-id=94 op=LOAD Jan 13 23:48:22.443000 audit[2639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2620 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538383336316565383435626465363764386337303266383734383732 Jan 13 23:48:22.443000 audit: BPF prog-id=95 op=LOAD Jan 13 23:48:22.443000 audit[2639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2620 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538383336316565383435626465363764386337303266383734383732 Jan 13 23:48:22.443000 audit: BPF prog-id=95 op=UNLOAD Jan 13 23:48:22.443000 audit[2639]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2620 pid=2639 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.443000 audit: BPF prog-id=96 op=LOAD Jan 13 23:48:22.443000 audit[2643]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2622 pid=2643 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6565353665313262323164633834616236613133303736626139383931 Jan 13 23:48:22.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538383336316565383435626465363764386337303266383734383732 Jan 13 23:48:22.443000 audit: BPF prog-id=94 op=UNLOAD Jan 13 23:48:22.443000 audit[2639]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2620 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538383336316565383435626465363764386337303266383734383732 Jan 13 23:48:22.443000 audit: BPF prog-id=97 op=LOAD Jan 13 23:48:22.443000 audit[2639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2620 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.443000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538383336316565383435626465363764386337303266383734383732 Jan 13 23:48:22.447192 containerd[1676]: time="2026-01-13T23:48:22.447044948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4578-0-0-p-89582bef9b,Uid:80239fa2e2fc7b587eb25a66418e6524,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9f555a633f4e82f9ca0486b5b15e8b77b6d4013152564717ed7c9e2e5e3e5f1\"" Jan 13 23:48:22.450183 containerd[1676]: time="2026-01-13T23:48:22.450149843Z" level=info msg="CreateContainer within sandbox \"e9f555a633f4e82f9ca0486b5b15e8b77b6d4013152564717ed7c9e2e5e3e5f1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 23:48:22.469311 containerd[1676]: time="2026-01-13T23:48:22.469196775Z" level=info msg="Container 3c6e23578205fc0aeb2d513648f2f67e5cdbc25035b92243281a4c72933b4e8f: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:48:22.478513 containerd[1676]: time="2026-01-13T23:48:22.478473020Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4578-0-0-p-89582bef9b,Uid:e96fd19676b59064fa40286a2554c9dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee56e12b21dc84ab6a13076ba9891b2e3ec3ab29cb5e42fcff58aa9b67d8580f\"" Jan 13 23:48:22.481388 containerd[1676]: time="2026-01-13T23:48:22.481361034Z" level=info msg="CreateContainer within sandbox \"ee56e12b21dc84ab6a13076ba9891b2e3ec3ab29cb5e42fcff58aa9b67d8580f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 23:48:22.482466 containerd[1676]: time="2026-01-13T23:48:22.482381119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4578-0-0-p-89582bef9b,Uid:ba8cfed24e375d72e9165bcfbbfd5a95,Namespace:kube-system,Attempt:0,} returns sandbox id \"588361ee845bde67d8c702f874872ef58f753317e02d8ad26ce66f934e706656\"" Jan 13 23:48:22.484526 containerd[1676]: time="2026-01-13T23:48:22.484397289Z" level=info msg="CreateContainer within sandbox \"588361ee845bde67d8c702f874872ef58f753317e02d8ad26ce66f934e706656\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 23:48:22.487357 containerd[1676]: time="2026-01-13T23:48:22.487319903Z" level=info msg="CreateContainer within sandbox \"e9f555a633f4e82f9ca0486b5b15e8b77b6d4013152564717ed7c9e2e5e3e5f1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3c6e23578205fc0aeb2d513648f2f67e5cdbc25035b92243281a4c72933b4e8f\"" Jan 13 23:48:22.488675 containerd[1676]: time="2026-01-13T23:48:22.488512629Z" level=info msg="StartContainer for \"3c6e23578205fc0aeb2d513648f2f67e5cdbc25035b92243281a4c72933b4e8f\"" Jan 13 23:48:22.490315 containerd[1676]: time="2026-01-13T23:48:22.490282077Z" level=info msg="connecting to shim 3c6e23578205fc0aeb2d513648f2f67e5cdbc25035b92243281a4c72933b4e8f" address="unix:///run/containerd/s/a07d737c4fb3a9d2ac391063e882f55c84efaa98a24e9f2f584e8d67a9b578b3" protocol=ttrpc version=3 Jan 13 23:48:22.493745 containerd[1676]: time="2026-01-13T23:48:22.493711054Z" level=info msg="Container b41fa7d72a9e5a5adecf445fd6d12112b1b200c83a0f22da789758f626a0a51d: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:48:22.503391 containerd[1676]: time="2026-01-13T23:48:22.503352461Z" level=info msg="Container 81f6dd8edd27edae377c66c1eb0dc6d90ceb1465da3abba96d95c4d42331eb3f: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:48:22.509893 systemd[1]: Started cri-containerd-3c6e23578205fc0aeb2d513648f2f67e5cdbc25035b92243281a4c72933b4e8f.scope - libcontainer container 3c6e23578205fc0aeb2d513648f2f67e5cdbc25035b92243281a4c72933b4e8f. 
Jan 13 23:48:22.512945 containerd[1676]: time="2026-01-13T23:48:22.512065303Z" level=info msg="CreateContainer within sandbox \"ee56e12b21dc84ab6a13076ba9891b2e3ec3ab29cb5e42fcff58aa9b67d8580f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b41fa7d72a9e5a5adecf445fd6d12112b1b200c83a0f22da789758f626a0a51d\"" Jan 13 23:48:22.514277 containerd[1676]: time="2026-01-13T23:48:22.513989992Z" level=info msg="StartContainer for \"b41fa7d72a9e5a5adecf445fd6d12112b1b200c83a0f22da789758f626a0a51d\"" Jan 13 23:48:22.515101 containerd[1676]: time="2026-01-13T23:48:22.515071997Z" level=info msg="connecting to shim b41fa7d72a9e5a5adecf445fd6d12112b1b200c83a0f22da789758f626a0a51d" address="unix:///run/containerd/s/2ddec7f63de07423cb81d90512e77f77c5ef399975cb698e78704cf09571c11a" protocol=ttrpc version=3 Jan 13 23:48:22.518716 containerd[1676]: time="2026-01-13T23:48:22.518666175Z" level=info msg="CreateContainer within sandbox \"588361ee845bde67d8c702f874872ef58f753317e02d8ad26ce66f934e706656\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"81f6dd8edd27edae377c66c1eb0dc6d90ceb1465da3abba96d95c4d42331eb3f\"" Jan 13 23:48:22.519260 containerd[1676]: time="2026-01-13T23:48:22.519229617Z" level=info msg="StartContainer for \"81f6dd8edd27edae377c66c1eb0dc6d90ceb1465da3abba96d95c4d42331eb3f\"" Jan 13 23:48:22.520267 containerd[1676]: time="2026-01-13T23:48:22.520224702Z" level=info msg="connecting to shim 81f6dd8edd27edae377c66c1eb0dc6d90ceb1465da3abba96d95c4d42331eb3f" address="unix:///run/containerd/s/c0fd7e154dabf4b4a53f97611200e5f1ca3eeb82f6570b984ca8e8dc409e32a9" protocol=ttrpc version=3 Jan 13 23:48:22.522000 audit: BPF prog-id=98 op=LOAD Jan 13 23:48:22.523000 audit: BPF prog-id=99 op=LOAD Jan 13 23:48:22.523000 audit[2698]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2575 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363366532333537383230356663306165623264353133363438663266 Jan 13 23:48:22.523000 audit: BPF prog-id=99 op=UNLOAD Jan 13 23:48:22.523000 audit[2698]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363366532333537383230356663306165623264353133363438663266 Jan 13 23:48:22.523000 audit: BPF prog-id=100 op=LOAD Jan 13 23:48:22.523000 audit[2698]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2575 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.523000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363366532333537383230356663306165623264353133363438663266 Jan 13 23:48:22.523000 audit: BPF prog-id=101 op=LOAD Jan 13 23:48:22.523000 audit[2698]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2575 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363366532333537383230356663306165623264353133363438663266 Jan 13 23:48:22.523000 audit: BPF prog-id=101 op=UNLOAD Jan 13 23:48:22.523000 audit[2698]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363366532333537383230356663306165623264353133363438663266 Jan 13 23:48:22.523000 audit: BPF prog-id=100 op=UNLOAD Jan 13 23:48:22.523000 audit[2698]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2575 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363366532333537383230356663306165623264353133363438663266 Jan 13 23:48:22.523000 audit: BPF prog-id=102 op=LOAD Jan 13 23:48:22.523000 audit[2698]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2575 pid=2698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.523000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363366532333537383230356663306165623264353133363438663266 Jan 13 23:48:22.536303 systemd[1]: Started cri-containerd-b41fa7d72a9e5a5adecf445fd6d12112b1b200c83a0f22da789758f626a0a51d.scope - libcontainer container b41fa7d72a9e5a5adecf445fd6d12112b1b200c83a0f22da789758f626a0a51d. Jan 13 23:48:22.547206 systemd[1]: Started cri-containerd-81f6dd8edd27edae377c66c1eb0dc6d90ceb1465da3abba96d95c4d42331eb3f.scope - libcontainer container 81f6dd8edd27edae377c66c1eb0dc6d90ceb1465da3abba96d95c4d42331eb3f. 
Jan 13 23:48:22.553000 audit: BPF prog-id=103 op=LOAD Jan 13 23:48:22.554000 audit: BPF prog-id=104 op=LOAD Jan 13 23:48:22.554000 audit[2720]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2622 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234316661376437326139653561356164656366343435666436643132 Jan 13 23:48:22.554000 audit: BPF prog-id=104 op=UNLOAD Jan 13 23:48:22.554000 audit[2720]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2622 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.554000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234316661376437326139653561356164656366343435666436643132 Jan 13 23:48:22.555843 kubelet[2534]: I0113 23:48:22.555818 2534 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:22.556204 kubelet[2534]: E0113 23:48:22.556179 2534 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.15.225:6443/api/v1/nodes\": dial tcp 10.0.15.225:6443: connect: connection refused" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:22.555000 audit: BPF prog-id=105 op=LOAD Jan 13 23:48:22.555000 audit[2720]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2622 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234316661376437326139653561356164656366343435666436643132 Jan 13 23:48:22.555000 audit: BPF prog-id=106 op=LOAD Jan 13 23:48:22.555000 audit[2720]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2622 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234316661376437326139653561356164656366343435666436643132 Jan 13 23:48:22.555000 audit: BPF prog-id=106 op=UNLOAD Jan 13 23:48:22.555000 audit[2720]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2622 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234316661376437326139653561356164656366343435666436643132 Jan 13 23:48:22.555000 audit: BPF prog-id=105 op=UNLOAD Jan 13 23:48:22.555000 audit[2720]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2622 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234316661376437326139653561356164656366343435666436643132 Jan 13 23:48:22.555000 audit: BPF prog-id=107 op=LOAD Jan 13 23:48:22.555000 audit[2720]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2622 pid=2720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.555000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234316661376437326139653561356164656366343435666436643132 Jan 13 23:48:22.557883 containerd[1676]: time="2026-01-13T23:48:22.557748884Z" level=info msg="StartContainer for \"3c6e23578205fc0aeb2d513648f2f67e5cdbc25035b92243281a4c72933b4e8f\" returns successfully" Jan 13 23:48:22.562000 audit: BPF prog-id=108 op=LOAD Jan 13 23:48:22.563000 audit: BPF prog-id=109 op=LOAD Jan 13 23:48:22.563000 audit[2726]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2620 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831663664643865646432376564616533373763363663316562306463 Jan 13 23:48:22.563000 audit: BPF prog-id=109 op=UNLOAD Jan 13 23:48:22.563000 audit[2726]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2620 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831663664643865646432376564616533373763363663316562306463 Jan 13 23:48:22.563000 audit: BPF prog-id=110 op=LOAD Jan 13 23:48:22.563000 audit[2726]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2620 pid=2726 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831663664643865646432376564616533373763363663316562306463 Jan 13 23:48:22.563000 audit: BPF prog-id=111 op=LOAD Jan 13 23:48:22.563000 audit[2726]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2620 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831663664643865646432376564616533373763363663316562306463 Jan 13 23:48:22.563000 audit: BPF prog-id=111 op=UNLOAD Jan 13 23:48:22.563000 audit[2726]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2620 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831663664643865646432376564616533373763363663316562306463 Jan 13 23:48:22.563000 audit: BPF prog-id=110 op=UNLOAD Jan 13 23:48:22.563000 audit[2726]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2620 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831663664643865646432376564616533373763363663316562306463 Jan 13 23:48:22.563000 audit: BPF prog-id=112 op=LOAD Jan 13 23:48:22.563000 audit[2726]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2620 pid=2726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:22.563000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3831663664643865646432376564616533373763363663316562306463 Jan 13 23:48:22.591062 containerd[1676]: time="2026-01-13T23:48:22.590885404Z" level=info msg="StartContainer for \"b41fa7d72a9e5a5adecf445fd6d12112b1b200c83a0f22da789758f626a0a51d\" returns successfully" Jan 13 23:48:22.600882 containerd[1676]: time="2026-01-13T23:48:22.600839772Z" level=info msg="StartContainer for 
\"81f6dd8edd27edae377c66c1eb0dc6d90ceb1465da3abba96d95c4d42331eb3f\" returns successfully" Jan 13 23:48:22.821029 kubelet[2534]: E0113 23:48:22.820776 2534 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578-0-0-p-89582bef9b\" not found" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:22.824925 kubelet[2534]: E0113 23:48:22.824752 2534 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578-0-0-p-89582bef9b\" not found" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:22.828378 kubelet[2534]: E0113 23:48:22.828353 2534 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578-0-0-p-89582bef9b\" not found" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:23.360971 kubelet[2534]: I0113 23:48:23.358099 2534 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:23.831387 kubelet[2534]: E0113 23:48:23.831290 2534 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578-0-0-p-89582bef9b\" not found" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:23.835978 kubelet[2534]: E0113 23:48:23.835213 2534 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4578-0-0-p-89582bef9b\" not found" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:24.173131 kubelet[2534]: E0113 23:48:24.173016 2534 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4578-0-0-p-89582bef9b\" not found" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:24.292270 kubelet[2534]: I0113 23:48:24.292225 2534 kubelet_node_status.go:78] "Successfully registered node" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:24.292270 kubelet[2534]: E0113 23:48:24.292276 2534 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4578-0-0-p-89582bef9b\": node \"ci-4578-0-0-p-89582bef9b\" not found" Jan 13 23:48:24.394525 kubelet[2534]: I0113 23:48:24.394016 2534 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:24.403130 kubelet[2534]: E0113 23:48:24.403084 2534 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4578-0-0-p-89582bef9b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:24.403130 kubelet[2534]: I0113 23:48:24.403118 2534 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:24.405753 kubelet[2534]: E0113 23:48:24.405606 2534 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4578-0-0-p-89582bef9b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:24.405753 kubelet[2534]: I0113 23:48:24.405639 2534 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:24.407212 kubelet[2534]: E0113 23:48:24.407167 2534 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4578-0-0-p-89582bef9b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4578-0-0-p-89582bef9b" Jan 
13 23:48:24.781495 kubelet[2534]: I0113 23:48:24.781458 2534 apiserver.go:52] "Watching apiserver" Jan 13 23:48:24.793537 kubelet[2534]: I0113 23:48:24.793497 2534 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 13 23:48:24.831621 kubelet[2534]: I0113 23:48:24.831584 2534 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:24.833673 kubelet[2534]: E0113 23:48:24.833644 2534 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4578-0-0-p-89582bef9b\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:25.133508 kubelet[2534]: I0113 23:48:25.132876 2534 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:26.453828 systemd[1]: Reload requested from client PID 2806 ('systemctl') (unit session-10.scope)... Jan 13 23:48:26.453846 systemd[1]: Reloading... Jan 13 23:48:26.525983 zram_generator::config[2852]: No configuration found. Jan 13 23:48:26.712503 systemd[1]: Reloading finished in 258 ms. Jan 13 23:48:26.742786 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 23:48:26.753133 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 23:48:26.753439 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:48:26.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:26.753516 systemd[1]: kubelet.service: Consumed 1.073s CPU time, 128.5M memory peak. Jan 13 23:48:26.754153 kernel: kauditd_printk_skb: 205 callbacks suppressed Jan 13 23:48:26.754210 kernel: audit: type=1131 audit(1768348106.752:399): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:26.755205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 13 23:48:26.755000 audit: BPF prog-id=113 op=LOAD Jan 13 23:48:26.757240 kernel: audit: type=1334 audit(1768348106.755:400): prog-id=113 op=LOAD Jan 13 23:48:26.757288 kernel: audit: type=1334 audit(1768348106.755:401): prog-id=114 op=LOAD Jan 13 23:48:26.755000 audit: BPF prog-id=114 op=LOAD Jan 13 23:48:26.758014 kernel: audit: type=1334 audit(1768348106.755:402): prog-id=66 op=UNLOAD Jan 13 23:48:26.755000 audit: BPF prog-id=66 op=UNLOAD Jan 13 23:48:26.755000 audit: BPF prog-id=67 op=UNLOAD Jan 13 23:48:26.759561 kernel: audit: type=1334 audit(1768348106.755:403): prog-id=67 op=UNLOAD Jan 13 23:48:26.759592 kernel: audit: type=1334 audit(1768348106.756:404): prog-id=115 op=LOAD Jan 13 23:48:26.759613 kernel: audit: type=1334 audit(1768348106.756:405): prog-id=68 op=UNLOAD Jan 13 23:48:26.759631 kernel: audit: type=1334 audit(1768348106.757:406): prog-id=116 op=LOAD Jan 13 23:48:26.759647 kernel: audit: type=1334 audit(1768348106.758:407): prog-id=117 op=LOAD Jan 13 23:48:26.759663 kernel: audit: type=1334 audit(1768348106.758:408): prog-id=69 op=UNLOAD Jan 13 23:48:26.756000 audit: BPF prog-id=115 op=LOAD Jan 13 23:48:26.756000 audit: BPF prog-id=68 op=UNLOAD Jan 13 23:48:26.757000 audit: BPF prog-id=116 op=LOAD Jan 13 23:48:26.758000 audit: BPF prog-id=117 op=LOAD Jan 13 23:48:26.758000 audit: BPF prog-id=69 op=UNLOAD Jan 13 23:48:26.758000 audit: BPF prog-id=70 op=UNLOAD Jan 13 23:48:26.759000 audit: BPF prog-id=118 op=LOAD Jan 13 23:48:26.759000 audit: BPF prog-id=63 op=UNLOAD Jan 13 23:48:26.760000 audit: BPF prog-id=119 op=LOAD Jan 13 23:48:26.761000 audit: BPF prog-id=120 op=LOAD Jan 13 23:48:26.761000 audit: BPF prog-id=64 op=UNLOAD Jan 13 23:48:26.761000 audit: BPF prog-id=65 op=UNLOAD Jan 13 23:48:26.762000 audit: BPF prog-id=121 op=LOAD Jan 13 23:48:26.762000 audit: BPF prog-id=75 op=UNLOAD Jan 13 23:48:26.763000 audit: BPF prog-id=122 op=LOAD Jan 13 23:48:26.781000 audit: BPF prog-id=82 op=UNLOAD Jan 13 23:48:26.782000 audit: BPF prog-id=123 op=LOAD Jan 13 23:48:26.782000 audit: BPF prog-id=71 op=UNLOAD Jan 13 23:48:26.783000 audit: BPF prog-id=124 op=LOAD Jan 13 23:48:26.783000 audit: BPF prog-id=72 op=UNLOAD Jan 13 23:48:26.783000 audit: BPF prog-id=125 op=LOAD Jan 13 23:48:26.783000 audit: BPF prog-id=126 op=LOAD Jan 13 23:48:26.783000 audit: BPF prog-id=73 op=UNLOAD Jan 13 23:48:26.783000 audit: BPF prog-id=74 op=UNLOAD Jan 13 23:48:26.783000 audit: BPF prog-id=127 op=LOAD Jan 13 23:48:26.783000 audit: BPF prog-id=79 op=UNLOAD Jan 13 23:48:26.783000 audit: BPF prog-id=128 op=LOAD Jan 13 23:48:26.783000 audit: BPF prog-id=129 op=LOAD Jan 13 23:48:26.783000 audit: BPF prog-id=80 op=UNLOAD Jan 13 23:48:26.783000 audit: BPF prog-id=81 op=UNLOAD Jan 13 23:48:26.785000 audit: BPF prog-id=130 op=LOAD Jan 13 23:48:26.785000 audit: BPF prog-id=76 op=UNLOAD Jan 13 23:48:26.785000 audit: BPF prog-id=131 op=LOAD Jan 13 23:48:26.785000 audit: BPF prog-id=132 op=LOAD Jan 13 23:48:26.785000 audit: BPF prog-id=77 op=UNLOAD Jan 13 23:48:26.785000 audit: BPF prog-id=78 op=UNLOAD Jan 13 23:48:27.151106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 23:48:27.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:48:27.168293 (kubelet)[2897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 23:48:27.201267 kubelet[2897]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 23:48:27.201267 kubelet[2897]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 13 23:48:27.201267 kubelet[2897]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 23:48:27.201601 kubelet[2897]: I0113 23:48:27.201323 2897 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 23:48:27.208361 kubelet[2897]: I0113 23:48:27.208319 2897 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 13 23:48:27.209993 kubelet[2897]: I0113 23:48:27.208482 2897 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 23:48:27.209993 kubelet[2897]: I0113 23:48:27.208713 2897 server.go:954] "Client rotation is on, will bootstrap in background" Jan 13 23:48:27.210197 kubelet[2897]: I0113 23:48:27.210179 2897 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 23:48:27.212561 kubelet[2897]: I0113 23:48:27.212528 2897 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 23:48:27.216633 kubelet[2897]: I0113 23:48:27.216609 2897 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 13 23:48:27.219557 kubelet[2897]: I0113 23:48:27.219519 2897 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 23:48:27.219749 kubelet[2897]: I0113 23:48:27.219715 2897 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 23:48:27.220012 kubelet[2897]: I0113 23:48:27.219743 2897 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4578-0-0-p-89582bef9b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 13 23:48:27.220099 kubelet[2897]: I0113 23:48:27.220024 2897 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 23:48:27.220099 kubelet[2897]: I0113 23:48:27.220034 2897 container_manager_linux.go:304] "Creating device plugin manager" Jan 13 23:48:27.220148 kubelet[2897]: I0113 23:48:27.220112 2897 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:48:27.220568 kubelet[2897]: I0113 23:48:27.220551 2897 kubelet.go:446] "Attempting to sync node with API server" Jan 13 23:48:27.221130 kubelet[2897]: I0113 23:48:27.221086 2897 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 23:48:27.221252 kubelet[2897]: I0113 23:48:27.221240 2897 kubelet.go:352] "Adding apiserver pod source" Jan 13 23:48:27.221308 kubelet[2897]: I0113 23:48:27.221300 2897 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 23:48:27.222139 kubelet[2897]: I0113 23:48:27.222119 2897 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 13 23:48:27.223569 kubelet[2897]: I0113 23:48:27.223541 2897 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 23:48:27.224317 kubelet[2897]: I0113 23:48:27.224295 2897 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 13 23:48:27.224364 kubelet[2897]: I0113 23:48:27.224332 2897 server.go:1287] "Started kubelet" Jan 13 23:48:27.225969 kubelet[2897]: I0113 23:48:27.225006 2897 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 23:48:27.225969 kubelet[2897]: I0113 
23:48:27.225642 2897 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 23:48:27.225969 kubelet[2897]: I0113 23:48:27.225794 2897 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 23:48:27.227077 kubelet[2897]: E0113 23:48:27.227052 2897 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 23:48:27.227207 kubelet[2897]: I0113 23:48:27.227191 2897 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 13 23:48:27.227237 kubelet[2897]: I0113 23:48:27.227206 2897 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 23:48:27.227442 kubelet[2897]: I0113 23:48:27.227416 2897 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 13 23:48:27.228104 kubelet[2897]: I0113 23:48:27.228079 2897 server.go:479] "Adding debug handlers to kubelet server" Jan 13 23:48:27.230605 kubelet[2897]: I0113 23:48:27.230579 2897 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 13 23:48:27.230734 kubelet[2897]: I0113 23:48:27.230715 2897 reconciler.go:26] "Reconciler: start to sync state" Jan 13 23:48:27.231829 kubelet[2897]: E0113 23:48:27.231810 2897 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4578-0-0-p-89582bef9b\" not found" Jan 13 23:48:27.244010 kubelet[2897]: I0113 23:48:27.243550 2897 factory.go:221] Registration of the systemd container factory successfully Jan 13 23:48:27.244010 kubelet[2897]: I0113 23:48:27.243673 2897 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 23:48:27.244794 kubelet[2897]: I0113 23:48:27.244756 2897 factory.go:221] Registration of the containerd container factory successfully Jan 13 23:48:27.254853 kubelet[2897]: I0113 23:48:27.254799 2897 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 23:48:27.256679 kubelet[2897]: I0113 23:48:27.256646 2897 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 23:48:27.256679 kubelet[2897]: I0113 23:48:27.256672 2897 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 13 23:48:27.256778 kubelet[2897]: I0113 23:48:27.256687 2897 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 13 23:48:27.256778 kubelet[2897]: I0113 23:48:27.256694 2897 kubelet.go:2382] "Starting kubelet main sync loop" Jan 13 23:48:27.256778 kubelet[2897]: E0113 23:48:27.256742 2897 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 23:48:27.286391 kubelet[2897]: I0113 23:48:27.286361 2897 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 13 23:48:27.286391 kubelet[2897]: I0113 23:48:27.286386 2897 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 13 23:48:27.286534 kubelet[2897]: I0113 23:48:27.286410 2897 state_mem.go:36] "Initialized new in-memory state store" Jan 13 23:48:27.286586 kubelet[2897]: I0113 23:48:27.286568 2897 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 23:48:27.286611 kubelet[2897]: I0113 23:48:27.286585 2897 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 23:48:27.286611 kubelet[2897]: I0113 23:48:27.286611 2897 policy_none.go:49] "None policy: Start" Jan 13 23:48:27.286679 kubelet[2897]: I0113 23:48:27.286620 2897 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 13 23:48:27.286679 kubelet[2897]: I0113 23:48:27.286629 2897 state_mem.go:35] "Initializing new in-memory state store" Jan 13 23:48:27.286729 kubelet[2897]: I0113 23:48:27.286723 2897 state_mem.go:75] "Updated machine memory state" Jan 13 23:48:27.291113 kubelet[2897]: I0113 23:48:27.290973 2897 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 23:48:27.291217 kubelet[2897]: I0113 23:48:27.291194 2897 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 13 23:48:27.291245 kubelet[2897]: I0113 23:48:27.291217 2897 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 23:48:27.291451 kubelet[2897]: I0113 23:48:27.291424 2897 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 23:48:27.292102 kubelet[2897]: E0113 23:48:27.292086 2897 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 13 23:48:27.357924 kubelet[2897]: I0113 23:48:27.357888 2897 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:27.357924 kubelet[2897]: I0113 23:48:27.357915 2897 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:27.358226 kubelet[2897]: I0113 23:48:27.357947 2897 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:27.367005 kubelet[2897]: E0113 23:48:27.366903 2897 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4578-0-0-p-89582bef9b\" already exists" pod="kube-system/kube-apiserver-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:27.394526 kubelet[2897]: I0113 23:48:27.394480 2897 kubelet_node_status.go:75] "Attempting to register node" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:27.403405 kubelet[2897]: I0113 23:48:27.403300 2897 kubelet_node_status.go:124] "Node was previously registered" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:27.403405 kubelet[2897]: I0113 23:48:27.403384 2897 kubelet_node_status.go:78] "Successfully registered node" node="ci-4578-0-0-p-89582bef9b" Jan 13 23:48:27.431541 kubelet[2897]: I0113 23:48:27.431452 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80239fa2e2fc7b587eb25a66418e6524-ca-certs\") pod \"kube-apiserver-ci-4578-0-0-p-89582bef9b\" (UID: \"80239fa2e2fc7b587eb25a66418e6524\") " pod="kube-system/kube-apiserver-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:27.431752 kubelet[2897]: I0113 23:48:27.431588 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80239fa2e2fc7b587eb25a66418e6524-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4578-0-0-p-89582bef9b\" (UID: \"80239fa2e2fc7b587eb25a66418e6524\") " pod="kube-system/kube-apiserver-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:27.431752 kubelet[2897]: I0113 23:48:27.431624 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e96fd19676b59064fa40286a2554c9dc-k8s-certs\") pod \"kube-controller-manager-ci-4578-0-0-p-89582bef9b\" (UID: \"e96fd19676b59064fa40286a2554c9dc\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:27.431752 kubelet[2897]: I0113 23:48:27.431642 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e96fd19676b59064fa40286a2554c9dc-kubeconfig\") pod \"kube-controller-manager-ci-4578-0-0-p-89582bef9b\" (UID: \"e96fd19676b59064fa40286a2554c9dc\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:27.431922 kubelet[2897]: I0113 23:48:27.431881 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80239fa2e2fc7b587eb25a66418e6524-k8s-certs\") pod \"kube-apiserver-ci-4578-0-0-p-89582bef9b\" (UID: \"80239fa2e2fc7b587eb25a66418e6524\") " pod="kube-system/kube-apiserver-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:27.431922 kubelet[2897]: I0113 23:48:27.431904 2897 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e96fd19676b59064fa40286a2554c9dc-ca-certs\") pod \"kube-controller-manager-ci-4578-0-0-p-89582bef9b\" (UID: \"e96fd19676b59064fa40286a2554c9dc\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:27.432059 kubelet[2897]: I0113 23:48:27.432047 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e96fd19676b59064fa40286a2554c9dc-flexvolume-dir\") pod \"kube-controller-manager-ci-4578-0-0-p-89582bef9b\" (UID: \"e96fd19676b59064fa40286a2554c9dc\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:27.432181 kubelet[2897]: I0113 23:48:27.432161 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e96fd19676b59064fa40286a2554c9dc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4578-0-0-p-89582bef9b\" (UID: \"e96fd19676b59064fa40286a2554c9dc\") " pod="kube-system/kube-controller-manager-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:27.432293 kubelet[2897]: I0113 23:48:27.432266 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ba8cfed24e375d72e9165bcfbbfd5a95-kubeconfig\") pod \"kube-scheduler-ci-4578-0-0-p-89582bef9b\" (UID: \"ba8cfed24e375d72e9165bcfbbfd5a95\") " pod="kube-system/kube-scheduler-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:28.222331 kubelet[2897]: I0113 23:48:28.222225 2897 apiserver.go:52] "Watching apiserver" Jan 13 23:48:28.231550 kubelet[2897]: I0113 23:48:28.231503 2897 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 13 23:48:28.272261 kubelet[2897]: I0113 23:48:28.272038 2897 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:28.278093 kubelet[2897]: E0113 23:48:28.278060 2897 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4578-0-0-p-89582bef9b\" already exists" pod="kube-system/kube-apiserver-ci-4578-0-0-p-89582bef9b" Jan 13 23:48:28.289785 kubelet[2897]: I0113 23:48:28.289654 2897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4578-0-0-p-89582bef9b" podStartSLOduration=3.289639957 podStartE2EDuration="3.289639957s" podCreationTimestamp="2026-01-13 23:48:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:48:28.289626717 +0000 UTC m=+1.117968929" watchObservedRunningTime="2026-01-13 23:48:28.289639957 +0000 UTC m=+1.117982169" Jan 13 23:48:28.307363 kubelet[2897]: I0113 23:48:28.307230 2897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4578-0-0-p-89582bef9b" podStartSLOduration=1.307213802 podStartE2EDuration="1.307213802s" podCreationTimestamp="2026-01-13 23:48:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:48:28.298259039 +0000 UTC m=+1.126601251" watchObservedRunningTime="2026-01-13 23:48:28.307213802 +0000 UTC m=+1.135556014" Jan 13 23:48:28.307363 kubelet[2897]: I0113 23:48:28.307329 2897 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4578-0-0-p-89582bef9b" podStartSLOduration=1.3073245629999999 podStartE2EDuration="1.307324563s" podCreationTimestamp="2026-01-13 23:48:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:48:28.307175762 +0000 UTC m=+1.135517974" watchObservedRunningTime="2026-01-13 23:48:28.307324563 +0000 UTC m=+1.135666735" Jan 13 23:48:29.509638 update_engine[1659]: I20260113 23:48:29.509559 1659 update_attempter.cc:509] Updating boot flags... Jan 13 23:48:32.976658 kubelet[2897]: I0113 23:48:32.976509 2897 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 23:48:32.977023 containerd[1676]: time="2026-01-13T23:48:32.976786415Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 23:48:32.977223 kubelet[2897]: I0113 23:48:32.977037 2897 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 23:48:33.707127 systemd[1]: Created slice kubepods-besteffort-podd304c773_7d17_4303_a714_ff21983c5010.slice - libcontainer container kubepods-besteffort-podd304c773_7d17_4303_a714_ff21983c5010.slice. Jan 13 23:48:33.770363 kubelet[2897]: I0113 23:48:33.770311 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d304c773-7d17-4303-a714-ff21983c5010-xtables-lock\") pod \"kube-proxy-jmnp6\" (UID: \"d304c773-7d17-4303-a714-ff21983c5010\") " pod="kube-system/kube-proxy-jmnp6" Jan 13 23:48:33.770363 kubelet[2897]: I0113 23:48:33.770360 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9242h\" (UniqueName: \"kubernetes.io/projected/d304c773-7d17-4303-a714-ff21983c5010-kube-api-access-9242h\") pod \"kube-proxy-jmnp6\" (UID: \"d304c773-7d17-4303-a714-ff21983c5010\") " pod="kube-system/kube-proxy-jmnp6" Jan 13 23:48:33.770638 kubelet[2897]: I0113 23:48:33.770385 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d304c773-7d17-4303-a714-ff21983c5010-kube-proxy\") pod \"kube-proxy-jmnp6\" (UID: \"d304c773-7d17-4303-a714-ff21983c5010\") " pod="kube-system/kube-proxy-jmnp6" Jan 13 23:48:33.770638 kubelet[2897]: I0113 23:48:33.770403 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d304c773-7d17-4303-a714-ff21983c5010-lib-modules\") pod \"kube-proxy-jmnp6\" (UID: \"d304c773-7d17-4303-a714-ff21983c5010\") " pod="kube-system/kube-proxy-jmnp6" Jan 13 23:48:33.878859 kubelet[2897]: E0113 23:48:33.878819 2897 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 13 23:48:33.878859 kubelet[2897]: E0113 23:48:33.878853 2897 projected.go:194] Error preparing data for projected volume kube-api-access-9242h for pod kube-system/kube-proxy-jmnp6: configmap "kube-root-ca.crt" not found Jan 13 23:48:33.878859 kubelet[2897]: E0113 23:48:33.878917 2897 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d304c773-7d17-4303-a714-ff21983c5010-kube-api-access-9242h podName:d304c773-7d17-4303-a714-ff21983c5010 nodeName:}" failed. 
No retries permitted until 2026-01-13 23:48:34.378896834 +0000 UTC m=+7.207239046 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9242h" (UniqueName: "kubernetes.io/projected/d304c773-7d17-4303-a714-ff21983c5010-kube-api-access-9242h") pod "kube-proxy-jmnp6" (UID: "d304c773-7d17-4303-a714-ff21983c5010") : configmap "kube-root-ca.crt" not found Jan 13 23:48:34.078071 kubelet[2897]: W0113 23:48:34.077366 2897 reflector.go:569] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4578-0-0-p-89582bef9b" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4578-0-0-p-89582bef9b' and this object Jan 13 23:48:34.078071 kubelet[2897]: E0113 23:48:34.077423 2897 reflector.go:166] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4578-0-0-p-89582bef9b\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4578-0-0-p-89582bef9b' and this object" logger="UnhandledError" Jan 13 23:48:34.084619 systemd[1]: Created slice kubepods-besteffort-pod4538bee8_9489_499a_92b6_7e414f317155.slice - libcontainer container kubepods-besteffort-pod4538bee8_9489_499a_92b6_7e414f317155.slice. Jan 13 23:48:34.173326 kubelet[2897]: I0113 23:48:34.173243 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8rrx\" (UniqueName: \"kubernetes.io/projected/4538bee8-9489-499a-92b6-7e414f317155-kube-api-access-n8rrx\") pod \"tigera-operator-7dcd859c48-8b2lc\" (UID: \"4538bee8-9489-499a-92b6-7e414f317155\") " pod="tigera-operator/tigera-operator-7dcd859c48-8b2lc" Jan 13 23:48:34.173326 kubelet[2897]: I0113 23:48:34.173296 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4538bee8-9489-499a-92b6-7e414f317155-var-lib-calico\") pod \"tigera-operator-7dcd859c48-8b2lc\" (UID: \"4538bee8-9489-499a-92b6-7e414f317155\") " pod="tigera-operator/tigera-operator-7dcd859c48-8b2lc" Jan 13 23:48:34.389609 containerd[1676]: time="2026-01-13T23:48:34.389479946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8b2lc,Uid:4538bee8-9489-499a-92b6-7e414f317155,Namespace:tigera-operator,Attempt:0,}" Jan 13 23:48:34.409158 containerd[1676]: time="2026-01-13T23:48:34.409112921Z" level=info msg="connecting to shim ac861483433fed9089498c1fa1cba19b090a11e48dd0d180be89fcd61beb8be4" address="unix:///run/containerd/s/9c6468b24e89e1d26cb51786e3f4eea1a7897a900d52ae0123ad134344d813dc" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:48:34.437417 systemd[1]: Started cri-containerd-ac861483433fed9089498c1fa1cba19b090a11e48dd0d180be89fcd61beb8be4.scope - libcontainer container ac861483433fed9089498c1fa1cba19b090a11e48dd0d180be89fcd61beb8be4. 
Jan 13 23:48:34.445000 audit: BPF prog-id=133 op=LOAD Jan 13 23:48:34.447470 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 13 23:48:34.447531 kernel: audit: type=1334 audit(1768348114.445:441): prog-id=133 op=LOAD Jan 13 23:48:34.446000 audit: BPF prog-id=134 op=LOAD Jan 13 23:48:34.448982 kernel: audit: type=1334 audit(1768348114.446:442): prog-id=134 op=LOAD Jan 13 23:48:34.449046 kernel: audit: type=1300 audit(1768348114.446:442): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001f0180 a2=98 a3=0 items=0 ppid=2970 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.446000 audit[2980]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001f0180 a2=98 a3=0 items=0 ppid=2970 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383631343833343333666564393038393439386331666131636261 Jan 13 23:48:34.455287 kernel: audit: type=1327 audit(1768348114.446:442): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383631343833343333666564393038393439386331666131636261 Jan 13 23:48:34.455329 kernel: audit: type=1334 audit(1768348114.447:443): prog-id=134 op=UNLOAD Jan 13 23:48:34.447000 audit: BPF prog-id=134 op=UNLOAD Jan 13 23:48:34.447000 audit[2980]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2970 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.458937 kernel: audit: type=1300 audit(1768348114.447:443): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2970 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.459071 kernel: audit: type=1327 audit(1768348114.447:443): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383631343833343333666564393038393439386331666131636261 Jan 13 23:48:34.447000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383631343833343333666564393038393439386331666131636261 Jan 13 23:48:34.447000 audit: BPF prog-id=135 op=LOAD Jan 13 23:48:34.462381 kernel: audit: type=1334 audit(1768348114.447:444): prog-id=135 op=LOAD Jan 13 23:48:34.462425 kernel: audit: type=1300 audit(1768348114.447:444): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001f03e8 a2=98 a3=0 items=0 ppid=2970 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.447000 audit[2980]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001f03e8 a2=98 a3=0 items=0 ppid=2970 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.447000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383631343833343333666564393038393439386331666131636261 Jan 13 23:48:34.467799 kernel: audit: type=1327 audit(1768348114.447:444): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383631343833343333666564393038393439386331666131636261 Jan 13 23:48:34.448000 audit: BPF prog-id=136 op=LOAD Jan 13 23:48:34.448000 audit[2980]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001f0168 a2=98 a3=0 items=0 ppid=2970 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383631343833343333666564393038393439386331666131636261 Jan 13 23:48:34.451000 audit: BPF prog-id=136 op=UNLOAD Jan 13 23:48:34.451000 audit[2980]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2970 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383631343833343333666564393038393439386331666131636261 Jan 13 23:48:34.451000 audit: BPF prog-id=135 op=UNLOAD Jan 13 23:48:34.451000 audit[2980]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2970 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383631343833343333666564393038393439386331666131636261 Jan 13 23:48:34.451000 audit: BPF prog-id=137 op=LOAD Jan 13 23:48:34.451000 audit[2980]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001f0648 a2=98 a3=0 items=0 ppid=2970 pid=2980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.451000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6163383631343833343333666564393038393439386331666131636261 Jan 13 23:48:34.488358 containerd[1676]: time="2026-01-13T23:48:34.488303584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-8b2lc,Uid:4538bee8-9489-499a-92b6-7e414f317155,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ac861483433fed9089498c1fa1cba19b090a11e48dd0d180be89fcd61beb8be4\"" Jan 13 23:48:34.490271 containerd[1676]: time="2026-01-13T23:48:34.490227234Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 13 23:48:34.623852 containerd[1676]: time="2026-01-13T23:48:34.623814520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jmnp6,Uid:d304c773-7d17-4303-a714-ff21983c5010,Namespace:kube-system,Attempt:0,}" Jan 13 23:48:34.643874 containerd[1676]: time="2026-01-13T23:48:34.643763817Z" level=info msg="connecting to shim a0810a3850c60bd477d485d1f433f6f8dd35bb322a85b452aa32a754ee83542a" address="unix:///run/containerd/s/fdea3ee465ce5e424a59971b9b23e1c54fd006f4b94bfb647b9a47173c46b28d" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:48:34.674474 systemd[1]: Started cri-containerd-a0810a3850c60bd477d485d1f433f6f8dd35bb322a85b452aa32a754ee83542a.scope - libcontainer container a0810a3850c60bd477d485d1f433f6f8dd35bb322a85b452aa32a754ee83542a. Jan 13 23:48:34.682000 audit: BPF prog-id=138 op=LOAD Jan 13 23:48:34.682000 audit: BPF prog-id=139 op=LOAD Jan 13 23:48:34.682000 audit[3026]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3015 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383130613338353063363062643437376434383564316634333366 Jan 13 23:48:34.682000 audit: BPF prog-id=139 op=UNLOAD Jan 13 23:48:34.682000 audit[3026]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3015 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383130613338353063363062643437376434383564316634333366 Jan 13 23:48:34.682000 audit: BPF prog-id=140 op=LOAD Jan 13 23:48:34.682000 audit[3026]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3015 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.682000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383130613338353063363062643437376434383564316634333366 Jan 13 23:48:34.682000 audit: BPF prog-id=141 op=LOAD Jan 13 23:48:34.682000 audit[3026]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3015 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383130613338353063363062643437376434383564316634333366 Jan 13 23:48:34.682000 audit: BPF prog-id=141 op=UNLOAD Jan 13 23:48:34.682000 audit[3026]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3015 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383130613338353063363062643437376434383564316634333366 Jan 13 23:48:34.682000 audit: BPF prog-id=140 op=UNLOAD Jan 13 23:48:34.682000 audit[3026]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3015 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383130613338353063363062643437376434383564316634333366 Jan 13 23:48:34.682000 audit: BPF prog-id=142 op=LOAD Jan 13 23:48:34.682000 audit[3026]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3015 pid=3026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.682000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383130613338353063363062643437376434383564316634333366 Jan 13 23:48:34.697925 containerd[1676]: time="2026-01-13T23:48:34.697889039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jmnp6,Uid:d304c773-7d17-4303-a714-ff21983c5010,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0810a3850c60bd477d485d1f433f6f8dd35bb322a85b452aa32a754ee83542a\"" Jan 13 23:48:34.700685 containerd[1676]: time="2026-01-13T23:48:34.700649732Z" level=info msg="CreateContainer within sandbox \"a0810a3850c60bd477d485d1f433f6f8dd35bb322a85b452aa32a754ee83542a\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 23:48:34.709282 containerd[1676]: time="2026-01-13T23:48:34.709194614Z" level=info msg="Container 216f0ba20b3a758b63edf0d2eeda3f4b585987939bf9d8764f881f97a9ba5ffd: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:48:34.717437 containerd[1676]: time="2026-01-13T23:48:34.717378933Z" level=info msg="CreateContainer within sandbox \"a0810a3850c60bd477d485d1f433f6f8dd35bb322a85b452aa32a754ee83542a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"216f0ba20b3a758b63edf0d2eeda3f4b585987939bf9d8764f881f97a9ba5ffd\"" Jan 13 23:48:34.718260 containerd[1676]: time="2026-01-13T23:48:34.717976936Z" level=info msg="StartContainer for \"216f0ba20b3a758b63edf0d2eeda3f4b585987939bf9d8764f881f97a9ba5ffd\"" Jan 13 23:48:34.721731 containerd[1676]: time="2026-01-13T23:48:34.721668034Z" level=info msg="connecting to shim 216f0ba20b3a758b63edf0d2eeda3f4b585987939bf9d8764f881f97a9ba5ffd" address="unix:///run/containerd/s/fdea3ee465ce5e424a59971b9b23e1c54fd006f4b94bfb647b9a47173c46b28d" protocol=ttrpc version=3 Jan 13 23:48:34.742229 systemd[1]: Started cri-containerd-216f0ba20b3a758b63edf0d2eeda3f4b585987939bf9d8764f881f97a9ba5ffd.scope - libcontainer container 216f0ba20b3a758b63edf0d2eeda3f4b585987939bf9d8764f881f97a9ba5ffd. Jan 13 23:48:34.794000 audit: BPF prog-id=143 op=LOAD Jan 13 23:48:34.794000 audit[3051]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=3015 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231366630626132306233613735386236336564663064326565646133 Jan 13 23:48:34.794000 audit: BPF prog-id=144 op=LOAD Jan 13 23:48:34.794000 audit[3051]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=3015 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231366630626132306233613735386236336564663064326565646133 Jan 13 23:48:34.794000 audit: BPF prog-id=144 op=UNLOAD Jan 13 23:48:34.794000 audit[3051]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3015 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231366630626132306233613735386236336564663064326565646133 Jan 13 23:48:34.794000 audit: BPF prog-id=143 op=UNLOAD Jan 13 23:48:34.794000 audit[3051]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3015 pid=3051 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231366630626132306233613735386236336564663064326565646133 Jan 13 23:48:34.794000 audit: BPF prog-id=145 op=LOAD Jan 13 23:48:34.794000 audit[3051]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=3015 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3231366630626132306233613735386236336564663064326565646133 Jan 13 23:48:34.817794 containerd[1676]: time="2026-01-13T23:48:34.817757579Z" level=info msg="StartContainer for \"216f0ba20b3a758b63edf0d2eeda3f4b585987939bf9d8764f881f97a9ba5ffd\" returns successfully" Jan 13 23:48:34.965000 audit[3117]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3117 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:34.965000 audit[3117]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc5ca2570 a2=0 a3=1 items=0 ppid=3065 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.965000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 13 23:48:34.965000 audit[3118]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:34.965000 audit[3118]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe3b2e10 a2=0 a3=1 items=0 ppid=3065 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.965000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 13 23:48:34.966000 audit[3119]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:34.966000 audit[3119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffff7fa9f0 a2=0 a3=1 items=0 ppid=3065 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.966000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 13 23:48:34.968000 audit[3120]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3120 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 
23:48:34.968000 audit[3120]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeef67330 a2=0 a3=1 items=0 ppid=3065 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.968000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 13 23:48:34.968000 audit[3121]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:34.968000 audit[3121]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa3321c0 a2=0 a3=1 items=0 ppid=3065 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.968000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 13 23:48:34.969000 audit[3122]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3122 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:34.969000 audit[3122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff9f43100 a2=0 a3=1 items=0 ppid=3065 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:34.969000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 13 23:48:35.069000 audit[3123]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.069000 audit[3123]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffffe76bb80 a2=0 a3=1 items=0 ppid=3065 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 13 23:48:35.071000 audit[3125]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3125 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.071000 audit[3125]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc78b0600 a2=0 a3=1 items=0 ppid=3065 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.071000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 13 23:48:35.075000 audit[3128]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.075000 audit[3128]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=752 a0=3 a1=ffffd1b7f8a0 a2=0 a3=1 items=0 ppid=3065 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.075000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 13 23:48:35.076000 audit[3129]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.076000 audit[3129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdde66970 a2=0 a3=1 items=0 ppid=3065 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.076000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 13 23:48:35.079000 audit[3131]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.079000 audit[3131]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe8af6920 a2=0 a3=1 items=0 ppid=3065 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.079000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 13 23:48:35.080000 audit[3132]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.080000 audit[3132]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe135ab0 a2=0 a3=1 items=0 ppid=3065 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.080000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 13 23:48:35.083000 audit[3134]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.083000 audit[3134]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcc359510 a2=0 a3=1 items=0 ppid=3065 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.083000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 13 23:48:35.087000 
audit[3137]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.087000 audit[3137]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd00cc5b0 a2=0 a3=1 items=0 ppid=3065 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.087000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 13 23:48:35.088000 audit[3138]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.088000 audit[3138]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6fcb710 a2=0 a3=1 items=0 ppid=3065 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.088000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 13 23:48:35.090000 audit[3140]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.090000 audit[3140]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd11454f0 a2=0 a3=1 items=0 ppid=3065 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.090000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 13 23:48:35.091000 audit[3141]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3141 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.091000 audit[3141]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe604b680 a2=0 a3=1 items=0 ppid=3065 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.091000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 13 23:48:35.094000 audit[3143]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.094000 audit[3143]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff32e0850 a2=0 a3=1 items=0 ppid=3065 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.094000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 13 23:48:35.097000 audit[3146]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.097000 audit[3146]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffcd9aa30 a2=0 a3=1 items=0 ppid=3065 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.097000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 13 23:48:35.101000 audit[3149]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.101000 audit[3149]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffaeac260 a2=0 a3=1 items=0 ppid=3065 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.101000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 13 23:48:35.103000 audit[3150]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3150 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.103000 audit[3150]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe13140f0 a2=0 a3=1 items=0 ppid=3065 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.103000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 13 23:48:35.105000 audit[3152]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.105000 audit[3152]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffc57dcd30 a2=0 a3=1 items=0 ppid=3065 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.105000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:48:35.108000 audit[3155]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.108000 audit[3155]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=528 a0=3 a1=ffffce7bab60 a2=0 a3=1 items=0 ppid=3065 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.108000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:48:35.110000 audit[3156]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.110000 audit[3156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdb57fe90 a2=0 a3=1 items=0 ppid=3065 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.110000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 13 23:48:35.112000 audit[3158]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 13 23:48:35.112000 audit[3158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffffbf3b110 a2=0 a3=1 items=0 ppid=3065 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.112000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 13 23:48:35.133000 audit[3164]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:35.133000 audit[3164]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe45933f0 a2=0 a3=1 items=0 ppid=3065 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.133000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:35.142000 audit[3164]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:35.142000 audit[3164]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffe45933f0 a2=0 a3=1 items=0 ppid=3065 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.142000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:35.144000 audit[3169]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3169 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.144000 audit[3169]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe057b3e0 a2=0 a3=1 items=0 ppid=3065 pid=3169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.144000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 13 23:48:35.146000 audit[3171]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3171 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.146000 audit[3171]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe4c23860 a2=0 a3=1 items=0 ppid=3065 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.146000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 13 23:48:35.150000 audit[3174]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3174 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.150000 audit[3174]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc435ca20 a2=0 a3=1 items=0 ppid=3065 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.150000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 13 23:48:35.151000 audit[3175]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.151000 audit[3175]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffac461d0 a2=0 a3=1 items=0 ppid=3065 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.151000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 13 23:48:35.154000 audit[3177]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3177 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.154000 audit[3177]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff17c3d30 a2=0 a3=1 items=0 ppid=3065 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.154000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 13 23:48:35.155000 audit[3178]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3178 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.155000 audit[3178]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffefe5c470 a2=0 a3=1 items=0 ppid=3065 pid=3178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.155000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 13 23:48:35.158000 audit[3180]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.158000 audit[3180]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdb069440 a2=0 a3=1 items=0 ppid=3065 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.158000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 13 23:48:35.161000 audit[3183]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.161000 audit[3183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffff6f91b30 a2=0 a3=1 items=0 ppid=3065 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.161000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 13 23:48:35.162000 audit[3184]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.162000 audit[3184]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5527b10 a2=0 a3=1 items=0 ppid=3065 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.162000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 13 23:48:35.165000 audit[3186]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.165000 audit[3186]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc4b07820 a2=0 a3=1 items=0 ppid=3065 pid=3186 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.165000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 13 23:48:35.166000 audit[3187]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3187 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.166000 audit[3187]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe04cb160 a2=0 a3=1 items=0 ppid=3065 pid=3187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.166000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 13 23:48:35.168000 audit[3189]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3189 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.168000 audit[3189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd5a82930 a2=0 a3=1 items=0 ppid=3065 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.168000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 13 23:48:35.172000 audit[3192]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.172000 audit[3192]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe6396440 a2=0 a3=1 items=0 ppid=3065 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.172000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 13 23:48:35.176000 audit[3195]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.176000 audit[3195]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcd62c9c0 a2=0 a3=1 items=0 ppid=3065 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.176000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 13 23:48:35.177000 audit[3196]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3196 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.177000 audit[3196]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff8750670 a2=0 a3=1 items=0 ppid=3065 pid=3196 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.177000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 13 23:48:35.179000 audit[3198]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3198 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.179000 audit[3198]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffc246c890 a2=0 a3=1 items=0 ppid=3065 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.179000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:48:35.183000 audit[3201]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.183000 audit[3201]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc56c77e0 a2=0 a3=1 items=0 ppid=3065 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.183000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 13 23:48:35.184000 audit[3202]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3202 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.184000 audit[3202]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb440e60 a2=0 a3=1 items=0 ppid=3065 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.184000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 13 23:48:35.187000 audit[3204]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3204 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.187000 audit[3204]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffffa2b5fa0 a2=0 a3=1 items=0 ppid=3065 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.187000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 13 23:48:35.188000 audit[3205]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3205 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.188000 audit[3205]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe56d8060 a2=0 a3=1 items=0 ppid=3065 pid=3205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.188000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 13 23:48:35.190000 audit[3207]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.190000 audit[3207]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffee8baaa0 a2=0 a3=1 items=0 ppid=3065 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.190000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 13 23:48:35.193000 audit[3210]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3210 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 13 23:48:35.193000 audit[3210]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffb98d380 a2=0 a3=1 items=0 ppid=3065 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.193000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 13 23:48:35.196000 audit[3212]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 13 23:48:35.196000 audit[3212]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffea0d3230 a2=0 a3=1 items=0 ppid=3065 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.196000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:35.197000 audit[3212]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3212 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 13 23:48:35.197000 audit[3212]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffea0d3230 a2=0 a3=1 items=0 ppid=3065 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:35.197000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:36.214927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount895447185.mount: Deactivated successfully. Jan 13 23:48:36.505472 containerd[1676]: time="2026-01-13T23:48:36.505419271Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:36.506576 containerd[1676]: time="2026-01-13T23:48:36.506356715Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=0" Jan 13 23:48:36.507410 containerd[1676]: time="2026-01-13T23:48:36.507371040Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:36.509643 containerd[1676]: time="2026-01-13T23:48:36.509598571Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:36.510381 containerd[1676]: time="2026-01-13T23:48:36.510351455Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.020071621s" Jan 13 23:48:36.510464 containerd[1676]: time="2026-01-13T23:48:36.510446375Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 13 23:48:36.512671 containerd[1676]: time="2026-01-13T23:48:36.512623986Z" level=info msg="CreateContainer within sandbox \"ac861483433fed9089498c1fa1cba19b090a11e48dd0d180be89fcd61beb8be4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 13 23:48:36.523414 containerd[1676]: time="2026-01-13T23:48:36.523362558Z" level=info msg="Container faf600fc1d0b1978a2b00ba3e88f9536c7c44d1ac8056cc22b07190dcef4cbac: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:48:36.523763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount861333224.mount: Deactivated successfully. 
Jan 13 23:48:36.529670 containerd[1676]: time="2026-01-13T23:48:36.529612428Z" level=info msg="CreateContainer within sandbox \"ac861483433fed9089498c1fa1cba19b090a11e48dd0d180be89fcd61beb8be4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"faf600fc1d0b1978a2b00ba3e88f9536c7c44d1ac8056cc22b07190dcef4cbac\"" Jan 13 23:48:36.530511 containerd[1676]: time="2026-01-13T23:48:36.530459152Z" level=info msg="StartContainer for \"faf600fc1d0b1978a2b00ba3e88f9536c7c44d1ac8056cc22b07190dcef4cbac\"" Jan 13 23:48:36.532983 containerd[1676]: time="2026-01-13T23:48:36.531936599Z" level=info msg="connecting to shim faf600fc1d0b1978a2b00ba3e88f9536c7c44d1ac8056cc22b07190dcef4cbac" address="unix:///run/containerd/s/9c6468b24e89e1d26cb51786e3f4eea1a7897a900d52ae0123ad134344d813dc" protocol=ttrpc version=3 Jan 13 23:48:36.551268 systemd[1]: Started cri-containerd-faf600fc1d0b1978a2b00ba3e88f9536c7c44d1ac8056cc22b07190dcef4cbac.scope - libcontainer container faf600fc1d0b1978a2b00ba3e88f9536c7c44d1ac8056cc22b07190dcef4cbac. Jan 13 23:48:36.559000 audit: BPF prog-id=146 op=LOAD Jan 13 23:48:36.560000 audit: BPF prog-id=147 op=LOAD Jan 13 23:48:36.560000 audit[3221]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2970 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:36.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661663630306663316430623139373861326230306261336538386639 Jan 13 23:48:36.560000 audit: BPF prog-id=147 op=UNLOAD Jan 13 23:48:36.560000 audit[3221]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2970 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:36.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661663630306663316430623139373861326230306261336538386639 Jan 13 23:48:36.560000 audit: BPF prog-id=148 op=LOAD Jan 13 23:48:36.560000 audit[3221]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2970 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:36.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661663630306663316430623139373861326230306261336538386639 Jan 13 23:48:36.560000 audit: BPF prog-id=149 op=LOAD Jan 13 23:48:36.560000 audit[3221]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2970 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:36.560000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661663630306663316430623139373861326230306261336538386639 Jan 13 23:48:36.560000 audit: BPF prog-id=149 op=UNLOAD Jan 13 23:48:36.560000 audit[3221]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2970 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:36.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661663630306663316430623139373861326230306261336538386639 Jan 13 23:48:36.560000 audit: BPF prog-id=148 op=UNLOAD Jan 13 23:48:36.560000 audit[3221]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2970 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:36.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661663630306663316430623139373861326230306261336538386639 Jan 13 23:48:36.560000 audit: BPF prog-id=150 op=LOAD Jan 13 23:48:36.560000 audit[3221]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2970 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:36.560000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661663630306663316430623139373861326230306261336538386639 Jan 13 23:48:36.576730 containerd[1676]: time="2026-01-13T23:48:36.576691096Z" level=info msg="StartContainer for \"faf600fc1d0b1978a2b00ba3e88f9536c7c44d1ac8056cc22b07190dcef4cbac\" returns successfully" Jan 13 23:48:37.303643 kubelet[2897]: I0113 23:48:37.303554 2897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jmnp6" podStartSLOduration=4.303535495 podStartE2EDuration="4.303535495s" podCreationTimestamp="2026-01-13 23:48:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:48:35.297592903 +0000 UTC m=+8.125935155" watchObservedRunningTime="2026-01-13 23:48:37.303535495 +0000 UTC m=+10.131877707" Jan 13 23:48:40.509137 kubelet[2897]: I0113 23:48:40.508440 2897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-8b2lc" podStartSLOduration=4.48672615 podStartE2EDuration="6.508420859s" podCreationTimestamp="2026-01-13 23:48:34 +0000 UTC" firstStartedPulling="2026-01-13 23:48:34.489659471 +0000 UTC m=+7.318001683" lastFinishedPulling="2026-01-13 23:48:36.51135418 +0000 UTC m=+9.339696392" 
observedRunningTime="2026-01-13 23:48:37.304110618 +0000 UTC m=+10.132452830" watchObservedRunningTime="2026-01-13 23:48:40.508420859 +0000 UTC m=+13.336763071" Jan 13 23:48:41.616990 sudo[1955]: pam_unix(sudo:session): session closed for user root Jan 13 23:48:41.620073 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 13 23:48:41.620114 kernel: audit: type=1106 audit(1768348121.616:521): pid=1955 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:48:41.616000 audit[1955]: USER_END pid=1955 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:48:41.616000 audit[1955]: CRED_DISP pid=1955 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:48:41.622529 kernel: audit: type=1104 audit(1768348121.616:522): pid=1955 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 13 23:48:41.713972 sshd[1954]: Connection closed by 4.153.228.146 port 42730 Jan 13 23:48:41.714310 sshd-session[1950]: pam_unix(sshd:session): session closed for user core Jan 13 23:48:41.716000 audit[1950]: USER_END pid=1950 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:48:41.719984 systemd[1]: sshd@8-10.0.15.225:22-4.153.228.146:42730.service: Deactivated successfully. Jan 13 23:48:41.716000 audit[1950]: CRED_DISP pid=1950 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:48:41.725296 kernel: audit: type=1106 audit(1768348121.716:523): pid=1950 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:48:41.725366 kernel: audit: type=1104 audit(1768348121.716:524): pid=1950 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:48:41.723406 systemd[1]: session-10.scope: Deactivated successfully. Jan 13 23:48:41.723683 systemd[1]: session-10.scope: Consumed 6.207s CPU time, 221.3M memory peak. Jan 13 23:48:41.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.15.225:22-4.153.228.146:42730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:48:41.728341 kernel: audit: type=1131 audit(1768348121.720:525): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.15.225:22-4.153.228.146:42730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:48:41.726843 systemd-logind[1653]: Session 10 logged out. Waiting for processes to exit. Jan 13 23:48:41.727867 systemd-logind[1653]: Removed session 10. Jan 13 23:48:43.229000 audit[3313]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:43.229000 audit[3313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc82bc0b0 a2=0 a3=1 items=0 ppid=3065 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:43.237404 kernel: audit: type=1325 audit(1768348123.229:526): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:43.237471 kernel: audit: type=1300 audit(1768348123.229:526): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc82bc0b0 a2=0 a3=1 items=0 ppid=3065 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:43.229000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:43.239451 kernel: audit: type=1327 audit(1768348123.229:526): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:43.239000 audit[3313]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:43.242444 kernel: audit: type=1325 audit(1768348123.239:527): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3313 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:43.239000 audit[3313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc82bc0b0 a2=0 a3=1 items=0 ppid=3065 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:43.245962 kernel: audit: type=1300 audit(1768348123.239:527): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc82bc0b0 a2=0 a3=1 items=0 ppid=3065 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:43.239000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:43.251000 audit[3315]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3315 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:43.251000 audit[3315]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe5eebe60 a2=0 a3=1 items=0 ppid=3065 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:43.251000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:43.255000 audit[3315]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3315 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:43.255000 audit[3315]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe5eebe60 a2=0 a3=1 items=0 ppid=3065 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:43.255000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:46.831000 audit[3317]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:46.835087 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 13 23:48:46.835181 kernel: audit: type=1325 audit(1768348126.831:530): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:46.831000 audit[3317]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe81e7510 a2=0 a3=1 items=0 ppid=3065 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:46.839803 kernel: audit: type=1300 audit(1768348126.831:530): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe81e7510 a2=0 a3=1 items=0 ppid=3065 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:46.839877 kernel: audit: type=1327 audit(1768348126.831:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:46.831000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:46.842000 audit[3317]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:46.842000 audit[3317]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe81e7510 a2=0 a3=1 items=0 ppid=3065 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:46.849047 kernel: audit: type=1325 audit(1768348126.842:531): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:46.849126 kernel: audit: type=1300 audit(1768348126.842:531): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe81e7510 a2=0 a3=1 items=0 ppid=3065 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:46.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:46.852027 kernel: audit: type=1327 audit(1768348126.842:531): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:46.870000 audit[3319]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3319 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:46.870000 audit[3319]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe50244f0 a2=0 a3=1 items=0 ppid=3065 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:46.877080 kernel: audit: type=1325 audit(1768348126.870:532): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3319 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:46.877351 kernel: audit: type=1300 audit(1768348126.870:532): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe50244f0 a2=0 a3=1 items=0 ppid=3065 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:46.877380 kernel: audit: type=1327 audit(1768348126.870:532): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:46.870000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:46.879000 audit[3319]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3319 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:46.879000 audit[3319]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe50244f0 a2=0 a3=1 items=0 ppid=3065 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:46.879000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:46.883993 kernel: audit: type=1325 audit(1768348126.879:533): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3319 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:47.892000 audit[3322]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3322 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:47.892000 audit[3322]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff92b7190 a2=0 a3=1 items=0 ppid=3065 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:47.892000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:47.907000 audit[3322]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3322 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:47.907000 audit[3322]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff92b7190 a2=0 a3=1 items=0 ppid=3065 pid=3322 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:47.907000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:49.492000 audit[3324]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:49.492000 audit[3324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd2c11010 a2=0 a3=1 items=0 ppid=3065 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:49.492000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:49.502000 audit[3324]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:49.502000 audit[3324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd2c11010 a2=0 a3=1 items=0 ppid=3065 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:49.502000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:49.522726 systemd[1]: Created slice kubepods-besteffort-pod44273f8f_6e3f_4c1b_953b_5964118a9d30.slice - libcontainer container kubepods-besteffort-pod44273f8f_6e3f_4c1b_953b_5964118a9d30.slice. 
Jan 13 23:48:49.575384 kubelet[2897]: I0113 23:48:49.575331 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44273f8f-6e3f-4c1b-953b-5964118a9d30-tigera-ca-bundle\") pod \"calico-typha-6b6c97f4cc-vj9rv\" (UID: \"44273f8f-6e3f-4c1b-953b-5964118a9d30\") " pod="calico-system/calico-typha-6b6c97f4cc-vj9rv" Jan 13 23:48:49.575835 kubelet[2897]: I0113 23:48:49.575489 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwlxp\" (UniqueName: \"kubernetes.io/projected/44273f8f-6e3f-4c1b-953b-5964118a9d30-kube-api-access-wwlxp\") pod \"calico-typha-6b6c97f4cc-vj9rv\" (UID: \"44273f8f-6e3f-4c1b-953b-5964118a9d30\") " pod="calico-system/calico-typha-6b6c97f4cc-vj9rv" Jan 13 23:48:49.575835 kubelet[2897]: I0113 23:48:49.575516 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/44273f8f-6e3f-4c1b-953b-5964118a9d30-typha-certs\") pod \"calico-typha-6b6c97f4cc-vj9rv\" (UID: \"44273f8f-6e3f-4c1b-953b-5964118a9d30\") " pod="calico-system/calico-typha-6b6c97f4cc-vj9rv" Jan 13 23:48:49.718631 systemd[1]: Created slice kubepods-besteffort-pod508bd326_ca54_4936_834a_a9729deed647.slice - libcontainer container kubepods-besteffort-pod508bd326_ca54_4936_834a_a9729deed647.slice. Jan 13 23:48:49.776879 kubelet[2897]: I0113 23:48:49.776647 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/508bd326-ca54-4936-834a-a9729deed647-var-run-calico\") pod \"calico-node-7wwq4\" (UID: \"508bd326-ca54-4936-834a-a9729deed647\") " pod="calico-system/calico-node-7wwq4" Jan 13 23:48:49.776879 kubelet[2897]: I0113 23:48:49.776706 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/508bd326-ca54-4936-834a-a9729deed647-node-certs\") pod \"calico-node-7wwq4\" (UID: \"508bd326-ca54-4936-834a-a9729deed647\") " pod="calico-system/calico-node-7wwq4" Jan 13 23:48:49.776879 kubelet[2897]: I0113 23:48:49.776724 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/508bd326-ca54-4936-834a-a9729deed647-tigera-ca-bundle\") pod \"calico-node-7wwq4\" (UID: \"508bd326-ca54-4936-834a-a9729deed647\") " pod="calico-system/calico-node-7wwq4" Jan 13 23:48:49.776879 kubelet[2897]: I0113 23:48:49.776741 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/508bd326-ca54-4936-834a-a9729deed647-cni-log-dir\") pod \"calico-node-7wwq4\" (UID: \"508bd326-ca54-4936-834a-a9729deed647\") " pod="calico-system/calico-node-7wwq4" Jan 13 23:48:49.776879 kubelet[2897]: I0113 23:48:49.776757 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/508bd326-ca54-4936-834a-a9729deed647-lib-modules\") pod \"calico-node-7wwq4\" (UID: \"508bd326-ca54-4936-834a-a9729deed647\") " pod="calico-system/calico-node-7wwq4" Jan 13 23:48:49.777297 kubelet[2897]: I0113 23:48:49.776774 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/508bd326-ca54-4936-834a-a9729deed647-xtables-lock\") pod \"calico-node-7wwq4\" (UID: \"508bd326-ca54-4936-834a-a9729deed647\") " pod="calico-system/calico-node-7wwq4" Jan 13 23:48:49.777297 kubelet[2897]: I0113 23:48:49.776791 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/508bd326-ca54-4936-834a-a9729deed647-policysync\") pod \"calico-node-7wwq4\" (UID: \"508bd326-ca54-4936-834a-a9729deed647\") " pod="calico-system/calico-node-7wwq4" Jan 13 23:48:49.777297 kubelet[2897]: I0113 23:48:49.776806 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/508bd326-ca54-4936-834a-a9729deed647-var-lib-calico\") pod \"calico-node-7wwq4\" (UID: \"508bd326-ca54-4936-834a-a9729deed647\") " pod="calico-system/calico-node-7wwq4" Jan 13 23:48:49.777297 kubelet[2897]: I0113 23:48:49.776822 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/508bd326-ca54-4936-834a-a9729deed647-flexvol-driver-host\") pod \"calico-node-7wwq4\" (UID: \"508bd326-ca54-4936-834a-a9729deed647\") " pod="calico-system/calico-node-7wwq4" Jan 13 23:48:49.777297 kubelet[2897]: I0113 23:48:49.776841 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbjlm\" (UniqueName: \"kubernetes.io/projected/508bd326-ca54-4936-834a-a9729deed647-kube-api-access-bbjlm\") pod \"calico-node-7wwq4\" (UID: \"508bd326-ca54-4936-834a-a9729deed647\") " pod="calico-system/calico-node-7wwq4" Jan 13 23:48:49.777409 kubelet[2897]: I0113 23:48:49.776856 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/508bd326-ca54-4936-834a-a9729deed647-cni-bin-dir\") pod \"calico-node-7wwq4\" (UID: \"508bd326-ca54-4936-834a-a9729deed647\") " pod="calico-system/calico-node-7wwq4" Jan 13 23:48:49.777409 kubelet[2897]: I0113 23:48:49.776870 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/508bd326-ca54-4936-834a-a9729deed647-cni-net-dir\") pod \"calico-node-7wwq4\" (UID: \"508bd326-ca54-4936-834a-a9729deed647\") " pod="calico-system/calico-node-7wwq4" Jan 13 23:48:49.827363 containerd[1676]: time="2026-01-13T23:48:49.827315917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b6c97f4cc-vj9rv,Uid:44273f8f-6e3f-4c1b-953b-5964118a9d30,Namespace:calico-system,Attempt:0,}" Jan 13 23:48:49.855167 containerd[1676]: time="2026-01-13T23:48:49.855088416Z" level=info msg="connecting to shim e610e0867af4789de739dbf16044de842e91e6ddf8318e74784c9d353f15361a" address="unix:///run/containerd/s/13d8a80e52c1779ac4ce3aba1a121230dddd3bfb19081883be9a29934d371a03" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:48:49.879842 kubelet[2897]: E0113 23:48:49.879808 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.879842 kubelet[2897]: W0113 23:48:49.879834 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 
23:48:49.880007 kubelet[2897]: E0113 23:48:49.879869 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.880581 kubelet[2897]: E0113 23:48:49.880559 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.880581 kubelet[2897]: W0113 23:48:49.880580 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.880693 kubelet[2897]: E0113 23:48:49.880598 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.880649 systemd[1]: Started cri-containerd-e610e0867af4789de739dbf16044de842e91e6ddf8318e74784c9d353f15361a.scope - libcontainer container e610e0867af4789de739dbf16044de842e91e6ddf8318e74784c9d353f15361a. Jan 13 23:48:49.881556 kubelet[2897]: E0113 23:48:49.881445 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.881556 kubelet[2897]: W0113 23:48:49.881470 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.881678 kubelet[2897]: E0113 23:48:49.881517 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.882039 kubelet[2897]: E0113 23:48:49.881997 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.882039 kubelet[2897]: W0113 23:48:49.882018 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.882122 kubelet[2897]: E0113 23:48:49.882074 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.882822 kubelet[2897]: E0113 23:48:49.882795 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.882822 kubelet[2897]: W0113 23:48:49.882816 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.882911 kubelet[2897]: E0113 23:48:49.882830 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:49.887172 kubelet[2897]: E0113 23:48:49.887137 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.887172 kubelet[2897]: W0113 23:48:49.887165 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.887282 kubelet[2897]: E0113 23:48:49.887185 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.893324 kubelet[2897]: E0113 23:48:49.893300 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.893324 kubelet[2897]: W0113 23:48:49.893320 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.893451 kubelet[2897]: E0113 23:48:49.893337 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.900000 audit: BPF prog-id=151 op=LOAD Jan 13 23:48:49.900000 audit: BPF prog-id=152 op=LOAD Jan 13 23:48:49.900000 audit[3349]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3336 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:49.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536313065303836376166343738396465373339646266313630343464 Jan 13 23:48:49.900000 audit: BPF prog-id=152 op=UNLOAD Jan 13 23:48:49.900000 audit[3349]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:49.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536313065303836376166343738396465373339646266313630343464 Jan 13 23:48:49.900000 audit: BPF prog-id=153 op=LOAD Jan 13 23:48:49.900000 audit[3349]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3336 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:49.900000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536313065303836376166343738396465373339646266313630343464 Jan 13 23:48:49.901000 audit: BPF prog-id=154 op=LOAD Jan 13 
23:48:49.901000 audit[3349]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3336 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:49.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536313065303836376166343738396465373339646266313630343464 Jan 13 23:48:49.901000 audit: BPF prog-id=154 op=UNLOAD Jan 13 23:48:49.901000 audit[3349]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:49.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536313065303836376166343738396465373339646266313630343464 Jan 13 23:48:49.901000 audit: BPF prog-id=153 op=UNLOAD Jan 13 23:48:49.901000 audit[3349]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:49.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536313065303836376166343738396465373339646266313630343464 Jan 13 23:48:49.901000 audit: BPF prog-id=155 op=LOAD Jan 13 23:48:49.901000 audit[3349]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3336 pid=3349 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:49.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536313065303836376166343738396465373339646266313630343464 Jan 13 23:48:49.908210 kubelet[2897]: E0113 23:48:49.908045 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:48:49.937331 containerd[1676]: time="2026-01-13T23:48:49.937282467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b6c97f4cc-vj9rv,Uid:44273f8f-6e3f-4c1b-953b-5964118a9d30,Namespace:calico-system,Attempt:0,} returns sandbox id \"e610e0867af4789de739dbf16044de842e91e6ddf8318e74784c9d353f15361a\"" Jan 13 23:48:49.938932 containerd[1676]: time="2026-01-13T23:48:49.938892755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 13 23:48:49.962946 
kubelet[2897]: E0113 23:48:49.962857 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.962946 kubelet[2897]: W0113 23:48:49.962882 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.962946 kubelet[2897]: E0113 23:48:49.962901 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.963379 kubelet[2897]: E0113 23:48:49.963279 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.963379 kubelet[2897]: W0113 23:48:49.963292 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.963379 kubelet[2897]: E0113 23:48:49.963338 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.963671 kubelet[2897]: E0113 23:48:49.963611 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.963671 kubelet[2897]: W0113 23:48:49.963623 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.963671 kubelet[2897]: E0113 23:48:49.963634 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.963979 kubelet[2897]: E0113 23:48:49.963893 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.963979 kubelet[2897]: W0113 23:48:49.963904 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.963979 kubelet[2897]: E0113 23:48:49.963914 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.964231 kubelet[2897]: E0113 23:48:49.964219 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.964289 kubelet[2897]: W0113 23:48:49.964279 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.964335 kubelet[2897]: E0113 23:48:49.964326 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:49.964575 kubelet[2897]: E0113 23:48:49.964522 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.964575 kubelet[2897]: W0113 23:48:49.964536 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.964575 kubelet[2897]: E0113 23:48:49.964545 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.964856 kubelet[2897]: E0113 23:48:49.964800 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.964856 kubelet[2897]: W0113 23:48:49.964812 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.964856 kubelet[2897]: E0113 23:48:49.964821 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.965131 kubelet[2897]: E0113 23:48:49.965118 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.965193 kubelet[2897]: W0113 23:48:49.965183 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.965279 kubelet[2897]: E0113 23:48:49.965230 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.965484 kubelet[2897]: E0113 23:48:49.965472 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.965599 kubelet[2897]: W0113 23:48:49.965542 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.965599 kubelet[2897]: E0113 23:48:49.965557 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.965783 kubelet[2897]: E0113 23:48:49.965768 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.965783 kubelet[2897]: W0113 23:48:49.965781 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.965783 kubelet[2897]: E0113 23:48:49.965791 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:49.965969 kubelet[2897]: E0113 23:48:49.965944 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.965969 kubelet[2897]: W0113 23:48:49.965967 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.966028 kubelet[2897]: E0113 23:48:49.965977 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.966134 kubelet[2897]: E0113 23:48:49.966108 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.966134 kubelet[2897]: W0113 23:48:49.966121 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.966134 kubelet[2897]: E0113 23:48:49.966128 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.966308 kubelet[2897]: E0113 23:48:49.966268 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.966308 kubelet[2897]: W0113 23:48:49.966275 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.966308 kubelet[2897]: E0113 23:48:49.966283 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.966911 kubelet[2897]: E0113 23:48:49.966437 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.966911 kubelet[2897]: W0113 23:48:49.966444 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.966911 kubelet[2897]: E0113 23:48:49.966451 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.966911 kubelet[2897]: E0113 23:48:49.966582 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.966911 kubelet[2897]: W0113 23:48:49.966589 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.966911 kubelet[2897]: E0113 23:48:49.966596 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:49.966911 kubelet[2897]: E0113 23:48:49.966736 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.966911 kubelet[2897]: W0113 23:48:49.966743 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.966911 kubelet[2897]: E0113 23:48:49.966750 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.966911 kubelet[2897]: E0113 23:48:49.966912 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.967154 kubelet[2897]: W0113 23:48:49.966922 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.967154 kubelet[2897]: E0113 23:48:49.966929 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.967154 kubelet[2897]: E0113 23:48:49.967086 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.967154 kubelet[2897]: W0113 23:48:49.967094 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.967154 kubelet[2897]: E0113 23:48:49.967101 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.967248 kubelet[2897]: E0113 23:48:49.967230 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.967248 kubelet[2897]: W0113 23:48:49.967237 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.967248 kubelet[2897]: E0113 23:48:49.967244 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.967839 kubelet[2897]: E0113 23:48:49.967367 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.967839 kubelet[2897]: W0113 23:48:49.967373 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.967839 kubelet[2897]: E0113 23:48:49.967380 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:49.978825 kubelet[2897]: E0113 23:48:49.978802 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.978825 kubelet[2897]: W0113 23:48:49.978820 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.978825 kubelet[2897]: E0113 23:48:49.978835 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.979006 kubelet[2897]: I0113 23:48:49.978861 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5-registration-dir\") pod \"csi-node-driver-8bgtx\" (UID: \"9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5\") " pod="calico-system/csi-node-driver-8bgtx" Jan 13 23:48:49.979035 kubelet[2897]: E0113 23:48:49.979017 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.979035 kubelet[2897]: W0113 23:48:49.979028 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.979087 kubelet[2897]: E0113 23:48:49.979044 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.979087 kubelet[2897]: I0113 23:48:49.979060 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5-socket-dir\") pod \"csi-node-driver-8bgtx\" (UID: \"9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5\") " pod="calico-system/csi-node-driver-8bgtx" Jan 13 23:48:49.979322 kubelet[2897]: E0113 23:48:49.979273 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.979360 kubelet[2897]: W0113 23:48:49.979316 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.979360 kubelet[2897]: E0113 23:48:49.979356 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.979558 kubelet[2897]: E0113 23:48:49.979547 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.979558 kubelet[2897]: W0113 23:48:49.979558 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.979637 kubelet[2897]: E0113 23:48:49.979571 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:49.979726 kubelet[2897]: E0113 23:48:49.979714 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.979862 kubelet[2897]: W0113 23:48:49.979733 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.979862 kubelet[2897]: E0113 23:48:49.979748 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.979862 kubelet[2897]: I0113 23:48:49.979768 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brggj\" (UniqueName: \"kubernetes.io/projected/9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5-kube-api-access-brggj\") pod \"csi-node-driver-8bgtx\" (UID: \"9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5\") " pod="calico-system/csi-node-driver-8bgtx" Jan 13 23:48:49.980399 kubelet[2897]: E0113 23:48:49.980370 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.980506 kubelet[2897]: W0113 23:48:49.980491 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.980571 kubelet[2897]: E0113 23:48:49.980558 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.980821 kubelet[2897]: E0113 23:48:49.980808 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.980884 kubelet[2897]: W0113 23:48:49.980873 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.980999 kubelet[2897]: E0113 23:48:49.980985 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.981265 kubelet[2897]: E0113 23:48:49.981245 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.981265 kubelet[2897]: W0113 23:48:49.981259 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.981378 kubelet[2897]: E0113 23:48:49.981292 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:49.981558 kubelet[2897]: E0113 23:48:49.981462 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.981558 kubelet[2897]: W0113 23:48:49.981470 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.981558 kubelet[2897]: E0113 23:48:49.981499 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.981685 kubelet[2897]: E0113 23:48:49.981621 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.981685 kubelet[2897]: W0113 23:48:49.981630 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.981685 kubelet[2897]: E0113 23:48:49.981640 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.981685 kubelet[2897]: I0113 23:48:49.981665 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5-kubelet-dir\") pod \"csi-node-driver-8bgtx\" (UID: \"9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5\") " pod="calico-system/csi-node-driver-8bgtx" Jan 13 23:48:49.981885 kubelet[2897]: E0113 23:48:49.981869 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.981885 kubelet[2897]: W0113 23:48:49.981883 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.981953 kubelet[2897]: E0113 23:48:49.981898 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.981953 kubelet[2897]: I0113 23:48:49.981923 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5-varrun\") pod \"csi-node-driver-8bgtx\" (UID: \"9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5\") " pod="calico-system/csi-node-driver-8bgtx" Jan 13 23:48:49.982113 kubelet[2897]: E0113 23:48:49.982099 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.982162 kubelet[2897]: W0113 23:48:49.982113 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.982162 kubelet[2897]: E0113 23:48:49.982133 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:49.982281 kubelet[2897]: E0113 23:48:49.982270 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.982281 kubelet[2897]: W0113 23:48:49.982280 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.982281 kubelet[2897]: E0113 23:48:49.982293 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.982454 kubelet[2897]: E0113 23:48:49.982441 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.982486 kubelet[2897]: W0113 23:48:49.982452 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.982486 kubelet[2897]: E0113 23:48:49.982471 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:49.982644 kubelet[2897]: E0113 23:48:49.982633 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:49.982644 kubelet[2897]: W0113 23:48:49.982643 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:49.982743 kubelet[2897]: E0113 23:48:49.982650 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.021903 containerd[1676]: time="2026-01-13T23:48:50.021865209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7wwq4,Uid:508bd326-ca54-4936-834a-a9729deed647,Namespace:calico-system,Attempt:0,}" Jan 13 23:48:50.046845 containerd[1676]: time="2026-01-13T23:48:50.046730974Z" level=info msg="connecting to shim a2c9d48b93feb545b4d2f5260e80411b13d2c7d73aad7f7397d228c0feb5a95f" address="unix:///run/containerd/s/30133a9a7c87b1636240b25ccb840d163e6da575b081d6cbba81626518aecab8" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:48:50.069688 systemd[1]: Started cri-containerd-a2c9d48b93feb545b4d2f5260e80411b13d2c7d73aad7f7397d228c0feb5a95f.scope - libcontainer container a2c9d48b93feb545b4d2f5260e80411b13d2c7d73aad7f7397d228c0feb5a95f. Jan 13 23:48:50.082000 audit: BPF prog-id=156 op=LOAD Jan 13 23:48:50.083334 kubelet[2897]: E0113 23:48:50.083311 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.083334 kubelet[2897]: W0113 23:48:50.083328 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.083418 kubelet[2897]: E0113 23:48:50.083348 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:50.083602 kubelet[2897]: E0113 23:48:50.083548 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.083602 kubelet[2897]: W0113 23:48:50.083559 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.083602 kubelet[2897]: E0113 23:48:50.083573 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.083865 kubelet[2897]: E0113 23:48:50.083845 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.083865 kubelet[2897]: W0113 23:48:50.083864 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.083929 kubelet[2897]: E0113 23:48:50.083883 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.083000 audit: BPF prog-id=157 op=LOAD Jan 13 23:48:50.083000 audit[3448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=3437 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:50.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132633964343862393366656235343562346432663532363065383034 Jan 13 23:48:50.083000 audit: BPF prog-id=157 op=UNLOAD Jan 13 23:48:50.083000 audit[3448]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3437 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:50.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132633964343862393366656235343562346432663532363065383034 Jan 13 23:48:50.084372 kubelet[2897]: E0113 23:48:50.084239 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.084372 kubelet[2897]: W0113 23:48:50.084250 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.084372 kubelet[2897]: E0113 23:48:50.084268 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:50.084437 kubelet[2897]: E0113 23:48:50.084426 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.084462 kubelet[2897]: W0113 23:48:50.084438 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.084462 kubelet[2897]: E0113 23:48:50.084456 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.083000 audit: BPF prog-id=158 op=LOAD Jan 13 23:48:50.083000 audit[3448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3437 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:50.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132633964343862393366656235343562346432663532363065383034 Jan 13 23:48:50.083000 audit: BPF prog-id=159 op=LOAD Jan 13 23:48:50.083000 audit[3448]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3437 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:50.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132633964343862393366656235343562346432663532363065383034 Jan 13 23:48:50.083000 audit: BPF prog-id=159 op=UNLOAD Jan 13 23:48:50.083000 audit[3448]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3437 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:50.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132633964343862393366656235343562346432663532363065383034 Jan 13 23:48:50.083000 audit: BPF prog-id=158 op=UNLOAD Jan 13 23:48:50.083000 audit[3448]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3437 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:50.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132633964343862393366656235343562346432663532363065383034 Jan 13 23:48:50.083000 audit: BPF prog-id=160 op=LOAD Jan 13 23:48:50.083000 audit[3448]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=3437 pid=3448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:50.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132633964343862393366656235343562346432663532363065383034 Jan 13 23:48:50.085080 kubelet[2897]: E0113 23:48:50.085029 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.085080 kubelet[2897]: W0113 23:48:50.085040 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.085080 kubelet[2897]: E0113 23:48:50.085060 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.085327 kubelet[2897]: E0113 23:48:50.085306 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.085327 kubelet[2897]: W0113 23:48:50.085324 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.085369 kubelet[2897]: E0113 23:48:50.085341 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.085534 kubelet[2897]: E0113 23:48:50.085519 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.085534 kubelet[2897]: W0113 23:48:50.085533 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.085581 kubelet[2897]: E0113 23:48:50.085560 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.086669 kubelet[2897]: E0113 23:48:50.086371 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.086760 kubelet[2897]: W0113 23:48:50.086672 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.086760 kubelet[2897]: E0113 23:48:50.086732 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:50.087551 kubelet[2897]: E0113 23:48:50.087529 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.087551 kubelet[2897]: W0113 23:48:50.087545 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.087643 kubelet[2897]: E0113 23:48:50.087595 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.088757 kubelet[2897]: E0113 23:48:50.088400 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.088757 kubelet[2897]: W0113 23:48:50.088422 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.088757 kubelet[2897]: E0113 23:48:50.088475 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.088866 kubelet[2897]: E0113 23:48:50.088794 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.088866 kubelet[2897]: W0113 23:48:50.088808 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.088866 kubelet[2897]: E0113 23:48:50.088851 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.089136 kubelet[2897]: E0113 23:48:50.089100 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.089136 kubelet[2897]: W0113 23:48:50.089116 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.090016 kubelet[2897]: E0113 23:48:50.089208 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.090016 kubelet[2897]: E0113 23:48:50.089322 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.090016 kubelet[2897]: W0113 23:48:50.089332 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.090016 kubelet[2897]: E0113 23:48:50.089357 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:50.090016 kubelet[2897]: E0113 23:48:50.089705 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.090016 kubelet[2897]: W0113 23:48:50.089718 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.090016 kubelet[2897]: E0113 23:48:50.089740 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.090568 kubelet[2897]: E0113 23:48:50.090545 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.090598 kubelet[2897]: W0113 23:48:50.090568 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.090598 kubelet[2897]: E0113 23:48:50.090590 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.091062 kubelet[2897]: E0113 23:48:50.091041 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.091062 kubelet[2897]: W0113 23:48:50.091060 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.091128 kubelet[2897]: E0113 23:48:50.091099 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.091587 kubelet[2897]: E0113 23:48:50.091567 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.091587 kubelet[2897]: W0113 23:48:50.091583 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.091723 kubelet[2897]: E0113 23:48:50.091706 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.092061 kubelet[2897]: E0113 23:48:50.092041 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.092108 kubelet[2897]: W0113 23:48:50.092062 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.092108 kubelet[2897]: E0113 23:48:50.092091 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:50.092525 kubelet[2897]: E0113 23:48:50.092505 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.092525 kubelet[2897]: W0113 23:48:50.092524 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.092583 kubelet[2897]: E0113 23:48:50.092552 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.092951 kubelet[2897]: E0113 23:48:50.092901 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.092951 kubelet[2897]: W0113 23:48:50.092947 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.093055 kubelet[2897]: E0113 23:48:50.093026 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.093176 kubelet[2897]: E0113 23:48:50.093159 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.093176 kubelet[2897]: W0113 23:48:50.093174 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.093320 kubelet[2897]: E0113 23:48:50.093251 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.093455 kubelet[2897]: E0113 23:48:50.093438 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.093519 kubelet[2897]: W0113 23:48:50.093453 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.093519 kubelet[2897]: E0113 23:48:50.093493 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.093715 kubelet[2897]: E0113 23:48:50.093699 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.093763 kubelet[2897]: W0113 23:48:50.093718 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.093763 kubelet[2897]: E0113 23:48:50.093736 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:50.094002 kubelet[2897]: E0113 23:48:50.093983 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.094002 kubelet[2897]: W0113 23:48:50.093999 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.094075 kubelet[2897]: E0113 23:48:50.094009 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.108227 kubelet[2897]: E0113 23:48:50.108191 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:50.108227 kubelet[2897]: W0113 23:48:50.108214 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:50.108227 kubelet[2897]: E0113 23:48:50.108234 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:50.118930 containerd[1676]: time="2026-01-13T23:48:50.118876094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7wwq4,Uid:508bd326-ca54-4936-834a-a9729deed647,Namespace:calico-system,Attempt:0,} returns sandbox id \"a2c9d48b93feb545b4d2f5260e80411b13d2c7d73aad7f7397d228c0feb5a95f\"" Jan 13 23:48:50.519000 audit[3502]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3502 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:50.519000 audit[3502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd4f04ed0 a2=0 a3=1 items=0 ppid=3065 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:50.519000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:50.526000 audit[3502]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3502 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:48:50.526000 audit[3502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd4f04ed0 a2=0 a3=1 items=0 ppid=3065 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:50.526000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:48:51.258710 kubelet[2897]: E0113 23:48:51.258641 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:48:51.259987 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3055842460.mount: Deactivated successfully. Jan 13 23:48:51.584897 containerd[1676]: time="2026-01-13T23:48:51.584654872Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:51.586608 containerd[1676]: time="2026-01-13T23:48:51.586567561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 13 23:48:51.587465 containerd[1676]: time="2026-01-13T23:48:51.587439406Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:51.591988 containerd[1676]: time="2026-01-13T23:48:51.591410065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:51.592113 containerd[1676]: time="2026-01-13T23:48:51.592092348Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.653159313s" Jan 13 23:48:51.592177 containerd[1676]: time="2026-01-13T23:48:51.592164829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 13 23:48:51.594102 containerd[1676]: time="2026-01-13T23:48:51.594075318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 13 23:48:51.602992 containerd[1676]: time="2026-01-13T23:48:51.602926642Z" level=info msg="CreateContainer within sandbox \"e610e0867af4789de739dbf16044de842e91e6ddf8318e74784c9d353f15361a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 13 23:48:51.612273 containerd[1676]: time="2026-01-13T23:48:51.612241687Z" level=info msg="Container 8a9f2a4c9b10086ca5f607012ea2ce1beda55a1a646eb4ad39cd9d6249ba83d9: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:48:51.623578 containerd[1676]: time="2026-01-13T23:48:51.623443342Z" level=info msg="CreateContainer within sandbox \"e610e0867af4789de739dbf16044de842e91e6ddf8318e74784c9d353f15361a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8a9f2a4c9b10086ca5f607012ea2ce1beda55a1a646eb4ad39cd9d6249ba83d9\"" Jan 13 23:48:51.623952 containerd[1676]: time="2026-01-13T23:48:51.623928505Z" level=info msg="StartContainer for \"8a9f2a4c9b10086ca5f607012ea2ce1beda55a1a646eb4ad39cd9d6249ba83d9\"" Jan 13 23:48:51.625230 containerd[1676]: time="2026-01-13T23:48:51.625165711Z" level=info msg="connecting to shim 8a9f2a4c9b10086ca5f607012ea2ce1beda55a1a646eb4ad39cd9d6249ba83d9" address="unix:///run/containerd/s/13d8a80e52c1779ac4ce3aba1a121230dddd3bfb19081883be9a29934d371a03" protocol=ttrpc version=3 Jan 13 23:48:51.648177 systemd[1]: Started cri-containerd-8a9f2a4c9b10086ca5f607012ea2ce1beda55a1a646eb4ad39cd9d6249ba83d9.scope - libcontainer container 8a9f2a4c9b10086ca5f607012ea2ce1beda55a1a646eb4ad39cd9d6249ba83d9. 
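The repeated driver-call.go and plugins.go errors above are the kubelet probing its FlexVolume plugin directory: it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and expects a JSON status object on stdout. Because the uds binary has not been installed yet (Calico's flexvol-driver init container copies it in later in this log), the call produces no output, and the empty string fails to unmarshal as JSON. A minimal sketch of what a FlexVolume driver is expected to print for init, per the documented call convention (hypothetical driver, not the Calico uds binary):

    #!/usr/bin/env python3
    # Hypothetical FlexVolume driver sketch. The kubelet runs "<driver> init"
    # and parses stdout as JSON; empty stdout is exactly what produces the
    # "unexpected end of JSON input" errors in the log above.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # Report success and declare that this driver has no attach/detach support.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        # Anything this sketch does not handle is reported as not supported.
        print(json.dumps({"status": "Not supported", "message": "operation %r not implemented" % op}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())

Once the flexvol-driver container started further down in this log drops a working uds binary into that directory, the probes stop failing.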
Jan 13 23:48:51.657000 audit: BPF prog-id=161 op=LOAD Jan 13 23:48:51.657000 audit: BPF prog-id=162 op=LOAD Jan 13 23:48:51.657000 audit[3513]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3336 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:51.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861396632613463396231303038366361356636303730313265613263 Jan 13 23:48:51.657000 audit: BPF prog-id=162 op=UNLOAD Jan 13 23:48:51.657000 audit[3513]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:51.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861396632613463396231303038366361356636303730313265613263 Jan 13 23:48:51.657000 audit: BPF prog-id=163 op=LOAD Jan 13 23:48:51.657000 audit[3513]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3336 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:51.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861396632613463396231303038366361356636303730313265613263 Jan 13 23:48:51.657000 audit: BPF prog-id=164 op=LOAD Jan 13 23:48:51.657000 audit[3513]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3336 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:51.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861396632613463396231303038366361356636303730313265613263 Jan 13 23:48:51.657000 audit: BPF prog-id=164 op=UNLOAD Jan 13 23:48:51.657000 audit[3513]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:51.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861396632613463396231303038366361356636303730313265613263 Jan 13 23:48:51.657000 audit: BPF prog-id=163 op=UNLOAD Jan 13 23:48:51.657000 audit[3513]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:51.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861396632613463396231303038366361356636303730313265613263 Jan 13 23:48:51.657000 audit: BPF prog-id=165 op=LOAD Jan 13 23:48:51.657000 audit[3513]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3336 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:51.657000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3861396632613463396231303038366361356636303730313265613263 Jan 13 23:48:51.687303 containerd[1676]: time="2026-01-13T23:48:51.687230256Z" level=info msg="StartContainer for \"8a9f2a4c9b10086ca5f607012ea2ce1beda55a1a646eb4ad39cd9d6249ba83d9\" returns successfully" Jan 13 23:48:52.384375 kubelet[2897]: E0113 23:48:52.384342 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.384852 kubelet[2897]: W0113 23:48:52.384524 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.384852 kubelet[2897]: E0113 23:48:52.384550 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.385162 kubelet[2897]: E0113 23:48:52.385147 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.385267 kubelet[2897]: W0113 23:48:52.385221 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.385422 kubelet[2897]: E0113 23:48:52.385306 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.385616 kubelet[2897]: E0113 23:48:52.385576 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.385616 kubelet[2897]: W0113 23:48:52.385591 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.385616 kubelet[2897]: E0113 23:48:52.385601 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:52.385971 kubelet[2897]: E0113 23:48:52.385948 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.386116 kubelet[2897]: W0113 23:48:52.386040 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.386116 kubelet[2897]: E0113 23:48:52.386056 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.386555 kubelet[2897]: E0113 23:48:52.386482 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.386555 kubelet[2897]: W0113 23:48:52.386496 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.386555 kubelet[2897]: E0113 23:48:52.386506 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.386795 kubelet[2897]: E0113 23:48:52.386782 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.387328 kubelet[2897]: W0113 23:48:52.386818 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.387328 kubelet[2897]: E0113 23:48:52.386830 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.387584 kubelet[2897]: E0113 23:48:52.387469 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.387683 kubelet[2897]: W0113 23:48:52.387666 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.387755 kubelet[2897]: E0113 23:48:52.387743 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.388014 kubelet[2897]: E0113 23:48:52.388000 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.388187 kubelet[2897]: W0113 23:48:52.388068 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.388187 kubelet[2897]: E0113 23:48:52.388085 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:52.388331 kubelet[2897]: E0113 23:48:52.388319 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.388380 kubelet[2897]: W0113 23:48:52.388371 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.388426 kubelet[2897]: E0113 23:48:52.388417 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.388700 kubelet[2897]: E0113 23:48:52.388612 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.388700 kubelet[2897]: W0113 23:48:52.388623 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.388700 kubelet[2897]: E0113 23:48:52.388633 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.388857 kubelet[2897]: E0113 23:48:52.388845 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.388928 kubelet[2897]: W0113 23:48:52.388914 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.389100 kubelet[2897]: E0113 23:48:52.389005 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.389200 kubelet[2897]: E0113 23:48:52.389188 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.389251 kubelet[2897]: W0113 23:48:52.389241 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.389315 kubelet[2897]: E0113 23:48:52.389304 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.389595 kubelet[2897]: E0113 23:48:52.389503 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.389595 kubelet[2897]: W0113 23:48:52.389515 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.389595 kubelet[2897]: E0113 23:48:52.389525 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:52.389748 kubelet[2897]: E0113 23:48:52.389736 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.389807 kubelet[2897]: W0113 23:48:52.389796 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.389861 kubelet[2897]: E0113 23:48:52.389850 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.390089 kubelet[2897]: E0113 23:48:52.390076 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.390152 kubelet[2897]: W0113 23:48:52.390142 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.390198 kubelet[2897]: E0113 23:48:52.390189 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.404806 kubelet[2897]: E0113 23:48:52.404769 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.404806 kubelet[2897]: W0113 23:48:52.404807 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.405028 kubelet[2897]: E0113 23:48:52.404825 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.405110 kubelet[2897]: E0113 23:48:52.405096 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.405142 kubelet[2897]: W0113 23:48:52.405110 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.405142 kubelet[2897]: E0113 23:48:52.405126 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.405338 kubelet[2897]: E0113 23:48:52.405322 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.405338 kubelet[2897]: W0113 23:48:52.405335 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.405407 kubelet[2897]: E0113 23:48:52.405350 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:52.405576 kubelet[2897]: E0113 23:48:52.405539 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.405576 kubelet[2897]: W0113 23:48:52.405552 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.405576 kubelet[2897]: E0113 23:48:52.405566 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.405717 kubelet[2897]: E0113 23:48:52.405705 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.405717 kubelet[2897]: W0113 23:48:52.405715 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.405854 kubelet[2897]: E0113 23:48:52.405728 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.405897 kubelet[2897]: E0113 23:48:52.405861 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.405897 kubelet[2897]: W0113 23:48:52.405868 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.405897 kubelet[2897]: E0113 23:48:52.405880 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.406053 kubelet[2897]: E0113 23:48:52.406042 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.406053 kubelet[2897]: W0113 23:48:52.406052 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.406118 kubelet[2897]: E0113 23:48:52.406066 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.406582 kubelet[2897]: E0113 23:48:52.406562 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.406582 kubelet[2897]: W0113 23:48:52.406582 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.406772 kubelet[2897]: E0113 23:48:52.406620 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:52.407182 kubelet[2897]: E0113 23:48:52.407108 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.407182 kubelet[2897]: W0113 23:48:52.407144 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.407485 kubelet[2897]: E0113 23:48:52.407439 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.407857 kubelet[2897]: E0113 23:48:52.407841 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.407953 kubelet[2897]: W0113 23:48:52.407940 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.408194 kubelet[2897]: E0113 23:48:52.408162 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.408417 kubelet[2897]: E0113 23:48:52.408389 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.408417 kubelet[2897]: W0113 23:48:52.408402 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.409239 kubelet[2897]: E0113 23:48:52.409113 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.409490 kubelet[2897]: E0113 23:48:52.409471 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.409587 kubelet[2897]: W0113 23:48:52.409571 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.409680 kubelet[2897]: E0113 23:48:52.409669 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.409911 kubelet[2897]: E0113 23:48:52.409894 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.409953 kubelet[2897]: W0113 23:48:52.409912 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.409953 kubelet[2897]: E0113 23:48:52.409930 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:52.410523 kubelet[2897]: E0113 23:48:52.410500 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.410523 kubelet[2897]: W0113 23:48:52.410517 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.410594 kubelet[2897]: E0113 23:48:52.410535 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.410771 kubelet[2897]: E0113 23:48:52.410751 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.410771 kubelet[2897]: W0113 23:48:52.410764 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.410829 kubelet[2897]: E0113 23:48:52.410780 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.411188 kubelet[2897]: E0113 23:48:52.411151 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.411422 kubelet[2897]: W0113 23:48:52.411253 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.411422 kubelet[2897]: E0113 23:48:52.411309 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.411596 kubelet[2897]: E0113 23:48:52.411572 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.411596 kubelet[2897]: W0113 23:48:52.411588 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.411656 kubelet[2897]: E0113 23:48:52.411603 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 13 23:48:52.412033 kubelet[2897]: E0113 23:48:52.411979 2897 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 13 23:48:52.412033 kubelet[2897]: W0113 23:48:52.411993 2897 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 13 23:48:52.412033 kubelet[2897]: E0113 23:48:52.412005 2897 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 13 23:48:52.832613 containerd[1676]: time="2026-01-13T23:48:52.832554558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:52.833599 containerd[1676]: time="2026-01-13T23:48:52.833484243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 13 23:48:52.834686 containerd[1676]: time="2026-01-13T23:48:52.834616168Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:52.838260 containerd[1676]: time="2026-01-13T23:48:52.838225346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:52.838992 containerd[1676]: time="2026-01-13T23:48:52.838921110Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.24467619s" Jan 13 23:48:52.839161 containerd[1676]: time="2026-01-13T23:48:52.838954270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 13 23:48:52.842315 containerd[1676]: time="2026-01-13T23:48:52.842275646Z" level=info msg="CreateContainer within sandbox \"a2c9d48b93feb545b4d2f5260e80411b13d2c7d73aad7f7397d228c0feb5a95f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 13 23:48:52.854998 containerd[1676]: time="2026-01-13T23:48:52.854090984Z" level=info msg="Container a4d49a2a4f9c15ca1dbc3c5bc0fc00c06e57707fd6f2c045d49c89da1095bd84: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:48:52.862741 containerd[1676]: time="2026-01-13T23:48:52.862692666Z" level=info msg="CreateContainer within sandbox \"a2c9d48b93feb545b4d2f5260e80411b13d2c7d73aad7f7397d228c0feb5a95f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a4d49a2a4f9c15ca1dbc3c5bc0fc00c06e57707fd6f2c045d49c89da1095bd84\"" Jan 13 23:48:52.863829 containerd[1676]: time="2026-01-13T23:48:52.863792072Z" level=info msg="StartContainer for \"a4d49a2a4f9c15ca1dbc3c5bc0fc00c06e57707fd6f2c045d49c89da1095bd84\"" Jan 13 23:48:52.865971 containerd[1676]: time="2026-01-13T23:48:52.865742081Z" level=info msg="connecting to shim a4d49a2a4f9c15ca1dbc3c5bc0fc00c06e57707fd6f2c045d49c89da1095bd84" address="unix:///run/containerd/s/30133a9a7c87b1636240b25ccb840d163e6da575b081d6cbba81626518aecab8" protocol=ttrpc version=3 Jan 13 23:48:52.886137 systemd[1]: Started cri-containerd-a4d49a2a4f9c15ca1dbc3c5bc0fc00c06e57707fd6f2c045d49c89da1095bd84.scope - libcontainer container a4d49a2a4f9c15ca1dbc3c5bc0fc00c06e57707fd6f2c045d49c89da1095bd84. 
Jan 13 23:48:52.938000 audit: BPF prog-id=166 op=LOAD Jan 13 23:48:52.940992 kernel: kauditd_printk_skb: 86 callbacks suppressed Jan 13 23:48:52.941030 kernel: audit: type=1334 audit(1768348132.938:564): prog-id=166 op=LOAD Jan 13 23:48:52.938000 audit[3590]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3437 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:52.944949 kernel: audit: type=1300 audit(1768348132.938:564): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3437 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:52.945039 kernel: audit: type=1327 audit(1768348132.938:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134643439613261346639633135636131646263336335626330666330 Jan 13 23:48:52.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134643439613261346639633135636131646263336335626330666330 Jan 13 23:48:52.938000 audit: BPF prog-id=167 op=LOAD Jan 13 23:48:52.938000 audit[3590]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3437 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:52.951923 kernel: audit: type=1334 audit(1768348132.938:565): prog-id=167 op=LOAD Jan 13 23:48:52.952071 kernel: audit: type=1300 audit(1768348132.938:565): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3437 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:52.952101 kernel: audit: type=1327 audit(1768348132.938:565): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134643439613261346639633135636131646263336335626330666330 Jan 13 23:48:52.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134643439613261346639633135636131646263336335626330666330 Jan 13 23:48:52.939000 audit: BPF prog-id=167 op=UNLOAD Jan 13 23:48:52.939000 audit[3590]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3437 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:52.958706 kernel: audit: type=1334 audit(1768348132.939:566): prog-id=167 op=UNLOAD Jan 13 23:48:52.958820 kernel: audit: type=1300 
audit(1768348132.939:566): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3437 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:52.958840 kernel: audit: type=1327 audit(1768348132.939:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134643439613261346639633135636131646263336335626330666330 Jan 13 23:48:52.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134643439613261346639633135636131646263336335626330666330 Jan 13 23:48:52.939000 audit: BPF prog-id=166 op=UNLOAD Jan 13 23:48:52.939000 audit[3590]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3437 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:52.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134643439613261346639633135636131646263336335626330666330 Jan 13 23:48:52.962980 kernel: audit: type=1334 audit(1768348132.939:567): prog-id=166 op=UNLOAD Jan 13 23:48:52.939000 audit: BPF prog-id=168 op=LOAD Jan 13 23:48:52.939000 audit[3590]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3437 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:52.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6134643439613261346639633135636131646263336335626330666330 Jan 13 23:48:52.982178 containerd[1676]: time="2026-01-13T23:48:52.982131733Z" level=info msg="StartContainer for \"a4d49a2a4f9c15ca1dbc3c5bc0fc00c06e57707fd6f2c045d49c89da1095bd84\" returns successfully" Jan 13 23:48:52.997100 systemd[1]: cri-containerd-a4d49a2a4f9c15ca1dbc3c5bc0fc00c06e57707fd6f2c045d49c89da1095bd84.scope: Deactivated successfully. Jan 13 23:48:52.998949 containerd[1676]: time="2026-01-13T23:48:52.998813574Z" level=info msg="received container exit event container_id:\"a4d49a2a4f9c15ca1dbc3c5bc0fc00c06e57707fd6f2c045d49c89da1095bd84\" id:\"a4d49a2a4f9c15ca1dbc3c5bc0fc00c06e57707fd6f2c045d49c89da1095bd84\" pid:3603 exited_at:{seconds:1768348132 nanos:998323052}" Jan 13 23:48:53.002000 audit: BPF prog-id=168 op=UNLOAD Jan 13 23:48:53.021219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4d49a2a4f9c15ca1dbc3c5bc0fc00c06e57707fd6f2c045d49c89da1095bd84-rootfs.mount: Deactivated successfully. 
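The audit SYSCALL and PROCTITLE records threaded through these container starts carry the audited command line as a hex string, with NUL bytes separating the arguments. Decoding is a one-liner; for instance, the proctitle attached to the earlier iptables-restore events decodes to "iptables-restore -w 5 -W 100000 --noflush --counters", and the runc proctitles decode to the (truncated) "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/..." invocations that start each container. A small sketch:

    # Decode an audit PROCTITLE hex string back into the original argv.
    def decode_proctitle(hex_proctitle: str) -> str:
        raw = bytes.fromhex(hex_proctitle)
        # Arguments are NUL-separated in the kernel's proctitle record.
        return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    ))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters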
Jan 13 23:48:53.258010 kubelet[2897]: E0113 23:48:53.257914 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:48:53.330386 kubelet[2897]: I0113 23:48:53.330349 2897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 23:48:53.345992 kubelet[2897]: I0113 23:48:53.345897 2897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b6c97f4cc-vj9rv" podStartSLOduration=2.691213037 podStartE2EDuration="4.345878518s" podCreationTimestamp="2026-01-13 23:48:49 +0000 UTC" firstStartedPulling="2026-01-13 23:48:49.938526113 +0000 UTC m=+22.766868325" lastFinishedPulling="2026-01-13 23:48:51.593191594 +0000 UTC m=+24.421533806" observedRunningTime="2026-01-13 23:48:52.340683864 +0000 UTC m=+25.169026076" watchObservedRunningTime="2026-01-13 23:48:53.345878518 +0000 UTC m=+26.174220730" Jan 13 23:48:55.257659 kubelet[2897]: E0113 23:48:55.257612 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:48:57.257812 kubelet[2897]: E0113 23:48:57.257753 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:48:57.339172 containerd[1676]: time="2026-01-13T23:48:57.339094731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 13 23:48:59.258430 kubelet[2897]: E0113 23:48:59.258352 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:48:59.519120 containerd[1676]: time="2026-01-13T23:48:59.518972652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:59.522219 containerd[1676]: time="2026-01-13T23:48:59.522151627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 13 23:48:59.523342 containerd[1676]: time="2026-01-13T23:48:59.523306113Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:59.525807 containerd[1676]: time="2026-01-13T23:48:59.525753365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:48:59.526728 containerd[1676]: time="2026-01-13T23:48:59.526684049Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id 
\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.187549278s" Jan 13 23:48:59.526728 containerd[1676]: time="2026-01-13T23:48:59.526725170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 13 23:48:59.530008 containerd[1676]: time="2026-01-13T23:48:59.529975466Z" level=info msg="CreateContainer within sandbox \"a2c9d48b93feb545b4d2f5260e80411b13d2c7d73aad7f7397d228c0feb5a95f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 23:48:59.540796 containerd[1676]: time="2026-01-13T23:48:59.540223596Z" level=info msg="Container 1edb6efa87350307d861561985392a08326a6fa9fbb62047e49d5bd0df320782: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:48:59.543563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount580040425.mount: Deactivated successfully. Jan 13 23:48:59.552204 containerd[1676]: time="2026-01-13T23:48:59.552134055Z" level=info msg="CreateContainer within sandbox \"a2c9d48b93feb545b4d2f5260e80411b13d2c7d73aad7f7397d228c0feb5a95f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1edb6efa87350307d861561985392a08326a6fa9fbb62047e49d5bd0df320782\"" Jan 13 23:48:59.552805 containerd[1676]: time="2026-01-13T23:48:59.552774458Z" level=info msg="StartContainer for \"1edb6efa87350307d861561985392a08326a6fa9fbb62047e49d5bd0df320782\"" Jan 13 23:48:59.557331 containerd[1676]: time="2026-01-13T23:48:59.557235760Z" level=info msg="connecting to shim 1edb6efa87350307d861561985392a08326a6fa9fbb62047e49d5bd0df320782" address="unix:///run/containerd/s/30133a9a7c87b1636240b25ccb840d163e6da575b081d6cbba81626518aecab8" protocol=ttrpc version=3 Jan 13 23:48:59.580179 systemd[1]: Started cri-containerd-1edb6efa87350307d861561985392a08326a6fa9fbb62047e49d5bd0df320782.scope - libcontainer container 1edb6efa87350307d861561985392a08326a6fa9fbb62047e49d5bd0df320782. 
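The pod_startup_latency_tracker entry for calico-typha-6b6c97f4cc-vj9rv a little earlier is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span with the image-pull window (firstStartedPulling to lastFinishedPulling) taken out, since pull time does not count against the startup SLO. A quick check with the timestamps copied from that entry (nanoseconds rounded to microseconds):

    from datetime import datetime, timezone

    # Timestamps from the pod_startup_latency_tracker log entry above.
    created       = datetime(2026, 1, 13, 23, 48, 49, 0,      tzinfo=timezone.utc)
    first_pulling = datetime(2026, 1, 13, 23, 48, 49, 938526, tzinfo=timezone.utc)
    last_pulled   = datetime(2026, 1, 13, 23, 48, 51, 593192, tzinfo=timezone.utc)
    watch_running = datetime(2026, 1, 13, 23, 48, 53, 345879, tzinfo=timezone.utc)

    e2e = (watch_running - created).total_seconds()
    slo = e2e - (last_pulled - first_pulling).total_seconds()
    print(f"E2E ~ {e2e:.6f}s, SLO ~ {slo:.6f}s")
    # -> E2E ~ 4.345879s, SLO ~ 2.691213s  (log: 4.345878518s and 2.691213037)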
Jan 13 23:48:59.642000 audit: BPF prog-id=169 op=LOAD Jan 13 23:48:59.644027 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 13 23:48:59.644079 kernel: audit: type=1334 audit(1768348139.642:570): prog-id=169 op=LOAD Jan 13 23:48:59.642000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3437 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:59.647600 kernel: audit: type=1300 audit(1768348139.642:570): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3437 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:59.647709 kernel: audit: type=1327 audit(1768348139.642:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165646236656661383733353033303764383631353631393835333932 Jan 13 23:48:59.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165646236656661383733353033303764383631353631393835333932 Jan 13 23:48:59.642000 audit: BPF prog-id=170 op=LOAD Jan 13 23:48:59.651072 kernel: audit: type=1334 audit(1768348139.642:571): prog-id=170 op=LOAD Jan 13 23:48:59.642000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3437 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:59.654096 kernel: audit: type=1300 audit(1768348139.642:571): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3437 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:59.642000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165646236656661383733353033303764383631353631393835333932 Jan 13 23:48:59.657740 kernel: audit: type=1327 audit(1768348139.642:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165646236656661383733353033303764383631353631393835333932 Jan 13 23:48:59.657829 kernel: audit: type=1334 audit(1768348139.643:572): prog-id=170 op=UNLOAD Jan 13 23:48:59.643000 audit: BPF prog-id=170 op=UNLOAD Jan 13 23:48:59.643000 audit[3651]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3437 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:59.661373 kernel: audit: type=1300 
audit(1768348139.643:572): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3437 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:59.661438 kernel: audit: type=1327 audit(1768348139.643:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165646236656661383733353033303764383631353631393835333932 Jan 13 23:48:59.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165646236656661383733353033303764383631353631393835333932 Jan 13 23:48:59.643000 audit: BPF prog-id=169 op=UNLOAD Jan 13 23:48:59.643000 audit[3651]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3437 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:59.664981 kernel: audit: type=1334 audit(1768348139.643:573): prog-id=169 op=UNLOAD Jan 13 23:48:59.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165646236656661383733353033303764383631353631393835333932 Jan 13 23:48:59.643000 audit: BPF prog-id=171 op=LOAD Jan 13 23:48:59.643000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3437 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:48:59.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3165646236656661383733353033303764383631353631393835333932 Jan 13 23:48:59.672922 containerd[1676]: time="2026-01-13T23:48:59.672859408Z" level=info msg="StartContainer for \"1edb6efa87350307d861561985392a08326a6fa9fbb62047e49d5bd0df320782\" returns successfully" Jan 13 23:49:00.932253 containerd[1676]: time="2026-01-13T23:49:00.932205082Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 23:49:00.934067 systemd[1]: cri-containerd-1edb6efa87350307d861561985392a08326a6fa9fbb62047e49d5bd0df320782.scope: Deactivated successfully. Jan 13 23:49:00.934608 systemd[1]: cri-containerd-1edb6efa87350307d861561985392a08326a6fa9fbb62047e49d5bd0df320782.scope: Consumed 521ms CPU time, 192.8M memory peak, 165.9M written to disk. 
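The "failed to reload cni configuration" error just above is containerd reacting to the write of /etc/cni/net.d/calico-kubeconfig: the watched directory changed, but at that moment it still holds nothing containerd recognises as a network config (by default the scan looks for *.conf, *.conflist and *.json files), so the CNI plugin stays uninitialised until install-cni finishes writing the actual Calico conflist. A quick way to list what that scan would find (a sketch, assuming the default /etc/cni/net.d location and the default go-cni extension list):

    import glob

    # List the files containerd's CNI config scan would consider in /etc/cni/net.d.
    candidates = sorted(
        path
        for pattern in ("*.conf", "*.conflist", "*.json")
        for path in glob.glob(f"/etc/cni/net.d/{pattern}")
    )
    print(candidates or "no network config found in /etc/cni/net.d")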
Jan 13 23:49:00.935495 containerd[1676]: time="2026-01-13T23:49:00.935455778Z" level=info msg="received container exit event container_id:\"1edb6efa87350307d861561985392a08326a6fa9fbb62047e49d5bd0df320782\" id:\"1edb6efa87350307d861561985392a08326a6fa9fbb62047e49d5bd0df320782\" pid:3664 exited_at:{seconds:1768348140 nanos:935273777}" Jan 13 23:49:00.941000 audit: BPF prog-id=171 op=UNLOAD Jan 13 23:49:00.954746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1edb6efa87350307d861561985392a08326a6fa9fbb62047e49d5bd0df320782-rootfs.mount: Deactivated successfully. Jan 13 23:49:01.004707 kubelet[2897]: I0113 23:49:01.004586 2897 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 13 23:49:01.072516 kubelet[2897]: I0113 23:49:01.066647 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8svz\" (UniqueName: \"kubernetes.io/projected/4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f-kube-api-access-h8svz\") pod \"calico-apiserver-5ddfc947b7-h454c\" (UID: \"4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f\") " pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" Jan 13 23:49:01.072516 kubelet[2897]: I0113 23:49:01.066683 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1568cf69-d227-4d6c-8d10-61ba58db7902-config\") pod \"goldmane-666569f655-kd8rx\" (UID: \"1568cf69-d227-4d6c-8d10-61ba58db7902\") " pod="calico-system/goldmane-666569f655-kd8rx" Jan 13 23:49:01.072516 kubelet[2897]: I0113 23:49:01.066701 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1568cf69-d227-4d6c-8d10-61ba58db7902-goldmane-ca-bundle\") pod \"goldmane-666569f655-kd8rx\" (UID: \"1568cf69-d227-4d6c-8d10-61ba58db7902\") " pod="calico-system/goldmane-666569f655-kd8rx" Jan 13 23:49:01.072516 kubelet[2897]: I0113 23:49:01.066720 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85ncl\" (UniqueName: \"kubernetes.io/projected/1568cf69-d227-4d6c-8d10-61ba58db7902-kube-api-access-85ncl\") pod \"goldmane-666569f655-kd8rx\" (UID: \"1568cf69-d227-4d6c-8d10-61ba58db7902\") " pod="calico-system/goldmane-666569f655-kd8rx" Jan 13 23:49:01.072516 kubelet[2897]: I0113 23:49:01.066738 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljfcm\" (UniqueName: \"kubernetes.io/projected/a8c70103-7cce-4b73-89d9-3436fcb8e708-kube-api-access-ljfcm\") pod \"coredns-668d6bf9bc-n95xs\" (UID: \"a8c70103-7cce-4b73-89d9-3436fcb8e708\") " pod="kube-system/coredns-668d6bf9bc-n95xs" Jan 13 23:49:01.051464 systemd[1]: Created slice kubepods-besteffort-podfeb12ef9_da4b_41c3_8609_097b9429383b.slice - libcontainer container kubepods-besteffort-podfeb12ef9_da4b_41c3_8609_097b9429383b.slice. 
Jan 13 23:49:01.072774 kubelet[2897]: I0113 23:49:01.066755 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebdd74f1-2523-4586-bc94-5eec26645924-whisker-ca-bundle\") pod \"whisker-6d5c6fcc78-zcpg2\" (UID: \"ebdd74f1-2523-4586-bc94-5eec26645924\") " pod="calico-system/whisker-6d5c6fcc78-zcpg2" Jan 13 23:49:01.072774 kubelet[2897]: I0113 23:49:01.066774 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f-calico-apiserver-certs\") pod \"calico-apiserver-5ddfc947b7-h454c\" (UID: \"4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f\") " pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" Jan 13 23:49:01.072774 kubelet[2897]: I0113 23:49:01.066791 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzspl\" (UniqueName: \"kubernetes.io/projected/feb12ef9-da4b-41c3-8609-097b9429383b-kube-api-access-hzspl\") pod \"calico-apiserver-5ddfc947b7-qj4hz\" (UID: \"feb12ef9-da4b-41c3-8609-097b9429383b\") " pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" Jan 13 23:49:01.072774 kubelet[2897]: I0113 23:49:01.066811 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ebdd74f1-2523-4586-bc94-5eec26645924-whisker-backend-key-pair\") pod \"whisker-6d5c6fcc78-zcpg2\" (UID: \"ebdd74f1-2523-4586-bc94-5eec26645924\") " pod="calico-system/whisker-6d5c6fcc78-zcpg2" Jan 13 23:49:01.072774 kubelet[2897]: I0113 23:49:01.066835 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlgx2\" (UniqueName: \"kubernetes.io/projected/afe7c376-9383-4b14-9626-1a508d91c894-kube-api-access-vlgx2\") pod \"coredns-668d6bf9bc-l7njz\" (UID: \"afe7c376-9383-4b14-9626-1a508d91c894\") " pod="kube-system/coredns-668d6bf9bc-l7njz" Jan 13 23:49:01.060762 systemd[1]: Created slice kubepods-burstable-podafe7c376_9383_4b14_9626_1a508d91c894.slice - libcontainer container kubepods-burstable-podafe7c376_9383_4b14_9626_1a508d91c894.slice. 
Jan 13 23:49:01.072930 kubelet[2897]: I0113 23:49:01.066850 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8c70103-7cce-4b73-89d9-3436fcb8e708-config-volume\") pod \"coredns-668d6bf9bc-n95xs\" (UID: \"a8c70103-7cce-4b73-89d9-3436fcb8e708\") " pod="kube-system/coredns-668d6bf9bc-n95xs" Jan 13 23:49:01.072930 kubelet[2897]: I0113 23:49:01.066867 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/705f5b22-6117-46b0-94d6-546618492a26-tigera-ca-bundle\") pod \"calico-kube-controllers-65d475c445-nfbnr\" (UID: \"705f5b22-6117-46b0-94d6-546618492a26\") " pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" Jan 13 23:49:01.072930 kubelet[2897]: I0113 23:49:01.066882 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5nz8\" (UniqueName: \"kubernetes.io/projected/705f5b22-6117-46b0-94d6-546618492a26-kube-api-access-d5nz8\") pod \"calico-kube-controllers-65d475c445-nfbnr\" (UID: \"705f5b22-6117-46b0-94d6-546618492a26\") " pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" Jan 13 23:49:01.072930 kubelet[2897]: I0113 23:49:01.066900 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw6v6\" (UniqueName: \"kubernetes.io/projected/ebdd74f1-2523-4586-bc94-5eec26645924-kube-api-access-vw6v6\") pod \"whisker-6d5c6fcc78-zcpg2\" (UID: \"ebdd74f1-2523-4586-bc94-5eec26645924\") " pod="calico-system/whisker-6d5c6fcc78-zcpg2" Jan 13 23:49:01.072930 kubelet[2897]: I0113 23:49:01.066920 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/afe7c376-9383-4b14-9626-1a508d91c894-config-volume\") pod \"coredns-668d6bf9bc-l7njz\" (UID: \"afe7c376-9383-4b14-9626-1a508d91c894\") " pod="kube-system/coredns-668d6bf9bc-l7njz" Jan 13 23:49:01.070026 systemd[1]: Created slice kubepods-burstable-poda8c70103_7cce_4b73_89d9_3436fcb8e708.slice - libcontainer container kubepods-burstable-poda8c70103_7cce_4b73_89d9_3436fcb8e708.slice. Jan 13 23:49:01.073105 kubelet[2897]: I0113 23:49:01.066934 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/feb12ef9-da4b-41c3-8609-097b9429383b-calico-apiserver-certs\") pod \"calico-apiserver-5ddfc947b7-qj4hz\" (UID: \"feb12ef9-da4b-41c3-8609-097b9429383b\") " pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" Jan 13 23:49:01.073105 kubelet[2897]: I0113 23:49:01.066950 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1568cf69-d227-4d6c-8d10-61ba58db7902-goldmane-key-pair\") pod \"goldmane-666569f655-kd8rx\" (UID: \"1568cf69-d227-4d6c-8d10-61ba58db7902\") " pod="calico-system/goldmane-666569f655-kd8rx" Jan 13 23:49:01.078364 systemd[1]: Created slice kubepods-besteffort-pod4f1e13c3_2a68_4c84_a7e2_50f4d7a91f6f.slice - libcontainer container kubepods-besteffort-pod4f1e13c3_2a68_4c84_a7e2_50f4d7a91f6f.slice. Jan 13 23:49:01.085825 systemd[1]: Created slice kubepods-besteffort-pod705f5b22_6117_46b0_94d6_546618492a26.slice - libcontainer container kubepods-besteffort-pod705f5b22_6117_46b0_94d6_546618492a26.slice. 
Jan 13 23:49:01.091700 systemd[1]: Created slice kubepods-besteffort-podebdd74f1_2523_4586_bc94_5eec26645924.slice - libcontainer container kubepods-besteffort-podebdd74f1_2523_4586_bc94_5eec26645924.slice. Jan 13 23:49:01.095878 systemd[1]: Created slice kubepods-besteffort-pod1568cf69_d227_4d6c_8d10_61ba58db7902.slice - libcontainer container kubepods-besteffort-pod1568cf69_d227_4d6c_8d10_61ba58db7902.slice. Jan 13 23:49:01.263419 systemd[1]: Created slice kubepods-besteffort-pod9767bcb1_e61e_4a0b_9b29_2c0eaa4146c5.slice - libcontainer container kubepods-besteffort-pod9767bcb1_e61e_4a0b_9b29_2c0eaa4146c5.slice. Jan 13 23:49:01.353068 containerd[1676]: time="2026-01-13T23:49:01.353012031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8bgtx,Uid:9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5,Namespace:calico-system,Attempt:0,}" Jan 13 23:49:01.356711 containerd[1676]: time="2026-01-13T23:49:01.356663769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddfc947b7-qj4hz,Uid:feb12ef9-da4b-41c3-8609-097b9429383b,Namespace:calico-apiserver,Attempt:0,}" Jan 13 23:49:01.365683 containerd[1676]: time="2026-01-13T23:49:01.365606573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l7njz,Uid:afe7c376-9383-4b14-9626-1a508d91c894,Namespace:kube-system,Attempt:0,}" Jan 13 23:49:01.373711 containerd[1676]: time="2026-01-13T23:49:01.373669413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n95xs,Uid:a8c70103-7cce-4b73-89d9-3436fcb8e708,Namespace:kube-system,Attempt:0,}" Jan 13 23:49:01.382561 containerd[1676]: time="2026-01-13T23:49:01.382501896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddfc947b7-h454c,Uid:4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f,Namespace:calico-apiserver,Attempt:0,}" Jan 13 23:49:01.389467 containerd[1676]: time="2026-01-13T23:49:01.389403370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65d475c445-nfbnr,Uid:705f5b22-6117-46b0-94d6-546618492a26,Namespace:calico-system,Attempt:0,}" Jan 13 23:49:01.395614 containerd[1676]: time="2026-01-13T23:49:01.395543481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d5c6fcc78-zcpg2,Uid:ebdd74f1-2523-4586-bc94-5eec26645924,Namespace:calico-system,Attempt:0,}" Jan 13 23:49:01.398363 containerd[1676]: time="2026-01-13T23:49:01.398282734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kd8rx,Uid:1568cf69-d227-4d6c-8d10-61ba58db7902,Namespace:calico-system,Attempt:0,}" Jan 13 23:49:02.559104 containerd[1676]: time="2026-01-13T23:49:02.558936922Z" level=error msg="Failed to destroy network for sandbox \"f362ae058eb39b8433d008fa40cff6328423f1c4d2d165aedc75bd281488a423\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.562177 containerd[1676]: time="2026-01-13T23:49:02.562084978Z" level=error msg="Failed to destroy network for sandbox \"16b115ea097bbe31cf15ef244018d17476fd165555835b245743d59411777114\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.562332 containerd[1676]: time="2026-01-13T23:49:02.562305179Z" level=error msg="Failed to destroy network for sandbox 
\"c796559b6b941f3ebab4ab296270946f7f85e715d9fd5283a61d97b3a89b838f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.562726 containerd[1676]: time="2026-01-13T23:49:02.562677221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8bgtx,Uid:9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f362ae058eb39b8433d008fa40cff6328423f1c4d2d165aedc75bd281488a423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.563979 kubelet[2897]: E0113 23:49:02.563016 2897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f362ae058eb39b8433d008fa40cff6328423f1c4d2d165aedc75bd281488a423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.563979 kubelet[2897]: E0113 23:49:02.563512 2897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f362ae058eb39b8433d008fa40cff6328423f1c4d2d165aedc75bd281488a423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8bgtx" Jan 13 23:49:02.563979 kubelet[2897]: E0113 23:49:02.563537 2897 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f362ae058eb39b8433d008fa40cff6328423f1c4d2d165aedc75bd281488a423\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8bgtx" Jan 13 23:49:02.564872 kubelet[2897]: E0113 23:49:02.563589 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8bgtx_calico-system(9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8bgtx_calico-system(9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f362ae058eb39b8433d008fa40cff6328423f1c4d2d165aedc75bd281488a423\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:49:02.566621 containerd[1676]: time="2026-01-13T23:49:02.566509759Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kd8rx,Uid:1568cf69-d227-4d6c-8d10-61ba58db7902,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"16b115ea097bbe31cf15ef244018d17476fd165555835b245743d59411777114\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.566968 kubelet[2897]: E0113 23:49:02.566927 2897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16b115ea097bbe31cf15ef244018d17476fd165555835b245743d59411777114\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.567098 kubelet[2897]: E0113 23:49:02.567076 2897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16b115ea097bbe31cf15ef244018d17476fd165555835b245743d59411777114\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-kd8rx" Jan 13 23:49:02.567583 kubelet[2897]: E0113 23:49:02.567155 2897 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16b115ea097bbe31cf15ef244018d17476fd165555835b245743d59411777114\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-kd8rx" Jan 13 23:49:02.567583 kubelet[2897]: E0113 23:49:02.567495 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-kd8rx_calico-system(1568cf69-d227-4d6c-8d10-61ba58db7902)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-kd8rx_calico-system(1568cf69-d227-4d6c-8d10-61ba58db7902)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16b115ea097bbe31cf15ef244018d17476fd165555835b245743d59411777114\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-kd8rx" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" Jan 13 23:49:02.570147 containerd[1676]: time="2026-01-13T23:49:02.570108417Z" level=error msg="Failed to destroy network for sandbox \"00c95035bf25f00de836204dd5c4a0ce74057dc6a8f3b337f560bf01685211c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.572460 containerd[1676]: time="2026-01-13T23:49:02.572110147Z" level=error msg="Failed to destroy network for sandbox \"553f514956c60eac2e3d2244054a378dba38ebcc52eccde9f8791bb6d498e88b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.572460 containerd[1676]: time="2026-01-13T23:49:02.572235868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l7njz,Uid:afe7c376-9383-4b14-9626-1a508d91c894,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c796559b6b941f3ebab4ab296270946f7f85e715d9fd5283a61d97b3a89b838f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.572615 kubelet[2897]: E0113 23:49:02.572470 2897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c796559b6b941f3ebab4ab296270946f7f85e715d9fd5283a61d97b3a89b838f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.572615 kubelet[2897]: E0113 23:49:02.572525 2897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c796559b6b941f3ebab4ab296270946f7f85e715d9fd5283a61d97b3a89b838f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l7njz" Jan 13 23:49:02.572615 kubelet[2897]: E0113 23:49:02.572547 2897 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c796559b6b941f3ebab4ab296270946f7f85e715d9fd5283a61d97b3a89b838f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-l7njz" Jan 13 23:49:02.572705 kubelet[2897]: E0113 23:49:02.572580 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-l7njz_kube-system(afe7c376-9383-4b14-9626-1a508d91c894)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-l7njz_kube-system(afe7c376-9383-4b14-9626-1a508d91c894)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c796559b6b941f3ebab4ab296270946f7f85e715d9fd5283a61d97b3a89b838f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-l7njz" podUID="afe7c376-9383-4b14-9626-1a508d91c894" Jan 13 23:49:02.573507 containerd[1676]: time="2026-01-13T23:49:02.573423353Z" level=error msg="Failed to destroy network for sandbox \"f133b0d7adcafd1b66498736f1686f4cd27f2d9922153f14c045a8ddff4c6007\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.573765 containerd[1676]: time="2026-01-13T23:49:02.573736395Z" level=error msg="Failed to destroy network for sandbox \"b9711504e949d88bfbef921de4a50d92cf9571be0d55fcfbf072350cf31177e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.576152 containerd[1676]: time="2026-01-13T23:49:02.576107847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d5c6fcc78-zcpg2,Uid:ebdd74f1-2523-4586-bc94-5eec26645924,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"00c95035bf25f00de836204dd5c4a0ce74057dc6a8f3b337f560bf01685211c3\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.576839 kubelet[2897]: E0113 23:49:02.576405 2897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00c95035bf25f00de836204dd5c4a0ce74057dc6a8f3b337f560bf01685211c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.576839 kubelet[2897]: E0113 23:49:02.576455 2897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00c95035bf25f00de836204dd5c4a0ce74057dc6a8f3b337f560bf01685211c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d5c6fcc78-zcpg2" Jan 13 23:49:02.576839 kubelet[2897]: E0113 23:49:02.576473 2897 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00c95035bf25f00de836204dd5c4a0ce74057dc6a8f3b337f560bf01685211c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6d5c6fcc78-zcpg2" Jan 13 23:49:02.577393 kubelet[2897]: E0113 23:49:02.576505 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6d5c6fcc78-zcpg2_calico-system(ebdd74f1-2523-4586-bc94-5eec26645924)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6d5c6fcc78-zcpg2_calico-system(ebdd74f1-2523-4586-bc94-5eec26645924)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00c95035bf25f00de836204dd5c4a0ce74057dc6a8f3b337f560bf01685211c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6d5c6fcc78-zcpg2" podUID="ebdd74f1-2523-4586-bc94-5eec26645924" Jan 13 23:49:02.577697 containerd[1676]: time="2026-01-13T23:49:02.577098532Z" level=error msg="Failed to destroy network for sandbox \"8069615f4825a983c2046d0ca748b5452b3301ef8af1e543c918133a817f9e9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.583038 containerd[1676]: time="2026-01-13T23:49:02.582943240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddfc947b7-h454c,Uid:4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"553f514956c60eac2e3d2244054a378dba38ebcc52eccde9f8791bb6d498e88b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.583208 kubelet[2897]: E0113 23:49:02.583150 2897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"553f514956c60eac2e3d2244054a378dba38ebcc52eccde9f8791bb6d498e88b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.583551 kubelet[2897]: E0113 23:49:02.583227 2897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"553f514956c60eac2e3d2244054a378dba38ebcc52eccde9f8791bb6d498e88b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" Jan 13 23:49:02.583551 kubelet[2897]: E0113 23:49:02.583245 2897 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"553f514956c60eac2e3d2244054a378dba38ebcc52eccde9f8791bb6d498e88b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" Jan 13 23:49:02.583551 kubelet[2897]: E0113 23:49:02.583287 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ddfc947b7-h454c_calico-apiserver(4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ddfc947b7-h454c_calico-apiserver(4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"553f514956c60eac2e3d2244054a378dba38ebcc52eccde9f8791bb6d498e88b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" podUID="4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f" Jan 13 23:49:02.584312 containerd[1676]: time="2026-01-13T23:49:02.584188486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65d475c445-nfbnr,Uid:705f5b22-6117-46b0-94d6-546618492a26,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f133b0d7adcafd1b66498736f1686f4cd27f2d9922153f14c045a8ddff4c6007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.584436 kubelet[2897]: E0113 23:49:02.584344 2897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f133b0d7adcafd1b66498736f1686f4cd27f2d9922153f14c045a8ddff4c6007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.584436 kubelet[2897]: E0113 23:49:02.584377 2897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f133b0d7adcafd1b66498736f1686f4cd27f2d9922153f14c045a8ddff4c6007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" Jan 13 23:49:02.584436 kubelet[2897]: E0113 23:49:02.584392 2897 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f133b0d7adcafd1b66498736f1686f4cd27f2d9922153f14c045a8ddff4c6007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" Jan 13 23:49:02.584950 kubelet[2897]: E0113 23:49:02.584417 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65d475c445-nfbnr_calico-system(705f5b22-6117-46b0-94d6-546618492a26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65d475c445-nfbnr_calico-system(705f5b22-6117-46b0-94d6-546618492a26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f133b0d7adcafd1b66498736f1686f4cd27f2d9922153f14c045a8ddff4c6007\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" podUID="705f5b22-6117-46b0-94d6-546618492a26" Jan 13 23:49:02.585385 containerd[1676]: time="2026-01-13T23:49:02.585324292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n95xs,Uid:a8c70103-7cce-4b73-89d9-3436fcb8e708,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9711504e949d88bfbef921de4a50d92cf9571be0d55fcfbf072350cf31177e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.585508 kubelet[2897]: E0113 23:49:02.585470 2897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9711504e949d88bfbef921de4a50d92cf9571be0d55fcfbf072350cf31177e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.585569 kubelet[2897]: E0113 23:49:02.585506 2897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9711504e949d88bfbef921de4a50d92cf9571be0d55fcfbf072350cf31177e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n95xs" Jan 13 23:49:02.585569 kubelet[2897]: E0113 23:49:02.585522 2897 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9711504e949d88bfbef921de4a50d92cf9571be0d55fcfbf072350cf31177e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-n95xs" Jan 13 23:49:02.585569 kubelet[2897]: E0113 23:49:02.585558 2897 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-n95xs_kube-system(a8c70103-7cce-4b73-89d9-3436fcb8e708)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-n95xs_kube-system(a8c70103-7cce-4b73-89d9-3436fcb8e708)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9711504e949d88bfbef921de4a50d92cf9571be0d55fcfbf072350cf31177e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-n95xs" podUID="a8c70103-7cce-4b73-89d9-3436fcb8e708" Jan 13 23:49:02.586870 containerd[1676]: time="2026-01-13T23:49:02.586774739Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddfc947b7-qj4hz,Uid:feb12ef9-da4b-41c3-8609-097b9429383b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8069615f4825a983c2046d0ca748b5452b3301ef8af1e543c918133a817f9e9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.586987 kubelet[2897]: E0113 23:49:02.586902 2897 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8069615f4825a983c2046d0ca748b5452b3301ef8af1e543c918133a817f9e9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 13 23:49:02.586987 kubelet[2897]: E0113 23:49:02.586942 2897 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8069615f4825a983c2046d0ca748b5452b3301ef8af1e543c918133a817f9e9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" Jan 13 23:49:02.586987 kubelet[2897]: E0113 23:49:02.586973 2897 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8069615f4825a983c2046d0ca748b5452b3301ef8af1e543c918133a817f9e9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" Jan 13 23:49:02.587064 kubelet[2897]: E0113 23:49:02.587006 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ddfc947b7-qj4hz_calico-apiserver(feb12ef9-da4b-41c3-8609-097b9429383b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ddfc947b7-qj4hz_calico-apiserver(feb12ef9-da4b-41c3-8609-097b9429383b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8069615f4825a983c2046d0ca748b5452b3301ef8af1e543c918133a817f9e9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:49:03.355469 containerd[1676]: time="2026-01-13T23:49:03.355420559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 13 23:49:03.420550 systemd[1]: run-netns-cni\x2d7c8b40cf\x2d3770\x2d4ac3\x2d8251\x2dbcb63e7474cc.mount: Deactivated successfully. Jan 13 23:49:03.420648 systemd[1]: run-netns-cni\x2da1b90978\x2dd164\x2dc16d\x2d03f9\x2d089c44b15b01.mount: Deactivated successfully. Jan 13 23:49:03.420699 systemd[1]: run-netns-cni\x2db29b1a9b\x2d43e1\x2da148\x2d4953\x2d2ca1ad583088.mount: Deactivated successfully. Jan 13 23:49:03.420740 systemd[1]: run-netns-cni\x2d65ea1957\x2d49d7\x2d8011\x2d3ff0\x2d9cf5c4e42503.mount: Deactivated successfully. Jan 13 23:49:03.420781 systemd[1]: run-netns-cni\x2d6b86dcc1\x2d9e05\x2d485f\x2d53af\x2d0848d6665780.mount: Deactivated successfully. Jan 13 23:49:04.011763 kubelet[2897]: I0113 23:49:04.011688 2897 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 23:49:04.037000 audit[3976]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:49:04.037000 audit[3976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcebd43a0 a2=0 a3=1 items=0 ppid=3065 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:04.037000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:49:04.046000 audit[3976]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:49:04.046000 audit[3976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffcebd43a0 a2=0 a3=1 items=0 ppid=3065 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:04.046000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:49:07.727588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2050727090.mount: Deactivated successfully. 
Jan 13 23:49:07.749526 containerd[1676]: time="2026-01-13T23:49:07.749399910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:49:07.751578 containerd[1676]: time="2026-01-13T23:49:07.751488641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 13 23:49:07.754582 containerd[1676]: time="2026-01-13T23:49:07.754526455Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:49:07.757562 containerd[1676]: time="2026-01-13T23:49:07.757482230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 23:49:07.758215 containerd[1676]: time="2026-01-13T23:49:07.758089593Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.402623673s" Jan 13 23:49:07.758215 containerd[1676]: time="2026-01-13T23:49:07.758124673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 13 23:49:07.769798 containerd[1676]: time="2026-01-13T23:49:07.769765890Z" level=info msg="CreateContainer within sandbox \"a2c9d48b93feb545b4d2f5260e80411b13d2c7d73aad7f7397d228c0feb5a95f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 13 23:49:07.787559 containerd[1676]: time="2026-01-13T23:49:07.787517858Z" level=info msg="Container 43274f8a0dd4020cd3c6dfe62ed071b800ce92094f441b8b38f3aa407fdf831d: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:49:07.799464 containerd[1676]: time="2026-01-13T23:49:07.799400676Z" level=info msg="CreateContainer within sandbox \"a2c9d48b93feb545b4d2f5260e80411b13d2c7d73aad7f7397d228c0feb5a95f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"43274f8a0dd4020cd3c6dfe62ed071b800ce92094f441b8b38f3aa407fdf831d\"" Jan 13 23:49:07.802063 containerd[1676]: time="2026-01-13T23:49:07.800305081Z" level=info msg="StartContainer for \"43274f8a0dd4020cd3c6dfe62ed071b800ce92094f441b8b38f3aa407fdf831d\"" Jan 13 23:49:07.802063 containerd[1676]: time="2026-01-13T23:49:07.801761728Z" level=info msg="connecting to shim 43274f8a0dd4020cd3c6dfe62ed071b800ce92094f441b8b38f3aa407fdf831d" address="unix:///run/containerd/s/30133a9a7c87b1636240b25ccb840d163e6da575b081d6cbba81626518aecab8" protocol=ttrpc version=3 Jan 13 23:49:07.828410 systemd[1]: Started cri-containerd-43274f8a0dd4020cd3c6dfe62ed071b800ce92094f441b8b38f3aa407fdf831d.scope - libcontainer container 43274f8a0dd4020cd3c6dfe62ed071b800ce92094f441b8b38f3aa407fdf831d. 
Jan 13 23:49:07.894000 audit: BPF prog-id=172 op=LOAD Jan 13 23:49:07.896482 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 13 23:49:07.896535 kernel: audit: type=1334 audit(1768348147.894:578): prog-id=172 op=LOAD Jan 13 23:49:07.896565 kernel: audit: type=1300 audit(1768348147.894:578): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3437 pid=3986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:07.894000 audit[3986]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3437 pid=3986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:07.894000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433323734663861306464343032306364336336646665363265643037 Jan 13 23:49:07.902091 kernel: audit: type=1327 audit(1768348147.894:578): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433323734663861306464343032306364336336646665363265643037 Jan 13 23:49:07.895000 audit: BPF prog-id=173 op=LOAD Jan 13 23:49:07.895000 audit[3986]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3437 pid=3986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:07.905785 kernel: audit: type=1334 audit(1768348147.895:579): prog-id=173 op=LOAD Jan 13 23:49:07.905837 kernel: audit: type=1300 audit(1768348147.895:579): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3437 pid=3986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:07.905867 kernel: audit: type=1327 audit(1768348147.895:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433323734663861306464343032306364336336646665363265643037 Jan 13 23:49:07.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433323734663861306464343032306364336336646665363265643037 Jan 13 23:49:07.895000 audit: BPF prog-id=173 op=UNLOAD Jan 13 23:49:07.909400 kernel: audit: type=1334 audit(1768348147.895:580): prog-id=173 op=UNLOAD Jan 13 23:49:07.895000 audit[3986]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3437 pid=3986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:07.912275 kernel: audit: type=1300 
audit(1768348147.895:580): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3437 pid=3986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:07.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433323734663861306464343032306364336336646665363265643037 Jan 13 23:49:07.915307 kernel: audit: type=1327 audit(1768348147.895:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433323734663861306464343032306364336336646665363265643037 Jan 13 23:49:07.895000 audit: BPF prog-id=172 op=UNLOAD Jan 13 23:49:07.916166 kernel: audit: type=1334 audit(1768348147.895:581): prog-id=172 op=UNLOAD Jan 13 23:49:07.895000 audit[3986]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3437 pid=3986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:07.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433323734663861306464343032306364336336646665363265643037 Jan 13 23:49:07.895000 audit: BPF prog-id=174 op=LOAD Jan 13 23:49:07.895000 audit[3986]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3437 pid=3986 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:07.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433323734663861306464343032306364336336646665363265643037 Jan 13 23:49:07.933135 containerd[1676]: time="2026-01-13T23:49:07.933096894Z" level=info msg="StartContainer for \"43274f8a0dd4020cd3c6dfe62ed071b800ce92094f441b8b38f3aa407fdf831d\" returns successfully" Jan 13 23:49:08.075387 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 13 23:49:08.075564 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 13 23:49:08.215575 kubelet[2897]: I0113 23:49:08.215523 2897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebdd74f1-2523-4586-bc94-5eec26645924-whisker-ca-bundle\") pod \"ebdd74f1-2523-4586-bc94-5eec26645924\" (UID: \"ebdd74f1-2523-4586-bc94-5eec26645924\") " Jan 13 23:49:08.215575 kubelet[2897]: I0113 23:49:08.215577 2897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw6v6\" (UniqueName: \"kubernetes.io/projected/ebdd74f1-2523-4586-bc94-5eec26645924-kube-api-access-vw6v6\") pod \"ebdd74f1-2523-4586-bc94-5eec26645924\" (UID: \"ebdd74f1-2523-4586-bc94-5eec26645924\") " Jan 13 23:49:08.215947 kubelet[2897]: I0113 23:49:08.215624 2897 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ebdd74f1-2523-4586-bc94-5eec26645924-whisker-backend-key-pair\") pod \"ebdd74f1-2523-4586-bc94-5eec26645924\" (UID: \"ebdd74f1-2523-4586-bc94-5eec26645924\") " Jan 13 23:49:08.217345 kubelet[2897]: I0113 23:49:08.216810 2897 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebdd74f1-2523-4586-bc94-5eec26645924-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ebdd74f1-2523-4586-bc94-5eec26645924" (UID: "ebdd74f1-2523-4586-bc94-5eec26645924"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 13 23:49:08.219043 kubelet[2897]: I0113 23:49:08.218815 2897 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebdd74f1-2523-4586-bc94-5eec26645924-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ebdd74f1-2523-4586-bc94-5eec26645924" (UID: "ebdd74f1-2523-4586-bc94-5eec26645924"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 13 23:49:08.220519 kubelet[2897]: I0113 23:49:08.220473 2897 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebdd74f1-2523-4586-bc94-5eec26645924-kube-api-access-vw6v6" (OuterVolumeSpecName: "kube-api-access-vw6v6") pod "ebdd74f1-2523-4586-bc94-5eec26645924" (UID: "ebdd74f1-2523-4586-bc94-5eec26645924"). InnerVolumeSpecName "kube-api-access-vw6v6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 13 23:49:08.316402 kubelet[2897]: I0113 23:49:08.316345 2897 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vw6v6\" (UniqueName: \"kubernetes.io/projected/ebdd74f1-2523-4586-bc94-5eec26645924-kube-api-access-vw6v6\") on node \"ci-4578-0-0-p-89582bef9b\" DevicePath \"\"" Jan 13 23:49:08.316402 kubelet[2897]: I0113 23:49:08.316389 2897 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ebdd74f1-2523-4586-bc94-5eec26645924-whisker-backend-key-pair\") on node \"ci-4578-0-0-p-89582bef9b\" DevicePath \"\"" Jan 13 23:49:08.316402 kubelet[2897]: I0113 23:49:08.316399 2897 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebdd74f1-2523-4586-bc94-5eec26645924-whisker-ca-bundle\") on node \"ci-4578-0-0-p-89582bef9b\" DevicePath \"\"" Jan 13 23:49:08.371823 systemd[1]: Removed slice kubepods-besteffort-podebdd74f1_2523_4586_bc94_5eec26645924.slice - libcontainer container kubepods-besteffort-podebdd74f1_2523_4586_bc94_5eec26645924.slice. Jan 13 23:49:08.386543 kubelet[2897]: I0113 23:49:08.386479 2897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7wwq4" podStartSLOduration=1.7489394329999999 podStartE2EDuration="19.386460443s" podCreationTimestamp="2026-01-13 23:48:49 +0000 UTC" firstStartedPulling="2026-01-13 23:48:50.121186346 +0000 UTC m=+22.949528518" lastFinishedPulling="2026-01-13 23:49:07.758707316 +0000 UTC m=+40.587049528" observedRunningTime="2026-01-13 23:49:08.386346402 +0000 UTC m=+41.214688614" watchObservedRunningTime="2026-01-13 23:49:08.386460443 +0000 UTC m=+41.214802655" Jan 13 23:49:08.446911 systemd[1]: Created slice kubepods-besteffort-pod5a088dfa_d478_4f72_9896_431d1ff39b0b.slice - libcontainer container kubepods-besteffort-pod5a088dfa_d478_4f72_9896_431d1ff39b0b.slice. Jan 13 23:49:08.518342 kubelet[2897]: I0113 23:49:08.518257 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5a088dfa-d478-4f72-9896-431d1ff39b0b-whisker-backend-key-pair\") pod \"whisker-9f6f7744d-z7hgg\" (UID: \"5a088dfa-d478-4f72-9896-431d1ff39b0b\") " pod="calico-system/whisker-9f6f7744d-z7hgg" Jan 13 23:49:08.518342 kubelet[2897]: I0113 23:49:08.518303 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vt5w\" (UniqueName: \"kubernetes.io/projected/5a088dfa-d478-4f72-9896-431d1ff39b0b-kube-api-access-6vt5w\") pod \"whisker-9f6f7744d-z7hgg\" (UID: \"5a088dfa-d478-4f72-9896-431d1ff39b0b\") " pod="calico-system/whisker-9f6f7744d-z7hgg" Jan 13 23:49:08.518342 kubelet[2897]: I0113 23:49:08.518329 2897 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a088dfa-d478-4f72-9896-431d1ff39b0b-whisker-ca-bundle\") pod \"whisker-9f6f7744d-z7hgg\" (UID: \"5a088dfa-d478-4f72-9896-431d1ff39b0b\") " pod="calico-system/whisker-9f6f7744d-z7hgg" Jan 13 23:49:08.730337 systemd[1]: var-lib-kubelet-pods-ebdd74f1\x2d2523\x2d4586\x2dbc94\x2d5eec26645924-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvw6v6.mount: Deactivated successfully. 
Jan 13 23:49:08.730438 systemd[1]: var-lib-kubelet-pods-ebdd74f1\x2d2523\x2d4586\x2dbc94\x2d5eec26645924-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 13 23:49:08.751792 containerd[1676]: time="2026-01-13T23:49:08.751735519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9f6f7744d-z7hgg,Uid:5a088dfa-d478-4f72-9896-431d1ff39b0b,Namespace:calico-system,Attempt:0,}" Jan 13 23:49:08.887250 systemd-networkd[1586]: cali59f2e409aef: Link UP Jan 13 23:49:08.888348 systemd-networkd[1586]: cali59f2e409aef: Gained carrier Jan 13 23:49:08.904762 containerd[1676]: 2026-01-13 23:49:08.774 [INFO][4077] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 13 23:49:08.904762 containerd[1676]: 2026-01-13 23:49:08.794 [INFO][4077] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578--0--0--p--89582bef9b-k8s-whisker--9f6f7744d--z7hgg-eth0 whisker-9f6f7744d- calico-system 5a088dfa-d478-4f72-9896-431d1ff39b0b 873 0 2026-01-13 23:49:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9f6f7744d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4578-0-0-p-89582bef9b whisker-9f6f7744d-z7hgg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali59f2e409aef [] [] }} ContainerID="90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" Namespace="calico-system" Pod="whisker-9f6f7744d-z7hgg" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-whisker--9f6f7744d--z7hgg-" Jan 13 23:49:08.904762 containerd[1676]: 2026-01-13 23:49:08.794 [INFO][4077] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" Namespace="calico-system" Pod="whisker-9f6f7744d-z7hgg" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-whisker--9f6f7744d--z7hgg-eth0" Jan 13 23:49:08.904762 containerd[1676]: 2026-01-13 23:49:08.838 [INFO][4093] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" HandleID="k8s-pod-network.90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" Workload="ci--4578--0--0--p--89582bef9b-k8s-whisker--9f6f7744d--z7hgg-eth0" Jan 13 23:49:08.905008 containerd[1676]: 2026-01-13 23:49:08.839 [INFO][4093] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" HandleID="k8s-pod-network.90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" Workload="ci--4578--0--0--p--89582bef9b-k8s-whisker--9f6f7744d--z7hgg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001376e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578-0-0-p-89582bef9b", "pod":"whisker-9f6f7744d-z7hgg", "timestamp":"2026-01-13 23:49:08.838885468 +0000 UTC"}, Hostname:"ci-4578-0-0-p-89582bef9b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:49:08.905008 containerd[1676]: 2026-01-13 23:49:08.839 [INFO][4093] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:49:08.905008 containerd[1676]: 2026-01-13 23:49:08.839 [INFO][4093] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 13 23:49:08.905008 containerd[1676]: 2026-01-13 23:49:08.839 [INFO][4093] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578-0-0-p-89582bef9b' Jan 13 23:49:08.905008 containerd[1676]: 2026-01-13 23:49:08.849 [INFO][4093] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:08.905008 containerd[1676]: 2026-01-13 23:49:08.855 [INFO][4093] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:08.905008 containerd[1676]: 2026-01-13 23:49:08.859 [INFO][4093] ipam/ipam.go 511: Trying affinity for 192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:08.905008 containerd[1676]: 2026-01-13 23:49:08.861 [INFO][4093] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:08.905008 containerd[1676]: 2026-01-13 23:49:08.863 [INFO][4093] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:08.905197 containerd[1676]: 2026-01-13 23:49:08.863 [INFO][4093] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:08.905197 containerd[1676]: 2026-01-13 23:49:08.865 [INFO][4093] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c Jan 13 23:49:08.905197 containerd[1676]: 2026-01-13 23:49:08.871 [INFO][4093] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:08.905197 containerd[1676]: 2026-01-13 23:49:08.877 [INFO][4093] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.86.129/26] block=192.168.86.128/26 handle="k8s-pod-network.90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:08.905197 containerd[1676]: 2026-01-13 23:49:08.877 [INFO][4093] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.129/26] handle="k8s-pod-network.90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:08.905197 containerd[1676]: 2026-01-13 23:49:08.877 [INFO][4093] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 13 23:49:08.905197 containerd[1676]: 2026-01-13 23:49:08.877 [INFO][4093] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.86.129/26] IPv6=[] ContainerID="90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" HandleID="k8s-pod-network.90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" Workload="ci--4578--0--0--p--89582bef9b-k8s-whisker--9f6f7744d--z7hgg-eth0" Jan 13 23:49:08.905323 containerd[1676]: 2026-01-13 23:49:08.880 [INFO][4077] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" Namespace="calico-system" Pod="whisker-9f6f7744d-z7hgg" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-whisker--9f6f7744d--z7hgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--p--89582bef9b-k8s-whisker--9f6f7744d--z7hgg-eth0", GenerateName:"whisker-9f6f7744d-", Namespace:"calico-system", SelfLink:"", UID:"5a088dfa-d478-4f72-9896-431d1ff39b0b", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 49, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9f6f7744d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-p-89582bef9b", ContainerID:"", Pod:"whisker-9f6f7744d-z7hgg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.86.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali59f2e409aef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:49:08.905323 containerd[1676]: 2026-01-13 23:49:08.880 [INFO][4077] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.129/32] ContainerID="90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" Namespace="calico-system" Pod="whisker-9f6f7744d-z7hgg" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-whisker--9f6f7744d--z7hgg-eth0" Jan 13 23:49:08.905387 containerd[1676]: 2026-01-13 23:49:08.880 [INFO][4077] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59f2e409aef ContainerID="90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" Namespace="calico-system" Pod="whisker-9f6f7744d-z7hgg" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-whisker--9f6f7744d--z7hgg-eth0" Jan 13 23:49:08.905387 containerd[1676]: 2026-01-13 23:49:08.889 [INFO][4077] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" Namespace="calico-system" Pod="whisker-9f6f7744d-z7hgg" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-whisker--9f6f7744d--z7hgg-eth0" Jan 13 23:49:08.905423 containerd[1676]: 2026-01-13 23:49:08.889 [INFO][4077] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" Namespace="calico-system" 
Pod="whisker-9f6f7744d-z7hgg" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-whisker--9f6f7744d--z7hgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--p--89582bef9b-k8s-whisker--9f6f7744d--z7hgg-eth0", GenerateName:"whisker-9f6f7744d-", Namespace:"calico-system", SelfLink:"", UID:"5a088dfa-d478-4f72-9896-431d1ff39b0b", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 49, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9f6f7744d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-p-89582bef9b", ContainerID:"90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c", Pod:"whisker-9f6f7744d-z7hgg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.86.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali59f2e409aef", MAC:"6a:2d:2c:a2:9b:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:49:08.905465 containerd[1676]: 2026-01-13 23:49:08.902 [INFO][4077] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" Namespace="calico-system" Pod="whisker-9f6f7744d-z7hgg" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-whisker--9f6f7744d--z7hgg-eth0" Jan 13 23:49:08.925845 containerd[1676]: time="2026-01-13T23:49:08.925782935Z" level=info msg="connecting to shim 90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c" address="unix:///run/containerd/s/c4139362a069bb0591eccbc602ae6e34807f1336d22e7f5d7d0c661bde52f97b" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:49:08.957220 systemd[1]: Started cri-containerd-90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c.scope - libcontainer container 90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c. 
Jan 13 23:49:08.965000 audit: BPF prog-id=175 op=LOAD Jan 13 23:49:08.966000 audit: BPF prog-id=176 op=LOAD Jan 13 23:49:08.966000 audit[4129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4118 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:08.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930643833373031616535366235616261373462343366366361616634 Jan 13 23:49:08.966000 audit: BPF prog-id=176 op=UNLOAD Jan 13 23:49:08.966000 audit[4129]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4118 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:08.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930643833373031616535366235616261373462343366366361616634 Jan 13 23:49:08.966000 audit: BPF prog-id=177 op=LOAD Jan 13 23:49:08.966000 audit[4129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4118 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:08.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930643833373031616535366235616261373462343366366361616634 Jan 13 23:49:08.966000 audit: BPF prog-id=178 op=LOAD Jan 13 23:49:08.966000 audit[4129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4118 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:08.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930643833373031616535366235616261373462343366366361616634 Jan 13 23:49:08.966000 audit: BPF prog-id=178 op=UNLOAD Jan 13 23:49:08.966000 audit[4129]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4118 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:08.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930643833373031616535366235616261373462343366366361616634 Jan 13 23:49:08.966000 audit: BPF prog-id=177 op=UNLOAD Jan 13 23:49:08.966000 audit[4129]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4118 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:08.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930643833373031616535366235616261373462343366366361616634 Jan 13 23:49:08.966000 audit: BPF prog-id=179 op=LOAD Jan 13 23:49:08.966000 audit[4129]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4118 pid=4129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:08.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930643833373031616535366235616261373462343366366361616634 Jan 13 23:49:08.987945 containerd[1676]: time="2026-01-13T23:49:08.987903720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9f6f7744d-z7hgg,Uid:5a088dfa-d478-4f72-9896-431d1ff39b0b,Namespace:calico-system,Attempt:0,} returns sandbox id \"90d83701ae56b5aba74b43f6caaf42f91d76c1a3aa275bd95b1b7af146c3311c\"" Jan 13 23:49:08.989624 containerd[1676]: time="2026-01-13T23:49:08.989583249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 13 23:49:09.260000 kubelet[2897]: I0113 23:49:09.259889 2897 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebdd74f1-2523-4586-bc94-5eec26645924" path="/var/lib/kubelet/pods/ebdd74f1-2523-4586-bc94-5eec26645924/volumes" Jan 13 23:49:09.326917 containerd[1676]: time="2026-01-13T23:49:09.326741067Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:09.328248 containerd[1676]: time="2026-01-13T23:49:09.328146994Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 13 23:49:09.328325 containerd[1676]: time="2026-01-13T23:49:09.328226274Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:09.328512 kubelet[2897]: E0113 23:49:09.328472 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:49:09.328596 kubelet[2897]: E0113 23:49:09.328526 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:49:09.328834 kubelet[2897]: E0113 23:49:09.328751 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cb5fe9369c744a3f8b622ec6bf599501,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vt5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9f6f7744d-z7hgg_calico-system(5a088dfa-d478-4f72-9896-431d1ff39b0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:09.330880 containerd[1676]: time="2026-01-13T23:49:09.330849407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 13 23:49:09.566000 audit: BPF prog-id=180 op=LOAD Jan 13 23:49:09.566000 audit[4309]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffe5ecb98 a2=98 a3=fffffe5ecb88 items=0 ppid=4166 pid=4309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.566000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:49:09.566000 audit: BPF prog-id=180 op=UNLOAD Jan 13 23:49:09.566000 audit[4309]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffffe5ecb68 a3=0 items=0 ppid=4166 pid=4309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.566000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:49:09.566000 audit: BPF prog-id=181 op=LOAD Jan 13 23:49:09.566000 audit[4309]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffe5eca48 a2=74 a3=95 items=0 ppid=4166 pid=4309 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.566000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:49:09.566000 audit: BPF prog-id=181 op=UNLOAD Jan 13 23:49:09.566000 audit[4309]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4166 pid=4309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.566000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:49:09.566000 audit: BPF prog-id=182 op=LOAD Jan 13 23:49:09.566000 audit[4309]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffe5eca78 a2=40 a3=fffffe5ecaa8 items=0 ppid=4166 pid=4309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.566000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:49:09.566000 audit: BPF prog-id=182 op=UNLOAD Jan 13 23:49:09.566000 audit[4309]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffffe5ecaa8 items=0 ppid=4166 pid=4309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.566000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 13 23:49:09.568000 audit: BPF prog-id=183 op=LOAD Jan 13 23:49:09.568000 audit[4310]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd5b3c238 a2=98 a3=ffffd5b3c228 items=0 ppid=4166 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.568000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:49:09.568000 audit: BPF prog-id=183 op=UNLOAD Jan 13 23:49:09.568000 audit[4310]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd5b3c208 a3=0 items=0 ppid=4166 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.568000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:49:09.568000 audit: BPF prog-id=184 
op=LOAD Jan 13 23:49:09.568000 audit[4310]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd5b3bec8 a2=74 a3=95 items=0 ppid=4166 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.568000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:49:09.568000 audit: BPF prog-id=184 op=UNLOAD Jan 13 23:49:09.568000 audit[4310]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4166 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.568000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:49:09.568000 audit: BPF prog-id=185 op=LOAD Jan 13 23:49:09.568000 audit[4310]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd5b3bf28 a2=94 a3=2 items=0 ppid=4166 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.568000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:49:09.568000 audit: BPF prog-id=185 op=UNLOAD Jan 13 23:49:09.568000 audit[4310]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4166 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.568000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:49:09.648420 containerd[1676]: time="2026-01-13T23:49:09.648107007Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:09.653646 containerd[1676]: time="2026-01-13T23:49:09.653601394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 13 23:49:09.653856 containerd[1676]: time="2026-01-13T23:49:09.653677554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:09.654093 kubelet[2897]: E0113 23:49:09.654031 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:49:09.654093 kubelet[2897]: E0113 23:49:09.654086 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:49:09.654234 kubelet[2897]: E0113 23:49:09.654191 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vt5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9f6f7744d-z7hgg_calico-system(5a088dfa-d478-4f72-9896-431d1ff39b0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:09.655436 kubelet[2897]: E0113 23:49:09.655363 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f6f7744d-z7hgg" podUID="5a088dfa-d478-4f72-9896-431d1ff39b0b" Jan 13 23:49:09.665000 audit: BPF prog-id=186 op=LOAD Jan 13 23:49:09.665000 audit[4310]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd5b3bee8 a2=40 a3=ffffd5b3bf18 items=0 ppid=4166 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.665000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:49:09.666000 audit: BPF prog-id=186 op=UNLOAD Jan 13 23:49:09.666000 audit[4310]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 
a3=ffffd5b3bf18 items=0 ppid=4166 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.666000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:49:09.675000 audit: BPF prog-id=187 op=LOAD Jan 13 23:49:09.675000 audit[4310]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd5b3bef8 a2=94 a3=4 items=0 ppid=4166 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:49:09.675000 audit: BPF prog-id=187 op=UNLOAD Jan 13 23:49:09.675000 audit[4310]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4166 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:49:09.675000 audit: BPF prog-id=188 op=LOAD Jan 13 23:49:09.675000 audit[4310]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd5b3bd38 a2=94 a3=5 items=0 ppid=4166 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:49:09.675000 audit: BPF prog-id=188 op=UNLOAD Jan 13 23:49:09.675000 audit[4310]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4166 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:49:09.675000 audit: BPF prog-id=189 op=LOAD Jan 13 23:49:09.675000 audit[4310]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd5b3bf68 a2=94 a3=6 items=0 ppid=4166 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:49:09.675000 audit: BPF prog-id=189 op=UNLOAD Jan 13 23:49:09.675000 audit[4310]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4166 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:49:09.676000 audit: BPF prog-id=190 op=LOAD Jan 13 23:49:09.676000 audit[4310]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd5b3b738 a2=94 a3=83 items=0 ppid=4166 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:49:09.676000 audit: BPF prog-id=191 op=LOAD Jan 13 23:49:09.676000 audit[4310]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffd5b3b4f8 a2=94 a3=2 items=0 ppid=4166 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:49:09.676000 audit: BPF prog-id=191 op=UNLOAD Jan 13 23:49:09.676000 audit[4310]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4166 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:49:09.676000 audit: BPF prog-id=190 op=UNLOAD Jan 13 23:49:09.676000 audit[4310]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=136ce620 a3=136c1b00 items=0 ppid=4166 pid=4310 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 13 23:49:09.685000 audit: BPF prog-id=192 op=LOAD Jan 13 23:49:09.685000 audit[4313]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd7999988 a2=98 a3=ffffd7999978 items=0 ppid=4166 pid=4313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.685000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:49:09.685000 audit: BPF prog-id=192 op=UNLOAD Jan 13 23:49:09.685000 audit[4313]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd7999958 a3=0 items=0 ppid=4166 pid=4313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.685000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:49:09.685000 audit: BPF prog-id=193 op=LOAD Jan 13 23:49:09.685000 audit[4313]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd7999838 a2=74 a3=95 items=0 ppid=4166 pid=4313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.685000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:49:09.686000 audit: BPF prog-id=193 op=UNLOAD Jan 13 23:49:09.686000 audit[4313]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4166 pid=4313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.686000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:49:09.686000 audit: BPF prog-id=194 op=LOAD Jan 13 23:49:09.686000 audit[4313]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd7999868 a2=40 a3=ffffd7999898 items=0 ppid=4166 pid=4313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.686000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:49:09.686000 audit: BPF prog-id=194 op=UNLOAD Jan 13 23:49:09.686000 audit[4313]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd7999898 items=0 ppid=4166 pid=4313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.686000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 13 23:49:09.743586 systemd-networkd[1586]: vxlan.calico: Link UP Jan 13 23:49:09.743597 systemd-networkd[1586]: vxlan.calico: Gained carrier Jan 13 23:49:09.761000 audit: BPF prog-id=195 op=LOAD Jan 13 23:49:09.761000 audit[4339]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc64c37f8 a2=98 a3=ffffc64c37e8 items=0 ppid=4166 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.761000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:49:09.761000 audit: BPF prog-id=195 op=UNLOAD Jan 13 23:49:09.761000 audit[4339]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc64c37c8 a3=0 items=0 ppid=4166 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 
23:49:09.761000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:49:09.761000 audit: BPF prog-id=196 op=LOAD Jan 13 23:49:09.761000 audit[4339]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc64c34d8 a2=74 a3=95 items=0 ppid=4166 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.761000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:49:09.761000 audit: BPF prog-id=196 op=UNLOAD Jan 13 23:49:09.761000 audit[4339]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4166 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.761000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:49:09.761000 audit: BPF prog-id=197 op=LOAD Jan 13 23:49:09.761000 audit[4339]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc64c3538 a2=94 a3=2 items=0 ppid=4166 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.761000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:49:09.761000 audit: BPF prog-id=197 op=UNLOAD Jan 13 23:49:09.761000 audit[4339]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4166 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.761000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:49:09.761000 audit: BPF prog-id=198 op=LOAD Jan 13 23:49:09.761000 audit[4339]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc64c33b8 a2=40 a3=ffffc64c33e8 items=0 ppid=4166 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.761000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:49:09.761000 audit: BPF prog-id=198 op=UNLOAD Jan 13 23:49:09.761000 
audit[4339]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffc64c33e8 items=0 ppid=4166 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.761000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:49:09.761000 audit: BPF prog-id=199 op=LOAD Jan 13 23:49:09.761000 audit[4339]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc64c3508 a2=94 a3=b7 items=0 ppid=4166 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.761000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:49:09.761000 audit: BPF prog-id=199 op=UNLOAD Jan 13 23:49:09.761000 audit[4339]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4166 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.761000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:49:09.762000 audit: BPF prog-id=200 op=LOAD Jan 13 23:49:09.762000 audit[4339]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc64c2bb8 a2=94 a3=2 items=0 ppid=4166 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.762000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:49:09.762000 audit: BPF prog-id=200 op=UNLOAD Jan 13 23:49:09.762000 audit[4339]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4166 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.762000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:49:09.762000 audit: BPF prog-id=201 op=LOAD Jan 13 23:49:09.762000 audit[4339]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc64c2d48 a2=94 a3=30 items=0 ppid=4166 pid=4339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.762000 audit: 
PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 13 23:49:09.765000 audit: BPF prog-id=202 op=LOAD Jan 13 23:49:09.765000 audit[4342]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc24b4498 a2=98 a3=ffffc24b4488 items=0 ppid=4166 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.765000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:49:09.765000 audit: BPF prog-id=202 op=UNLOAD Jan 13 23:49:09.765000 audit[4342]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc24b4468 a3=0 items=0 ppid=4166 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.765000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:49:09.765000 audit: BPF prog-id=203 op=LOAD Jan 13 23:49:09.765000 audit[4342]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc24b4128 a2=74 a3=95 items=0 ppid=4166 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.765000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:49:09.765000 audit: BPF prog-id=203 op=UNLOAD Jan 13 23:49:09.765000 audit[4342]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4166 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.765000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:49:09.765000 audit: BPF prog-id=204 op=LOAD Jan 13 23:49:09.765000 audit[4342]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc24b4188 a2=94 a3=2 items=0 ppid=4166 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.765000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:49:09.765000 audit: BPF prog-id=204 op=UNLOAD Jan 13 23:49:09.765000 audit[4342]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4166 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.765000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:49:09.870000 audit: BPF prog-id=205 op=LOAD Jan 13 23:49:09.870000 audit[4342]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc24b4148 a2=40 a3=ffffc24b4178 items=0 ppid=4166 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.870000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:49:09.871000 audit: BPF prog-id=205 op=UNLOAD Jan 13 23:49:09.871000 audit[4342]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc24b4178 items=0 ppid=4166 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.871000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:49:09.880000 audit: BPF prog-id=206 op=LOAD Jan 13 23:49:09.880000 audit[4342]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc24b4158 a2=94 a3=4 items=0 ppid=4166 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.880000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:49:09.881000 audit: BPF prog-id=206 op=UNLOAD Jan 13 23:49:09.881000 audit[4342]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4166 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:49:09.881000 audit: BPF prog-id=207 op=LOAD Jan 13 23:49:09.881000 audit[4342]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc24b3f98 a2=94 a3=5 items=0 ppid=4166 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:49:09.881000 audit: BPF prog-id=207 op=UNLOAD Jan 13 23:49:09.881000 audit[4342]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4166 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:49:09.881000 audit: BPF prog-id=208 op=LOAD Jan 13 23:49:09.881000 audit[4342]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc24b41c8 a2=94 a3=6 items=0 ppid=4166 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:49:09.881000 audit: BPF prog-id=208 op=UNLOAD Jan 13 23:49:09.881000 audit[4342]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4166 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:49:09.881000 audit: BPF prog-id=209 op=LOAD Jan 13 23:49:09.881000 audit[4342]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc24b3998 a2=94 a3=83 items=0 ppid=4166 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:49:09.882000 audit: BPF prog-id=210 op=LOAD Jan 13 23:49:09.882000 audit[4342]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc24b3758 a2=94 a3=2 items=0 ppid=4166 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.882000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:49:09.882000 audit: BPF prog-id=210 op=UNLOAD Jan 13 23:49:09.882000 audit[4342]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4166 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.882000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 
13 23:49:09.882000 audit: BPF prog-id=209 op=UNLOAD Jan 13 23:49:09.882000 audit[4342]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=17130620 a3=17123b00 items=0 ppid=4166 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.882000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 13 23:49:09.902000 audit: BPF prog-id=201 op=UNLOAD Jan 13 23:49:09.902000 audit[4166]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000710cc0 a2=0 a3=0 items=0 ppid=4155 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.902000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 13 23:49:09.943000 audit[4369]: NETFILTER_CFG table=mangle:121 family=2 entries=16 op=nft_register_chain pid=4369 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:49:09.943000 audit[4369]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffe3f30430 a2=0 a3=ffffa3059fa8 items=0 ppid=4166 pid=4369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.943000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:49:09.946000 audit[4374]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=4374 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:49:09.946000 audit[4374]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffc0d191e0 a2=0 a3=ffffaba75fa8 items=0 ppid=4166 pid=4374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.946000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:49:09.951000 audit[4370]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4370 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:49:09.951000 audit[4370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffe7b38ac0 a2=0 a3=ffff817ecfa8 items=0 ppid=4166 pid=4370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.951000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:49:09.953000 audit[4371]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4371 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:49:09.953000 audit[4371]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=53116 a0=3 a1=ffffc603a720 a2=0 a3=ffff905dbfa8 items=0 ppid=4166 pid=4371 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:09.953000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:49:10.375081 kubelet[2897]: E0113 23:49:10.375038 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f6f7744d-z7hgg" podUID="5a088dfa-d478-4f72-9896-431d1ff39b0b" Jan 13 23:49:10.394000 audit[4384]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:49:10.394000 audit[4384]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe783acd0 a2=0 a3=1 items=0 ppid=3065 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:10.394000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:49:10.402000 audit[4384]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4384 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:49:10.402000 audit[4384]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe783acd0 a2=0 a3=1 items=0 ppid=3065 pid=4384 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:10.402000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:49:10.797348 systemd-networkd[1586]: cali59f2e409aef: Gained IPv6LL Jan 13 23:49:11.117191 systemd-networkd[1586]: vxlan.calico: Gained IPv6LL Jan 13 23:49:13.258880 containerd[1676]: time="2026-01-13T23:49:13.258299204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddfc947b7-qj4hz,Uid:feb12ef9-da4b-41c3-8609-097b9429383b,Namespace:calico-apiserver,Attempt:0,}" Jan 13 23:49:13.390201 systemd-networkd[1586]: cali798fbabb925: Link UP Jan 13 23:49:13.390399 systemd-networkd[1586]: cali798fbabb925: Gained carrier Jan 13 23:49:13.401663 containerd[1676]: 2026-01-13 23:49:13.299 [INFO][4387] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--qj4hz-eth0 calico-apiserver-5ddfc947b7- calico-apiserver feb12ef9-da4b-41c3-8609-097b9429383b 794 0 2026-01-13 23:48:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ddfc947b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4578-0-0-p-89582bef9b calico-apiserver-5ddfc947b7-qj4hz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali798fbabb925 [] [] }} ContainerID="9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" Namespace="calico-apiserver" Pod="calico-apiserver-5ddfc947b7-qj4hz" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--qj4hz-" Jan 13 23:49:13.401663 containerd[1676]: 2026-01-13 23:49:13.299 [INFO][4387] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" Namespace="calico-apiserver" Pod="calico-apiserver-5ddfc947b7-qj4hz" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--qj4hz-eth0" Jan 13 23:49:13.401663 containerd[1676]: 2026-01-13 23:49:13.322 [INFO][4402] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" HandleID="k8s-pod-network.9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" Workload="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--qj4hz-eth0" Jan 13 23:49:13.401845 containerd[1676]: 2026-01-13 23:49:13.322 [INFO][4402] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" HandleID="k8s-pod-network.9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" Workload="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--qj4hz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4578-0-0-p-89582bef9b", "pod":"calico-apiserver-5ddfc947b7-qj4hz", "timestamp":"2026-01-13 23:49:13.322438599 +0000 UTC"}, Hostname:"ci-4578-0-0-p-89582bef9b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:49:13.401845 containerd[1676]: 2026-01-13 23:49:13.322 [INFO][4402] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:49:13.401845 containerd[1676]: 2026-01-13 23:49:13.322 [INFO][4402] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 13 23:49:13.401845 containerd[1676]: 2026-01-13 23:49:13.322 [INFO][4402] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578-0-0-p-89582bef9b' Jan 13 23:49:13.401845 containerd[1676]: 2026-01-13 23:49:13.332 [INFO][4402] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:13.401845 containerd[1676]: 2026-01-13 23:49:13.348 [INFO][4402] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:13.401845 containerd[1676]: 2026-01-13 23:49:13.358 [INFO][4402] ipam/ipam.go 511: Trying affinity for 192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:13.401845 containerd[1676]: 2026-01-13 23:49:13.361 [INFO][4402] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:13.401845 containerd[1676]: 2026-01-13 23:49:13.366 [INFO][4402] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:13.402106 containerd[1676]: 2026-01-13 23:49:13.366 [INFO][4402] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:13.402106 containerd[1676]: 2026-01-13 23:49:13.369 [INFO][4402] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d Jan 13 23:49:13.402106 containerd[1676]: 2026-01-13 23:49:13.375 [INFO][4402] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:13.402106 containerd[1676]: 2026-01-13 23:49:13.384 [INFO][4402] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.86.130/26] block=192.168.86.128/26 handle="k8s-pod-network.9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:13.402106 containerd[1676]: 2026-01-13 23:49:13.384 [INFO][4402] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.130/26] handle="k8s-pod-network.9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:13.402106 containerd[1676]: 2026-01-13 23:49:13.384 [INFO][4402] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 13 23:49:13.402106 containerd[1676]: 2026-01-13 23:49:13.384 [INFO][4402] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.86.130/26] IPv6=[] ContainerID="9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" HandleID="k8s-pod-network.9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" Workload="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--qj4hz-eth0" Jan 13 23:49:13.402238 containerd[1676]: 2026-01-13 23:49:13.387 [INFO][4387] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" Namespace="calico-apiserver" Pod="calico-apiserver-5ddfc947b7-qj4hz" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--qj4hz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--qj4hz-eth0", GenerateName:"calico-apiserver-5ddfc947b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"feb12ef9-da4b-41c3-8609-097b9429383b", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 48, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddfc947b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-p-89582bef9b", ContainerID:"", Pod:"calico-apiserver-5ddfc947b7-qj4hz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali798fbabb925", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:49:13.402288 containerd[1676]: 2026-01-13 23:49:13.387 [INFO][4387] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.130/32] ContainerID="9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" Namespace="calico-apiserver" Pod="calico-apiserver-5ddfc947b7-qj4hz" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--qj4hz-eth0" Jan 13 23:49:13.402288 containerd[1676]: 2026-01-13 23:49:13.387 [INFO][4387] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali798fbabb925 ContainerID="9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" Namespace="calico-apiserver" Pod="calico-apiserver-5ddfc947b7-qj4hz" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--qj4hz-eth0" Jan 13 23:49:13.402288 containerd[1676]: 2026-01-13 23:49:13.389 [INFO][4387] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" Namespace="calico-apiserver" Pod="calico-apiserver-5ddfc947b7-qj4hz" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--qj4hz-eth0" Jan 13 23:49:13.402345 containerd[1676]: 2026-01-13 
23:49:13.389 [INFO][4387] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" Namespace="calico-apiserver" Pod="calico-apiserver-5ddfc947b7-qj4hz" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--qj4hz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--qj4hz-eth0", GenerateName:"calico-apiserver-5ddfc947b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"feb12ef9-da4b-41c3-8609-097b9429383b", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 48, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddfc947b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-p-89582bef9b", ContainerID:"9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d", Pod:"calico-apiserver-5ddfc947b7-qj4hz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali798fbabb925", MAC:"d6:f1:41:49:12:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:49:13.402394 containerd[1676]: 2026-01-13 23:49:13.399 [INFO][4387] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" Namespace="calico-apiserver" Pod="calico-apiserver-5ddfc947b7-qj4hz" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--qj4hz-eth0" Jan 13 23:49:13.412000 audit[4420]: NETFILTER_CFG table=filter:127 family=2 entries=50 op=nft_register_chain pid=4420 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:49:13.414014 kernel: kauditd_printk_skb: 231 callbacks suppressed Jan 13 23:49:13.414054 kernel: audit: type=1325 audit(1768348153.412:659): table=filter:127 family=2 entries=50 op=nft_register_chain pid=4420 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:49:13.412000 audit[4420]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28208 a0=3 a1=ffffd2e5afc0 a2=0 a3=ffffb9690fa8 items=0 ppid=4166 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:13.419263 kernel: audit: type=1300 audit(1768348153.412:659): arch=c00000b7 syscall=211 success=yes exit=28208 a0=3 a1=ffffd2e5afc0 a2=0 a3=ffffb9690fa8 items=0 ppid=4166 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 13 23:49:13.412000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:49:13.422010 kernel: audit: type=1327 audit(1768348153.412:659): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:49:13.424165 containerd[1676]: time="2026-01-13T23:49:13.424125660Z" level=info msg="connecting to shim 9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d" address="unix:///run/containerd/s/a1469839450451ff4f2654d4a67ed6c6ee0ee08dd41f725b04c820278425a41b" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:49:13.448398 systemd[1]: Started cri-containerd-9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d.scope - libcontainer container 9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d. Jan 13 23:49:13.457000 audit: BPF prog-id=211 op=LOAD Jan 13 23:49:13.460018 kernel: audit: type=1334 audit(1768348153.457:660): prog-id=211 op=LOAD Jan 13 23:49:13.460113 kernel: audit: type=1334 audit(1768348153.459:661): prog-id=212 op=LOAD Jan 13 23:49:13.459000 audit: BPF prog-id=212 op=LOAD Jan 13 23:49:13.459000 audit[4441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4429 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:13.463728 kernel: audit: type=1300 audit(1768348153.459:661): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4429 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:13.463830 kernel: audit: type=1327 audit(1768348153.459:661): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966383133356465633566366332393264666663383161353337626364 Jan 13 23:49:13.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966383133356465633566366332393264666663383161353337626364 Jan 13 23:49:13.466587 kernel: audit: type=1334 audit(1768348153.459:662): prog-id=212 op=UNLOAD Jan 13 23:49:13.459000 audit: BPF prog-id=212 op=UNLOAD Jan 13 23:49:13.459000 audit[4441]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4429 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:13.469993 kernel: audit: type=1300 audit(1768348153.459:662): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4429 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:13.470171 kernel: audit: type=1327 audit(1768348153.459:662): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966383133356465633566366332393264666663383161353337626364 Jan 13 23:49:13.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966383133356465633566366332393264666663383161353337626364 Jan 13 23:49:13.459000 audit: BPF prog-id=213 op=LOAD Jan 13 23:49:13.459000 audit[4441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4429 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:13.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966383133356465633566366332393264666663383161353337626364 Jan 13 23:49:13.459000 audit: BPF prog-id=214 op=LOAD Jan 13 23:49:13.459000 audit[4441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4429 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:13.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966383133356465633566366332393264666663383161353337626364 Jan 13 23:49:13.462000 audit: BPF prog-id=214 op=UNLOAD Jan 13 23:49:13.462000 audit[4441]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4429 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:13.462000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966383133356465633566366332393264666663383161353337626364 Jan 13 23:49:13.462000 audit: BPF prog-id=213 op=UNLOAD Jan 13 23:49:13.462000 audit[4441]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4429 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:13.462000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966383133356465633566366332393264666663383161353337626364 Jan 13 23:49:13.462000 audit: BPF prog-id=215 op=LOAD Jan 13 23:49:13.462000 audit[4441]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4429 pid=4441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:13.462000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966383133356465633566366332393264666663383161353337626364 Jan 13 23:49:13.495269 containerd[1676]: time="2026-01-13T23:49:13.495212849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddfc947b7-qj4hz,Uid:feb12ef9-da4b-41c3-8609-097b9429383b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9f8135dec5f6c292dffc81a537bcd8bf9ce1f776039a62b046165ca5f9b76b4d\"" Jan 13 23:49:13.498602 containerd[1676]: time="2026-01-13T23:49:13.498570826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:49:13.850876 containerd[1676]: time="2026-01-13T23:49:13.850810758Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:13.852230 containerd[1676]: time="2026-01-13T23:49:13.852189365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:49:13.852278 containerd[1676]: time="2026-01-13T23:49:13.852226005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:13.852479 kubelet[2897]: E0113 23:49:13.852412 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:49:13.853798 kubelet[2897]: E0113 23:49:13.852484 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:49:13.853798 kubelet[2897]: E0113 23:49:13.852610 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzspl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5ddfc947b7-qj4hz_calico-apiserver(feb12ef9-da4b-41c3-8609-097b9429383b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:13.854161 kubelet[2897]: E0113 23:49:13.854081 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:49:14.257875 containerd[1676]: time="2026-01-13T23:49:14.257831560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65d475c445-nfbnr,Uid:705f5b22-6117-46b0-94d6-546618492a26,Namespace:calico-system,Attempt:0,}" Jan 13 23:49:14.359302 systemd-networkd[1586]: cali2f2e9503ea6: Link UP Jan 13 23:49:14.360067 systemd-networkd[1586]: cali2f2e9503ea6: Gained carrier Jan 13 23:49:14.374104 containerd[1676]: 2026-01-13 23:49:14.293 [INFO][4466] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578--0--0--p--89582bef9b-k8s-calico--kube--controllers--65d475c445--nfbnr-eth0 calico-kube-controllers-65d475c445- calico-system 705f5b22-6117-46b0-94d6-546618492a26 804 0 2026-01-13 23:48:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers 
pod-template-hash:65d475c445 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4578-0-0-p-89582bef9b calico-kube-controllers-65d475c445-nfbnr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2f2e9503ea6 [] [] }} ContainerID="7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" Namespace="calico-system" Pod="calico-kube-controllers-65d475c445-nfbnr" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--kube--controllers--65d475c445--nfbnr-" Jan 13 23:49:14.374104 containerd[1676]: 2026-01-13 23:49:14.294 [INFO][4466] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" Namespace="calico-system" Pod="calico-kube-controllers-65d475c445-nfbnr" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--kube--controllers--65d475c445--nfbnr-eth0" Jan 13 23:49:14.374104 containerd[1676]: 2026-01-13 23:49:14.316 [INFO][4481] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" HandleID="k8s-pod-network.7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" Workload="ci--4578--0--0--p--89582bef9b-k8s-calico--kube--controllers--65d475c445--nfbnr-eth0" Jan 13 23:49:14.374498 containerd[1676]: 2026-01-13 23:49:14.316 [INFO][4481] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" HandleID="k8s-pod-network.7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" Workload="ci--4578--0--0--p--89582bef9b-k8s-calico--kube--controllers--65d475c445--nfbnr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004921b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578-0-0-p-89582bef9b", "pod":"calico-kube-controllers-65d475c445-nfbnr", "timestamp":"2026-01-13 23:49:14.316595529 +0000 UTC"}, Hostname:"ci-4578-0-0-p-89582bef9b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:49:14.374498 containerd[1676]: 2026-01-13 23:49:14.316 [INFO][4481] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:49:14.374498 containerd[1676]: 2026-01-13 23:49:14.316 [INFO][4481] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 13 23:49:14.374498 containerd[1676]: 2026-01-13 23:49:14.316 [INFO][4481] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578-0-0-p-89582bef9b' Jan 13 23:49:14.374498 containerd[1676]: 2026-01-13 23:49:14.326 [INFO][4481] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:14.374498 containerd[1676]: 2026-01-13 23:49:14.331 [INFO][4481] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:14.374498 containerd[1676]: 2026-01-13 23:49:14.335 [INFO][4481] ipam/ipam.go 511: Trying affinity for 192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:14.374498 containerd[1676]: 2026-01-13 23:49:14.337 [INFO][4481] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:14.374498 containerd[1676]: 2026-01-13 23:49:14.339 [INFO][4481] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:14.374690 containerd[1676]: 2026-01-13 23:49:14.339 [INFO][4481] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:14.374690 containerd[1676]: 2026-01-13 23:49:14.341 [INFO][4481] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b Jan 13 23:49:14.374690 containerd[1676]: 2026-01-13 23:49:14.345 [INFO][4481] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:14.374690 containerd[1676]: 2026-01-13 23:49:14.353 [INFO][4481] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.86.131/26] block=192.168.86.128/26 handle="k8s-pod-network.7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:14.374690 containerd[1676]: 2026-01-13 23:49:14.353 [INFO][4481] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.131/26] handle="k8s-pod-network.7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:14.374690 containerd[1676]: 2026-01-13 23:49:14.353 [INFO][4481] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 13 23:49:14.374690 containerd[1676]: 2026-01-13 23:49:14.353 [INFO][4481] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.86.131/26] IPv6=[] ContainerID="7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" HandleID="k8s-pod-network.7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" Workload="ci--4578--0--0--p--89582bef9b-k8s-calico--kube--controllers--65d475c445--nfbnr-eth0" Jan 13 23:49:14.374812 containerd[1676]: 2026-01-13 23:49:14.355 [INFO][4466] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" Namespace="calico-system" Pod="calico-kube-controllers-65d475c445-nfbnr" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--kube--controllers--65d475c445--nfbnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--p--89582bef9b-k8s-calico--kube--controllers--65d475c445--nfbnr-eth0", GenerateName:"calico-kube-controllers-65d475c445-", Namespace:"calico-system", SelfLink:"", UID:"705f5b22-6117-46b0-94d6-546618492a26", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 48, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65d475c445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-p-89582bef9b", ContainerID:"", Pod:"calico-kube-controllers-65d475c445-nfbnr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f2e9503ea6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:49:14.374859 containerd[1676]: 2026-01-13 23:49:14.355 [INFO][4466] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.131/32] ContainerID="7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" Namespace="calico-system" Pod="calico-kube-controllers-65d475c445-nfbnr" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--kube--controllers--65d475c445--nfbnr-eth0" Jan 13 23:49:14.374859 containerd[1676]: 2026-01-13 23:49:14.355 [INFO][4466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2f2e9503ea6 ContainerID="7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" Namespace="calico-system" Pod="calico-kube-controllers-65d475c445-nfbnr" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--kube--controllers--65d475c445--nfbnr-eth0" Jan 13 23:49:14.374859 containerd[1676]: 2026-01-13 23:49:14.360 [INFO][4466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" Namespace="calico-system" Pod="calico-kube-controllers-65d475c445-nfbnr" 
WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--kube--controllers--65d475c445--nfbnr-eth0" Jan 13 23:49:14.375065 containerd[1676]: 2026-01-13 23:49:14.360 [INFO][4466] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" Namespace="calico-system" Pod="calico-kube-controllers-65d475c445-nfbnr" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--kube--controllers--65d475c445--nfbnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--p--89582bef9b-k8s-calico--kube--controllers--65d475c445--nfbnr-eth0", GenerateName:"calico-kube-controllers-65d475c445-", Namespace:"calico-system", SelfLink:"", UID:"705f5b22-6117-46b0-94d6-546618492a26", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 48, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65d475c445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-p-89582bef9b", ContainerID:"7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b", Pod:"calico-kube-controllers-65d475c445-nfbnr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.86.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2f2e9503ea6", MAC:"96:80:44:e9:16:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:49:14.375123 containerd[1676]: 2026-01-13 23:49:14.371 [INFO][4466] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" Namespace="calico-system" Pod="calico-kube-controllers-65d475c445-nfbnr" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--kube--controllers--65d475c445--nfbnr-eth0" Jan 13 23:49:14.383371 kubelet[2897]: E0113 23:49:14.383315 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:49:14.402000 audit[4496]: NETFILTER_CFG table=filter:128 family=2 entries=40 op=nft_register_chain pid=4496 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:49:14.402000 audit[4496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20764 a0=3 a1=ffffc778d0d0 a2=0 a3=ffff8f740fa8 items=0 ppid=4166 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:14.402000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:49:14.412436 containerd[1676]: time="2026-01-13T23:49:14.412391080Z" level=info msg="connecting to shim 7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b" address="unix:///run/containerd/s/95b7f1831aaaa40ed45de2b0aa0ea3891f69cb2f7e42ba55ae611be1aced420a" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:49:14.416000 audit[4511]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4511 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:49:14.416000 audit[4511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffda0712e0 a2=0 a3=1 items=0 ppid=3065 pid=4511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:14.416000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:49:14.421000 audit[4511]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4511 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:49:14.421000 audit[4511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffda0712e0 a2=0 a3=1 items=0 ppid=3065 pid=4511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:14.421000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:49:14.440448 systemd[1]: Started cri-containerd-7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b.scope - libcontainer container 7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b. 
Jan 13 23:49:14.449000 audit: BPF prog-id=216 op=LOAD Jan 13 23:49:14.450000 audit: BPF prog-id=217 op=LOAD Jan 13 23:49:14.450000 audit[4518]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4506 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:14.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653539643562663439333566666330333763643262636634323035 Jan 13 23:49:14.450000 audit: BPF prog-id=217 op=UNLOAD Jan 13 23:49:14.450000 audit[4518]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4506 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:14.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653539643562663439333566666330333763643262636634323035 Jan 13 23:49:14.450000 audit: BPF prog-id=218 op=LOAD Jan 13 23:49:14.450000 audit[4518]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4506 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:14.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653539643562663439333566666330333763643262636634323035 Jan 13 23:49:14.450000 audit: BPF prog-id=219 op=LOAD Jan 13 23:49:14.450000 audit[4518]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4506 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:14.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653539643562663439333566666330333763643262636634323035 Jan 13 23:49:14.451000 audit: BPF prog-id=219 op=UNLOAD Jan 13 23:49:14.451000 audit[4518]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4506 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:14.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653539643562663439333566666330333763643262636634323035 Jan 13 23:49:14.451000 audit: BPF prog-id=218 op=UNLOAD Jan 13 23:49:14.451000 audit[4518]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4506 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:14.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653539643562663439333566666330333763643262636634323035 Jan 13 23:49:14.451000 audit: BPF prog-id=220 op=LOAD Jan 13 23:49:14.451000 audit[4518]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4506 pid=4518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:14.451000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766653539643562663439333566666330333763643262636634323035 Jan 13 23:49:14.474136 containerd[1676]: time="2026-01-13T23:49:14.474054063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65d475c445-nfbnr,Uid:705f5b22-6117-46b0-94d6-546618492a26,Namespace:calico-system,Attempt:0,} returns sandbox id \"7fe59d5bf4935ffc037cd2bcf4205e4d55ec9667dbe02f5ab958bb6f7b27583b\"" Jan 13 23:49:14.476217 containerd[1676]: time="2026-01-13T23:49:14.476181113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 13 23:49:14.637209 systemd-networkd[1586]: cali798fbabb925: Gained IPv6LL Jan 13 23:49:14.806201 containerd[1676]: time="2026-01-13T23:49:14.806158416Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:14.807708 containerd[1676]: time="2026-01-13T23:49:14.807635863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 13 23:49:14.807816 containerd[1676]: time="2026-01-13T23:49:14.807729984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:14.807988 kubelet[2897]: E0113 23:49:14.807883 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:49:14.807988 kubelet[2897]: E0113 23:49:14.807928 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:49:14.808111 kubelet[2897]: E0113 23:49:14.808061 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5nz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-65d475c445-nfbnr_calico-system(705f5b22-6117-46b0-94d6-546618492a26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:14.809419 kubelet[2897]: E0113 23:49:14.809316 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" podUID="705f5b22-6117-46b0-94d6-546618492a26" Jan 13 23:49:15.385419 kubelet[2897]: E0113 23:49:15.385344 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" podUID="705f5b22-6117-46b0-94d6-546618492a26" Jan 13 23:49:15.385419 kubelet[2897]: E0113 23:49:15.385350 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:49:15.661129 systemd-networkd[1586]: cali2f2e9503ea6: Gained IPv6LL Jan 13 23:49:16.258425 containerd[1676]: time="2026-01-13T23:49:16.258231837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n95xs,Uid:a8c70103-7cce-4b73-89d9-3436fcb8e708,Namespace:kube-system,Attempt:0,}" Jan 13 23:49:16.258926 containerd[1676]: time="2026-01-13T23:49:16.258820280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddfc947b7-h454c,Uid:4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f,Namespace:calico-apiserver,Attempt:0,}" Jan 13 23:49:16.258926 containerd[1676]: time="2026-01-13T23:49:16.258903240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l7njz,Uid:afe7c376-9383-4b14-9626-1a508d91c894,Namespace:kube-system,Attempt:0,}" Jan 13 23:49:16.386733 systemd-networkd[1586]: calia46258b2da0: Link UP Jan 13 23:49:16.387450 systemd-networkd[1586]: calia46258b2da0: Gained carrier Jan 13 23:49:16.393235 kubelet[2897]: E0113 23:49:16.393188 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" podUID="705f5b22-6117-46b0-94d6-546618492a26" Jan 13 23:49:16.408846 containerd[1676]: 2026-01-13 23:49:16.311 [INFO][4560] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--n95xs-eth0 coredns-668d6bf9bc- kube-system a8c70103-7cce-4b73-89d9-3436fcb8e708 802 0 2026-01-13 23:48:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4578-0-0-p-89582bef9b coredns-668d6bf9bc-n95xs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia46258b2da0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" Namespace="kube-system" Pod="coredns-668d6bf9bc-n95xs" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--n95xs-" Jan 13 23:49:16.408846 containerd[1676]: 
2026-01-13 23:49:16.312 [INFO][4560] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" Namespace="kube-system" Pod="coredns-668d6bf9bc-n95xs" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--n95xs-eth0" Jan 13 23:49:16.408846 containerd[1676]: 2026-01-13 23:49:16.341 [INFO][4590] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" HandleID="k8s-pod-network.b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" Workload="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--n95xs-eth0" Jan 13 23:49:16.409444 containerd[1676]: 2026-01-13 23:49:16.341 [INFO][4590] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" HandleID="k8s-pod-network.b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" Workload="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--n95xs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000508ad0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4578-0-0-p-89582bef9b", "pod":"coredns-668d6bf9bc-n95xs", "timestamp":"2026-01-13 23:49:16.340999564 +0000 UTC"}, Hostname:"ci-4578-0-0-p-89582bef9b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:49:16.409444 containerd[1676]: 2026-01-13 23:49:16.341 [INFO][4590] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:49:16.409444 containerd[1676]: 2026-01-13 23:49:16.341 [INFO][4590] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 13 23:49:16.409444 containerd[1676]: 2026-01-13 23:49:16.341 [INFO][4590] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578-0-0-p-89582bef9b' Jan 13 23:49:16.409444 containerd[1676]: 2026-01-13 23:49:16.355 [INFO][4590] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.409444 containerd[1676]: 2026-01-13 23:49:16.360 [INFO][4590] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.409444 containerd[1676]: 2026-01-13 23:49:16.365 [INFO][4590] ipam/ipam.go 511: Trying affinity for 192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.409444 containerd[1676]: 2026-01-13 23:49:16.367 [INFO][4590] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.409444 containerd[1676]: 2026-01-13 23:49:16.370 [INFO][4590] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.409667 containerd[1676]: 2026-01-13 23:49:16.370 [INFO][4590] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.409667 containerd[1676]: 2026-01-13 23:49:16.371 [INFO][4590] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322 Jan 13 23:49:16.409667 containerd[1676]: 2026-01-13 23:49:16.375 [INFO][4590] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.409667 containerd[1676]: 2026-01-13 23:49:16.382 [INFO][4590] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.86.132/26] block=192.168.86.128/26 handle="k8s-pod-network.b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.409667 containerd[1676]: 2026-01-13 23:49:16.382 [INFO][4590] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.132/26] handle="k8s-pod-network.b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.409667 containerd[1676]: 2026-01-13 23:49:16.382 [INFO][4590] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 13 23:49:16.409667 containerd[1676]: 2026-01-13 23:49:16.382 [INFO][4590] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.86.132/26] IPv6=[] ContainerID="b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" HandleID="k8s-pod-network.b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" Workload="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--n95xs-eth0" Jan 13 23:49:16.409880 containerd[1676]: 2026-01-13 23:49:16.384 [INFO][4560] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" Namespace="kube-system" Pod="coredns-668d6bf9bc-n95xs" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--n95xs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--n95xs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a8c70103-7cce-4b73-89d9-3436fcb8e708", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 48, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-p-89582bef9b", ContainerID:"", Pod:"coredns-668d6bf9bc-n95xs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia46258b2da0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:49:16.409880 containerd[1676]: 2026-01-13 23:49:16.384 [INFO][4560] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.132/32] ContainerID="b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" Namespace="kube-system" Pod="coredns-668d6bf9bc-n95xs" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--n95xs-eth0" Jan 13 23:49:16.409880 containerd[1676]: 2026-01-13 23:49:16.384 [INFO][4560] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia46258b2da0 ContainerID="b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" Namespace="kube-system" Pod="coredns-668d6bf9bc-n95xs" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--n95xs-eth0" Jan 13 23:49:16.409880 containerd[1676]: 2026-01-13 23:49:16.387 [INFO][4560] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-n95xs" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--n95xs-eth0" Jan 13 23:49:16.409880 containerd[1676]: 2026-01-13 23:49:16.388 [INFO][4560] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" Namespace="kube-system" Pod="coredns-668d6bf9bc-n95xs" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--n95xs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--n95xs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a8c70103-7cce-4b73-89d9-3436fcb8e708", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 48, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-p-89582bef9b", ContainerID:"b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322", Pod:"coredns-668d6bf9bc-n95xs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia46258b2da0", MAC:"96:b6:1d:54:cf:f1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:49:16.409880 containerd[1676]: 2026-01-13 23:49:16.405 [INFO][4560] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" Namespace="kube-system" Pod="coredns-668d6bf9bc-n95xs" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--n95xs-eth0" Jan 13 23:49:16.423000 audit[4626]: NETFILTER_CFG table=filter:131 family=2 entries=50 op=nft_register_chain pid=4626 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:49:16.423000 audit[4626]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24928 a0=3 a1=ffffc0fe9970 a2=0 a3=ffffa6619fa8 items=0 ppid=4166 pid=4626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.423000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:49:16.438082 containerd[1676]: time="2026-01-13T23:49:16.438016641Z" 
level=info msg="connecting to shim b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322" address="unix:///run/containerd/s/7648426c6d62fa8c6aafb45bedee9b54403816acb50ccd8538b6bed608c86422" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:49:16.463169 systemd[1]: Started cri-containerd-b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322.scope - libcontainer container b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322. Jan 13 23:49:16.473000 audit: BPF prog-id=221 op=LOAD Jan 13 23:49:16.474000 audit: BPF prog-id=222 op=LOAD Jan 13 23:49:16.474000 audit[4647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=4636 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234346634373435313833653935616233373537363063343934616134 Jan 13 23:49:16.474000 audit: BPF prog-id=222 op=UNLOAD Jan 13 23:49:16.474000 audit[4647]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4636 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234346634373435313833653935616233373537363063343934616134 Jan 13 23:49:16.474000 audit: BPF prog-id=223 op=LOAD Jan 13 23:49:16.474000 audit[4647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=4636 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234346634373435313833653935616233373537363063343934616134 Jan 13 23:49:16.474000 audit: BPF prog-id=224 op=LOAD Jan 13 23:49:16.474000 audit[4647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=4636 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234346634373435313833653935616233373537363063343934616134 Jan 13 23:49:16.474000 audit: BPF prog-id=224 op=UNLOAD Jan 13 23:49:16.474000 audit[4647]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4636 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234346634373435313833653935616233373537363063343934616134 Jan 13 23:49:16.474000 audit: BPF prog-id=223 op=UNLOAD Jan 13 23:49:16.474000 audit[4647]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4636 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234346634373435313833653935616233373537363063343934616134 Jan 13 23:49:16.474000 audit: BPF prog-id=225 op=LOAD Jan 13 23:49:16.474000 audit[4647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=4636 pid=4647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234346634373435313833653935616233373537363063343934616134 Jan 13 23:49:16.503394 systemd-networkd[1586]: caliec082d2ba73: Link UP Jan 13 23:49:16.504689 systemd-networkd[1586]: caliec082d2ba73: Gained carrier Jan 13 23:49:16.512930 containerd[1676]: time="2026-01-13T23:49:16.512815449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-n95xs,Uid:a8c70103-7cce-4b73-89d9-3436fcb8e708,Namespace:kube-system,Attempt:0,} returns sandbox id \"b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322\"" Jan 13 23:49:16.517266 containerd[1676]: time="2026-01-13T23:49:16.517227910Z" level=info msg="CreateContainer within sandbox \"b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 23:49:16.521583 containerd[1676]: 2026-01-13 23:49:16.314 [INFO][4549] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--h454c-eth0 calico-apiserver-5ddfc947b7- calico-apiserver 4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f 805 0 2026-01-13 23:48:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ddfc947b7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4578-0-0-p-89582bef9b calico-apiserver-5ddfc947b7-h454c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliec082d2ba73 [] [] }} ContainerID="761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" Namespace="calico-apiserver" Pod="calico-apiserver-5ddfc947b7-h454c" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--h454c-" Jan 13 23:49:16.521583 
containerd[1676]: 2026-01-13 23:49:16.314 [INFO][4549] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" Namespace="calico-apiserver" Pod="calico-apiserver-5ddfc947b7-h454c" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--h454c-eth0" Jan 13 23:49:16.521583 containerd[1676]: 2026-01-13 23:49:16.347 [INFO][4596] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" HandleID="k8s-pod-network.761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" Workload="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--h454c-eth0" Jan 13 23:49:16.521583 containerd[1676]: 2026-01-13 23:49:16.347 [INFO][4596] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" HandleID="k8s-pod-network.761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" Workload="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--h454c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b1270), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4578-0-0-p-89582bef9b", "pod":"calico-apiserver-5ddfc947b7-h454c", "timestamp":"2026-01-13 23:49:16.347407955 +0000 UTC"}, Hostname:"ci-4578-0-0-p-89582bef9b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:49:16.521583 containerd[1676]: 2026-01-13 23:49:16.347 [INFO][4596] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:49:16.521583 containerd[1676]: 2026-01-13 23:49:16.382 [INFO][4596] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 13 23:49:16.521583 containerd[1676]: 2026-01-13 23:49:16.382 [INFO][4596] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578-0-0-p-89582bef9b' Jan 13 23:49:16.521583 containerd[1676]: 2026-01-13 23:49:16.454 [INFO][4596] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.521583 containerd[1676]: 2026-01-13 23:49:16.463 [INFO][4596] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.521583 containerd[1676]: 2026-01-13 23:49:16.468 [INFO][4596] ipam/ipam.go 511: Trying affinity for 192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.521583 containerd[1676]: 2026-01-13 23:49:16.471 [INFO][4596] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.521583 containerd[1676]: 2026-01-13 23:49:16.474 [INFO][4596] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.521583 containerd[1676]: 2026-01-13 23:49:16.474 [INFO][4596] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.521583 containerd[1676]: 2026-01-13 23:49:16.476 [INFO][4596] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f Jan 13 23:49:16.521583 containerd[1676]: 2026-01-13 23:49:16.481 [INFO][4596] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.521583 containerd[1676]: 2026-01-13 23:49:16.493 [INFO][4596] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.86.133/26] block=192.168.86.128/26 handle="k8s-pod-network.761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.521583 containerd[1676]: 2026-01-13 23:49:16.493 [INFO][4596] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.133/26] handle="k8s-pod-network.761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.521583 containerd[1676]: 2026-01-13 23:49:16.493 [INFO][4596] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
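Interleaved with the CNI events are kernel audit records whose PROCTITLE field carries the invoked command line hex-encoded, with NUL bytes separating the argv elements. A small, illustrative Go decoder (standard library only, not part of auditd or containerd) makes those values readable; the constant below is copied verbatim from the iptables-nft-restore record above.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// Illustrative decoder for audit PROCTITLE values: hex-encoded argv with
// NUL separators, as seen in the audit records in this log.
func main() {
	// proctitle value copied from one of the NETFILTER_CFG audit records above.
	const proctitle = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	argv := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(argv, " "))
	// Prints: iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}
```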
Jan 13 23:49:16.521583 containerd[1676]: 2026-01-13 23:49:16.494 [INFO][4596] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.86.133/26] IPv6=[] ContainerID="761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" HandleID="k8s-pod-network.761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" Workload="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--h454c-eth0" Jan 13 23:49:16.522128 containerd[1676]: 2026-01-13 23:49:16.499 [INFO][4549] cni-plugin/k8s.go 418: Populated endpoint ContainerID="761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" Namespace="calico-apiserver" Pod="calico-apiserver-5ddfc947b7-h454c" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--h454c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--h454c-eth0", GenerateName:"calico-apiserver-5ddfc947b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 48, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddfc947b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-p-89582bef9b", ContainerID:"", Pod:"calico-apiserver-5ddfc947b7-h454c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec082d2ba73", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:49:16.522128 containerd[1676]: 2026-01-13 23:49:16.499 [INFO][4549] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.133/32] ContainerID="761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" Namespace="calico-apiserver" Pod="calico-apiserver-5ddfc947b7-h454c" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--h454c-eth0" Jan 13 23:49:16.522128 containerd[1676]: 2026-01-13 23:49:16.499 [INFO][4549] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec082d2ba73 ContainerID="761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" Namespace="calico-apiserver" Pod="calico-apiserver-5ddfc947b7-h454c" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--h454c-eth0" Jan 13 23:49:16.522128 containerd[1676]: 2026-01-13 23:49:16.503 [INFO][4549] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" Namespace="calico-apiserver" Pod="calico-apiserver-5ddfc947b7-h454c" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--h454c-eth0" Jan 13 23:49:16.522128 containerd[1676]: 2026-01-13 
23:49:16.503 [INFO][4549] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" Namespace="calico-apiserver" Pod="calico-apiserver-5ddfc947b7-h454c" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--h454c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--h454c-eth0", GenerateName:"calico-apiserver-5ddfc947b7-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 48, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddfc947b7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-p-89582bef9b", ContainerID:"761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f", Pod:"calico-apiserver-5ddfc947b7-h454c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.86.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec082d2ba73", MAC:"82:35:76:cd:ce:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:49:16.522128 containerd[1676]: 2026-01-13 23:49:16.516 [INFO][4549] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" Namespace="calico-apiserver" Pod="calico-apiserver-5ddfc947b7-h454c" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-calico--apiserver--5ddfc947b7--h454c-eth0" Jan 13 23:49:16.531000 audit[4683]: NETFILTER_CFG table=filter:132 family=2 entries=49 op=nft_register_chain pid=4683 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:49:16.531000 audit[4683]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25452 a0=3 a1=ffffe38852c0 a2=0 a3=ffff8b0f5fa8 items=0 ppid=4166 pid=4683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.531000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:49:16.539254 containerd[1676]: time="2026-01-13T23:49:16.539203218Z" level=info msg="Container ab781afc0ab01b47ce1f175ab5653f102f22d15e850a61489c574d50f9e851e8: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:49:16.552438 containerd[1676]: time="2026-01-13T23:49:16.552308803Z" level=info msg="connecting to shim 761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f" 
address="unix:///run/containerd/s/94c77fec7e9c8fb8a4485393660469c69ef27bd48d5c88ca5e8664543afba1af" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:49:16.560587 containerd[1676]: time="2026-01-13T23:49:16.560515243Z" level=info msg="CreateContainer within sandbox \"b44f4745183e95ab375760c494aa4e649e6b59f4377113b7783692c1bce13322\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ab781afc0ab01b47ce1f175ab5653f102f22d15e850a61489c574d50f9e851e8\"" Jan 13 23:49:16.561148 containerd[1676]: time="2026-01-13T23:49:16.561081566Z" level=info msg="StartContainer for \"ab781afc0ab01b47ce1f175ab5653f102f22d15e850a61489c574d50f9e851e8\"" Jan 13 23:49:16.564030 containerd[1676]: time="2026-01-13T23:49:16.563991900Z" level=info msg="connecting to shim ab781afc0ab01b47ce1f175ab5653f102f22d15e850a61489c574d50f9e851e8" address="unix:///run/containerd/s/7648426c6d62fa8c6aafb45bedee9b54403816acb50ccd8538b6bed608c86422" protocol=ttrpc version=3 Jan 13 23:49:16.585433 systemd[1]: Started cri-containerd-761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f.scope - libcontainer container 761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f. Jan 13 23:49:16.589538 systemd[1]: Started cri-containerd-ab781afc0ab01b47ce1f175ab5653f102f22d15e850a61489c574d50f9e851e8.scope - libcontainer container ab781afc0ab01b47ce1f175ab5653f102f22d15e850a61489c574d50f9e851e8. Jan 13 23:49:16.596000 audit: BPF prog-id=226 op=LOAD Jan 13 23:49:16.598000 audit: BPF prog-id=227 op=LOAD Jan 13 23:49:16.598000 audit[4703]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4692 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736316264613130653963343338636333343030376132393032343536 Jan 13 23:49:16.598000 audit: BPF prog-id=227 op=UNLOAD Jan 13 23:49:16.598000 audit[4703]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4692 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736316264613130653963343338636333343030376132393032343536 Jan 13 23:49:16.598000 audit: BPF prog-id=228 op=LOAD Jan 13 23:49:16.598000 audit[4703]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4692 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736316264613130653963343338636333343030376132393032343536 Jan 13 23:49:16.598000 audit: BPF prog-id=229 
op=LOAD Jan 13 23:49:16.598000 audit[4703]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4692 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736316264613130653963343338636333343030376132393032343536 Jan 13 23:49:16.598000 audit: BPF prog-id=229 op=UNLOAD Jan 13 23:49:16.598000 audit[4703]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4692 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736316264613130653963343338636333343030376132393032343536 Jan 13 23:49:16.598000 audit: BPF prog-id=228 op=UNLOAD Jan 13 23:49:16.598000 audit[4703]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4692 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736316264613130653963343338636333343030376132393032343536 Jan 13 23:49:16.598000 audit: BPF prog-id=230 op=LOAD Jan 13 23:49:16.598000 audit[4703]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4692 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.598000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736316264613130653963343338636333343030376132393032343536 Jan 13 23:49:16.606000 audit: BPF prog-id=231 op=LOAD Jan 13 23:49:16.607000 audit: BPF prog-id=232 op=LOAD Jan 13 23:49:16.607000 audit[4709]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4636 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162373831616663306162303162343763653166313735616235363533 Jan 13 23:49:16.607000 audit: BPF prog-id=232 op=UNLOAD Jan 13 23:49:16.607000 audit[4709]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 
a0=15 a1=0 a2=0 a3=0 items=0 ppid=4636 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162373831616663306162303162343763653166313735616235363533 Jan 13 23:49:16.607000 audit: BPF prog-id=233 op=LOAD Jan 13 23:49:16.607000 audit[4709]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4636 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162373831616663306162303162343763653166313735616235363533 Jan 13 23:49:16.607000 audit: BPF prog-id=234 op=LOAD Jan 13 23:49:16.607000 audit[4709]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4636 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162373831616663306162303162343763653166313735616235363533 Jan 13 23:49:16.607000 audit: BPF prog-id=234 op=UNLOAD Jan 13 23:49:16.607000 audit[4709]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4636 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.607000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162373831616663306162303162343763653166313735616235363533 Jan 13 23:49:16.609586 systemd-networkd[1586]: cali4bd3d9b54b0: Link UP Jan 13 23:49:16.609921 systemd-networkd[1586]: cali4bd3d9b54b0: Gained carrier Jan 13 23:49:16.609000 audit: BPF prog-id=233 op=UNLOAD Jan 13 23:49:16.609000 audit[4709]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4636 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162373831616663306162303162343763653166313735616235363533 Jan 13 23:49:16.609000 audit: BPF prog-id=235 op=LOAD Jan 13 23:49:16.609000 audit[4709]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 
a1=4000178648 a2=98 a3=0 items=0 ppid=4636 pid=4709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6162373831616663306162303162343763653166313735616235363533 Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.331 [INFO][4565] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--l7njz-eth0 coredns-668d6bf9bc- kube-system afe7c376-9383-4b14-9626-1a508d91c894 800 0 2026-01-13 23:48:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4578-0-0-p-89582bef9b coredns-668d6bf9bc-l7njz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4bd3d9b54b0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-l7njz" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--l7njz-" Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.331 [INFO][4565] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-l7njz" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--l7njz-eth0" Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.364 [INFO][4607] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" HandleID="k8s-pod-network.d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" Workload="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--l7njz-eth0" Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.364 [INFO][4607] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" HandleID="k8s-pod-network.d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" Workload="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--l7njz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b05b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4578-0-0-p-89582bef9b", "pod":"coredns-668d6bf9bc-l7njz", "timestamp":"2026-01-13 23:49:16.364111957 +0000 UTC"}, Hostname:"ci-4578-0-0-p-89582bef9b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.364 [INFO][4607] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.493 [INFO][4607] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.493 [INFO][4607] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578-0-0-p-89582bef9b' Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.555 [INFO][4607] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.563 [INFO][4607] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.569 [INFO][4607] ipam/ipam.go 511: Trying affinity for 192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.572 [INFO][4607] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.575 [INFO][4607] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.575 [INFO][4607] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.577 [INFO][4607] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.590 [INFO][4607] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.599 [INFO][4607] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.86.134/26] block=192.168.86.128/26 handle="k8s-pod-network.d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.599 [INFO][4607] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.134/26] handle="k8s-pod-network.d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.599 [INFO][4607] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
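The dataplane records above set the host-side veth name for each endpoint (calia46258b2da0, caliec082d2ba73, cali4bd3d9b54b0), and systemd-networkd reports those links coming up and gaining carrier. If one wanted to cross-check them on the node, a minimal Go sketch (standard library only, a hypothetical helper rather than anything from Calico) could enumerate the interfaces carrying the cali prefix:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// Hypothetical helper: list host interfaces whose names carry Calico's
// "cali" prefix, matching the veths named in the dataplane records above.
func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, "cali") {
			continue
		}
		fmt.Printf("%-16s flags=%s mac=%s\n", ifc.Name, ifc.Flags, ifc.HardwareAddr)
	}
}
```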
Jan 13 23:49:16.632650 containerd[1676]: 2026-01-13 23:49:16.599 [INFO][4607] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.86.134/26] IPv6=[] ContainerID="d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" HandleID="k8s-pod-network.d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" Workload="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--l7njz-eth0" Jan 13 23:49:16.633241 containerd[1676]: 2026-01-13 23:49:16.606 [INFO][4565] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-l7njz" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--l7njz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--l7njz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"afe7c376-9383-4b14-9626-1a508d91c894", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 48, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-p-89582bef9b", ContainerID:"", Pod:"coredns-668d6bf9bc-l7njz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4bd3d9b54b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:49:16.633241 containerd[1676]: 2026-01-13 23:49:16.606 [INFO][4565] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.134/32] ContainerID="d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-l7njz" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--l7njz-eth0" Jan 13 23:49:16.633241 containerd[1676]: 2026-01-13 23:49:16.606 [INFO][4565] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4bd3d9b54b0 ContainerID="d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-l7njz" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--l7njz-eth0" Jan 13 23:49:16.633241 containerd[1676]: 2026-01-13 23:49:16.610 [INFO][4565] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-l7njz" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--l7njz-eth0" Jan 13 23:49:16.633241 containerd[1676]: 2026-01-13 23:49:16.610 [INFO][4565] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-l7njz" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--l7njz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--l7njz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"afe7c376-9383-4b14-9626-1a508d91c894", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 48, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-p-89582bef9b", ContainerID:"d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf", Pod:"coredns-668d6bf9bc-l7njz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.86.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4bd3d9b54b0", MAC:"52:a4:1b:49:e5:ca", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:49:16.633241 containerd[1676]: 2026-01-13 23:49:16.629 [INFO][4565] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" Namespace="kube-system" Pod="coredns-668d6bf9bc-l7njz" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-coredns--668d6bf9bc--l7njz-eth0" Jan 13 23:49:16.644423 containerd[1676]: time="2026-01-13T23:49:16.644389056Z" level=info msg="StartContainer for \"ab781afc0ab01b47ce1f175ab5653f102f22d15e850a61489c574d50f9e851e8\" returns successfully" Jan 13 23:49:16.645000 audit[4768]: NETFILTER_CFG table=filter:133 family=2 entries=48 op=nft_register_chain pid=4768 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:49:16.645000 audit[4768]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22720 a0=3 a1=ffffc727f7e0 a2=0 a3=ffffb22e1fa8 items=0 ppid=4166 pid=4768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.645000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:49:16.649530 containerd[1676]: time="2026-01-13T23:49:16.649469601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddfc947b7-h454c,Uid:4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"761bda10e9c438cc34007a290245658473496058eb30679ee599511a23e3e47f\"" Jan 13 23:49:16.652382 containerd[1676]: time="2026-01-13T23:49:16.652015293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:49:16.668580 containerd[1676]: time="2026-01-13T23:49:16.668098412Z" level=info msg="connecting to shim d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf" address="unix:///run/containerd/s/366573d28496b19e26e85c1943c204576211280e354ee3f4f45c062d89cf07b5" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:49:16.691171 systemd[1]: Started cri-containerd-d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf.scope - libcontainer container d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf. Jan 13 23:49:16.703000 audit: BPF prog-id=236 op=LOAD Jan 13 23:49:16.704000 audit: BPF prog-id=237 op=LOAD Jan 13 23:49:16.704000 audit[4792]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4779 pid=4792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437646163313338366431366437653137633631333133613538323063 Jan 13 23:49:16.704000 audit: BPF prog-id=237 op=UNLOAD Jan 13 23:49:16.704000 audit[4792]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437646163313338366431366437653137633631333133613538323063 Jan 13 23:49:16.704000 audit: BPF prog-id=238 op=LOAD Jan 13 23:49:16.704000 audit[4792]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4779 pid=4792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437646163313338366431366437653137633631333133613538323063 Jan 13 23:49:16.704000 audit: BPF prog-id=239 op=LOAD Jan 13 23:49:16.704000 audit[4792]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4779 pid=4792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437646163313338366431366437653137633631333133613538323063 Jan 13 23:49:16.704000 audit: BPF prog-id=239 op=UNLOAD Jan 13 23:49:16.704000 audit[4792]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437646163313338366431366437653137633631333133613538323063 Jan 13 23:49:16.704000 audit: BPF prog-id=238 op=UNLOAD Jan 13 23:49:16.704000 audit[4792]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437646163313338366431366437653137633631333133613538323063 Jan 13 23:49:16.704000 audit: BPF prog-id=240 op=LOAD Jan 13 23:49:16.704000 audit[4792]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4779 pid=4792 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437646163313338366431366437653137633631333133613538323063 Jan 13 23:49:16.729169 containerd[1676]: time="2026-01-13T23:49:16.729129472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-l7njz,Uid:afe7c376-9383-4b14-9626-1a508d91c894,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf\"" Jan 13 23:49:16.731996 containerd[1676]: time="2026-01-13T23:49:16.731843446Z" level=info msg="CreateContainer within sandbox \"d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 23:49:16.741127 containerd[1676]: time="2026-01-13T23:49:16.741090331Z" level=info msg="Container bf6105c6cf3d99e6d266a9eacb8f4a88d32f453b53ac97fcf2621dac92ea111e: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:49:16.747551 containerd[1676]: time="2026-01-13T23:49:16.747513603Z" level=info msg="CreateContainer within sandbox \"d7dac1386d16d7e17c61313a5820c21569a0d55a1e17b54d5f559b9d34c314bf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"bf6105c6cf3d99e6d266a9eacb8f4a88d32f453b53ac97fcf2621dac92ea111e\"" Jan 13 23:49:16.748224 containerd[1676]: time="2026-01-13T23:49:16.748193646Z" level=info msg="StartContainer for \"bf6105c6cf3d99e6d266a9eacb8f4a88d32f453b53ac97fcf2621dac92ea111e\"" Jan 13 23:49:16.749300 containerd[1676]: time="2026-01-13T23:49:16.749258851Z" level=info msg="connecting to shim bf6105c6cf3d99e6d266a9eacb8f4a88d32f453b53ac97fcf2621dac92ea111e" address="unix:///run/containerd/s/366573d28496b19e26e85c1943c204576211280e354ee3f4f45c062d89cf07b5" protocol=ttrpc version=3 Jan 13 23:49:16.772191 systemd[1]: Started cri-containerd-bf6105c6cf3d99e6d266a9eacb8f4a88d32f453b53ac97fcf2621dac92ea111e.scope - libcontainer container bf6105c6cf3d99e6d266a9eacb8f4a88d32f453b53ac97fcf2621dac92ea111e. Jan 13 23:49:16.785000 audit: BPF prog-id=241 op=LOAD Jan 13 23:49:16.785000 audit: BPF prog-id=242 op=LOAD Jan 13 23:49:16.785000 audit[4819]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4779 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266363130356336636633643939653664323636613965616362386634 Jan 13 23:49:16.785000 audit: BPF prog-id=242 op=UNLOAD Jan 13 23:49:16.785000 audit[4819]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.785000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266363130356336636633643939653664323636613965616362386634 Jan 13 23:49:16.786000 audit: BPF prog-id=243 op=LOAD Jan 13 23:49:16.786000 audit[4819]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4779 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266363130356336636633643939653664323636613965616362386634 Jan 13 23:49:16.786000 audit: BPF prog-id=244 op=LOAD Jan 13 23:49:16.786000 audit[4819]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4779 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.786000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266363130356336636633643939653664323636613965616362386634 Jan 13 23:49:16.786000 audit: BPF prog-id=244 op=UNLOAD Jan 13 23:49:16.786000 audit[4819]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266363130356336636633643939653664323636613965616362386634 Jan 13 23:49:16.786000 audit: BPF prog-id=243 op=UNLOAD Jan 13 23:49:16.786000 audit[4819]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4779 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266363130356336636633643939653664323636613965616362386634 Jan 13 23:49:16.786000 audit: BPF prog-id=245 op=LOAD Jan 13 23:49:16.786000 audit[4819]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4779 pid=4819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:16.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266363130356336636633643939653664323636613965616362386634 Jan 13 23:49:16.803307 containerd[1676]: time="2026-01-13T23:49:16.803102116Z" level=info msg="StartContainer for \"bf6105c6cf3d99e6d266a9eacb8f4a88d32f453b53ac97fcf2621dac92ea111e\" returns successfully" Jan 13 23:49:16.982251 containerd[1676]: time="2026-01-13T23:49:16.982179157Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:16.983528 containerd[1676]: time="2026-01-13T23:49:16.983490123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:49:16.983635 containerd[1676]: time="2026-01-13T23:49:16.983587484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:16.983758 kubelet[2897]: E0113 23:49:16.983715 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:49:16.983807 kubelet[2897]: E0113 23:49:16.983768 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:49:16.984035 kubelet[2897]: E0113 23:49:16.983927 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8svz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5ddfc947b7-h454c_calico-apiserver(4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:16.985396 kubelet[2897]: E0113 23:49:16.985312 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" podUID="4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f" Jan 13 23:49:17.258674 containerd[1676]: time="2026-01-13T23:49:17.258634476Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-8bgtx,Uid:9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5,Namespace:calico-system,Attempt:0,}" Jan 13 23:49:17.360992 systemd-networkd[1586]: cali83418ab6037: Link UP Jan 13 23:49:17.361485 systemd-networkd[1586]: cali83418ab6037: Gained carrier Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.294 [INFO][4853] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578--0--0--p--89582bef9b-k8s-csi--node--driver--8bgtx-eth0 csi-node-driver- calico-system 9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5 699 0 2026-01-13 23:48:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4578-0-0-p-89582bef9b csi-node-driver-8bgtx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali83418ab6037 [] [] }} ContainerID="095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" Namespace="calico-system" Pod="csi-node-driver-8bgtx" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-csi--node--driver--8bgtx-" Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.295 [INFO][4853] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" Namespace="calico-system" Pod="csi-node-driver-8bgtx" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-csi--node--driver--8bgtx-eth0" Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.319 [INFO][4868] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" HandleID="k8s-pod-network.095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" Workload="ci--4578--0--0--p--89582bef9b-k8s-csi--node--driver--8bgtx-eth0" Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.319 [INFO][4868] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" HandleID="k8s-pod-network.095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" Workload="ci--4578--0--0--p--89582bef9b-k8s-csi--node--driver--8bgtx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d9b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578-0-0-p-89582bef9b", "pod":"csi-node-driver-8bgtx", "timestamp":"2026-01-13 23:49:17.319628576 +0000 UTC"}, Hostname:"ci-4578-0-0-p-89582bef9b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.319 [INFO][4868] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.319 [INFO][4868] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
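The audit records above were emitted with arch=c00000b7, the aarch64 audit architecture; on that ABI syscall 280 is bpf(2) and syscall 57 is close(2), so the BPF LOAD/UNLOAD pairs are runc's bpf(2) and close(2) calls made while setting up the containers (an inference from the surrounding containerd activity, not something the log states). The PROCTITLE field is the process's argv, hex-encoded with NUL separators and truncated by the audit subsystem, which is why the container IDs inside it appear cut short. A minimal Python sketch for turning such a value back into a readable command line; the hex literal is a shortened excerpt of the runc records above:

```python
# Decode an audit PROCTITLE value: hex-encoded argv joined by NUL bytes.
def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

# Shortened excerpt of the runc PROCTITLE records above.
sample = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
print(decode_proctitle(sample))
# ['runc', '--root', '/run/containerd/runc/k8s.io']
```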
Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.319 [INFO][4868] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578-0-0-p-89582bef9b' Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.329 [INFO][4868] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.333 [INFO][4868] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.338 [INFO][4868] ipam/ipam.go 511: Trying affinity for 192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.340 [INFO][4868] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.342 [INFO][4868] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.343 [INFO][4868] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.344 [INFO][4868] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31 Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.348 [INFO][4868] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.356 [INFO][4868] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.86.135/26] block=192.168.86.128/26 handle="k8s-pod-network.095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.356 [INFO][4868] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.135/26] handle="k8s-pod-network.095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.356 [INFO][4868] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
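The IPAM entries above claim 192.168.86.135 for csi-node-driver-8bgtx out of the node's affine block 192.168.86.128/26. A quick standard-library check that the claimed address really falls inside that block, and how many addresses a /26 affinity block holds:

```python
# Values taken from the ipam/ipam.go entries above.
import ipaddress

block = ipaddress.ip_network("192.168.86.128/26")
claimed = ipaddress.ip_address("192.168.86.135")

print(claimed in block)      # True - the claimed IP is inside the affine block
print(block.num_addresses)   # 64 - total addresses in a /26 block
```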
Jan 13 23:49:17.378821 containerd[1676]: 2026-01-13 23:49:17.356 [INFO][4868] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.86.135/26] IPv6=[] ContainerID="095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" HandleID="k8s-pod-network.095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" Workload="ci--4578--0--0--p--89582bef9b-k8s-csi--node--driver--8bgtx-eth0" Jan 13 23:49:17.379468 containerd[1676]: 2026-01-13 23:49:17.358 [INFO][4853] cni-plugin/k8s.go 418: Populated endpoint ContainerID="095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" Namespace="calico-system" Pod="csi-node-driver-8bgtx" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-csi--node--driver--8bgtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--p--89582bef9b-k8s-csi--node--driver--8bgtx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 48, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-p-89582bef9b", ContainerID:"", Pod:"csi-node-driver-8bgtx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali83418ab6037", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:49:17.379468 containerd[1676]: 2026-01-13 23:49:17.358 [INFO][4853] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.135/32] ContainerID="095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" Namespace="calico-system" Pod="csi-node-driver-8bgtx" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-csi--node--driver--8bgtx-eth0" Jan 13 23:49:17.379468 containerd[1676]: 2026-01-13 23:49:17.358 [INFO][4853] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83418ab6037 ContainerID="095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" Namespace="calico-system" Pod="csi-node-driver-8bgtx" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-csi--node--driver--8bgtx-eth0" Jan 13 23:49:17.379468 containerd[1676]: 2026-01-13 23:49:17.361 [INFO][4853] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" Namespace="calico-system" Pod="csi-node-driver-8bgtx" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-csi--node--driver--8bgtx-eth0" Jan 13 23:49:17.379468 containerd[1676]: 2026-01-13 23:49:17.363 [INFO][4853] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" Namespace="calico-system" Pod="csi-node-driver-8bgtx" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-csi--node--driver--8bgtx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--p--89582bef9b-k8s-csi--node--driver--8bgtx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 48, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-p-89582bef9b", ContainerID:"095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31", Pod:"csi-node-driver-8bgtx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.86.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali83418ab6037", MAC:"a6:af:d4:32:60:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:49:17.379468 containerd[1676]: 2026-01-13 23:49:17.376 [INFO][4853] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" Namespace="calico-system" Pod="csi-node-driver-8bgtx" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-csi--node--driver--8bgtx-eth0" Jan 13 23:49:17.391000 audit[4885]: NETFILTER_CFG table=filter:134 family=2 entries=56 op=nft_register_chain pid=4885 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:49:17.391000 audit[4885]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25516 a0=3 a1=ffffee18ee50 a2=0 a3=ffff834affa8 items=0 ppid=4166 pid=4885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:17.391000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:49:17.403212 kubelet[2897]: E0113 23:49:17.403151 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" podUID="4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f" Jan 13 23:49:17.413776 kubelet[2897]: I0113 23:49:17.413707 2897 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-l7njz" podStartSLOduration=43.413687719 podStartE2EDuration="43.413687719s" podCreationTimestamp="2026-01-13 23:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:49:17.413664479 +0000 UTC m=+50.242006691" watchObservedRunningTime="2026-01-13 23:49:17.413687719 +0000 UTC m=+50.242029931" Jan 13 23:49:17.415261 containerd[1676]: time="2026-01-13T23:49:17.415197006Z" level=info msg="connecting to shim 095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31" address="unix:///run/containerd/s/ad9e0356054bbb843b8028c18d4b08ebb547f2d3ab7d5375418e6036f3dcce70" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:49:17.438000 audit[4911]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=4911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:49:17.438000 audit[4911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffca8407c0 a2=0 a3=1 items=0 ppid=3065 pid=4911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:17.438000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:49:17.446858 kubelet[2897]: I0113 23:49:17.446478 2897 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-n95xs" podStartSLOduration=43.4464636 podStartE2EDuration="43.4464636s" podCreationTimestamp="2026-01-13 23:48:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-13 23:49:17.445850877 +0000 UTC m=+50.274193169" watchObservedRunningTime="2026-01-13 23:49:17.4464636 +0000 UTC m=+50.274805812" Jan 13 23:49:17.445000 audit[4911]: NETFILTER_CFG table=nat:136 family=2 entries=14 op=nft_register_rule pid=4911 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:49:17.445000 audit[4911]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffca8407c0 a2=0 a3=1 items=0 ppid=3065 pid=4911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:17.445000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:49:17.457294 systemd[1]: Started cri-containerd-095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31.scope - libcontainer container 095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31. 
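The pod_startup_latency_tracker entries above report podStartSLOduration=43.413687719s for coredns-668d6bf9bc-l7njz and 43.4464636s for coredns-668d6bf9bc-n95xs; in both records the figure equals the gap between podCreationTimestamp and watchObservedRunningTime, with no pull time involved since both pull timestamps are the zero value. Reproducing the arithmetic from the logged timestamps (Python's datetime keeps microseconds only, so the last three digits are dropped):

```python
# Timestamps copied from the coredns-668d6bf9bc-l7njz record above.
from datetime import datetime, timezone

created = datetime(2026, 1, 13, 23, 48, 34, tzinfo=timezone.utc)
# watchObservedRunningTime = 2026-01-13 23:49:17.413687719 +0000 UTC
observed = datetime(2026, 1, 13, 23, 49, 17, 413687, tzinfo=timezone.utc)

print((observed - created).total_seconds())  # 43.413687, matching podStartSLOduration
```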
Jan 13 23:49:17.475000 audit[4927]: NETFILTER_CFG table=filter:137 family=2 entries=17 op=nft_register_rule pid=4927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:49:17.475000 audit[4927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffec27fc80 a2=0 a3=1 items=0 ppid=3065 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:17.475000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:49:17.478000 audit: BPF prog-id=246 op=LOAD Jan 13 23:49:17.478000 audit: BPF prog-id=247 op=LOAD Jan 13 23:49:17.478000 audit[4907]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4895 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:17.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039356265313036643530613764643965313164316637326330636330 Jan 13 23:49:17.478000 audit: BPF prog-id=247 op=UNLOAD Jan 13 23:49:17.478000 audit[4907]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4895 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:17.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039356265313036643530613764643965313164316637326330636330 Jan 13 23:49:17.478000 audit: BPF prog-id=248 op=LOAD Jan 13 23:49:17.478000 audit[4907]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4895 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:17.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039356265313036643530613764643965313164316637326330636330 Jan 13 23:49:17.478000 audit: BPF prog-id=249 op=LOAD Jan 13 23:49:17.478000 audit[4907]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4895 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:17.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039356265313036643530613764643965313164316637326330636330 Jan 13 23:49:17.478000 audit: BPF prog-id=249 op=UNLOAD Jan 13 23:49:17.478000 audit[4907]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4895 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:17.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039356265313036643530613764643965313164316637326330636330 Jan 13 23:49:17.478000 audit: BPF prog-id=248 op=UNLOAD Jan 13 23:49:17.478000 audit[4907]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4895 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:17.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039356265313036643530613764643965313164316637326330636330 Jan 13 23:49:17.478000 audit: BPF prog-id=250 op=LOAD Jan 13 23:49:17.478000 audit[4907]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4895 pid=4907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:17.478000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039356265313036643530613764643965313164316637326330636330 Jan 13 23:49:17.486000 audit[4927]: NETFILTER_CFG table=nat:138 family=2 entries=47 op=nft_register_chain pid=4927 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:49:17.486000 audit[4927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffec27fc80 a2=0 a3=1 items=0 ppid=3065 pid=4927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:17.486000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:49:17.494421 containerd[1676]: time="2026-01-13T23:49:17.494277595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8bgtx,Uid:9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"095be106d50a7dd9e11d1f72c0cc0f44fb216687d41649ee7e9f6b44a8337d31\"" Jan 13 23:49:17.495755 containerd[1676]: time="2026-01-13T23:49:17.495726162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 13 23:49:17.820522 containerd[1676]: time="2026-01-13T23:49:17.820460199Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:17.823436 containerd[1676]: time="2026-01-13T23:49:17.823336613Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 13 23:49:17.823491 containerd[1676]: time="2026-01-13T23:49:17.823439774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:17.823796 kubelet[2897]: E0113 23:49:17.823759 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:49:17.823859 kubelet[2897]: E0113 23:49:17.823808 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:49:17.824288 kubelet[2897]: E0113 23:49:17.823918 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brggj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8bgtx_calico-system(9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:17.826447 containerd[1676]: time="2026-01-13T23:49:17.826419708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 13 23:49:17.837309 systemd-networkd[1586]: caliec082d2ba73: Gained IPv6LL Jan 13 23:49:17.966108 systemd-networkd[1586]: calia46258b2da0: Gained IPv6LL Jan 13 23:49:18.152812 containerd[1676]: time="2026-01-13T23:49:18.152692513Z" 
level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:18.154480 containerd[1676]: time="2026-01-13T23:49:18.154346201Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 13 23:49:18.154480 containerd[1676]: time="2026-01-13T23:49:18.154412641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:18.154743 kubelet[2897]: E0113 23:49:18.154704 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:49:18.154794 kubelet[2897]: E0113 23:49:18.154753 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:49:18.154916 kubelet[2897]: E0113 23:49:18.154876 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brggj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8bgtx_calico-system(9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5): ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:18.156286 kubelet[2897]: E0113 23:49:18.156237 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:49:18.158107 systemd-networkd[1586]: cali4bd3d9b54b0: Gained IPv6LL Jan 13 23:49:18.267065 containerd[1676]: time="2026-01-13T23:49:18.266935955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kd8rx,Uid:1568cf69-d227-4d6c-8d10-61ba58db7902,Namespace:calico-system,Attempt:0,}" Jan 13 23:49:18.365250 systemd-networkd[1586]: calie77c9a9e474: Link UP Jan 13 23:49:18.365990 systemd-networkd[1586]: calie77c9a9e474: Gained carrier Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.303 [INFO][4936] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4578--0--0--p--89582bef9b-k8s-goldmane--666569f655--kd8rx-eth0 goldmane-666569f655- calico-system 1568cf69-d227-4d6c-8d10-61ba58db7902 797 0 2026-01-13 23:48:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4578-0-0-p-89582bef9b goldmane-666569f655-kd8rx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie77c9a9e474 [] [] }} ContainerID="28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" Namespace="calico-system" Pod="goldmane-666569f655-kd8rx" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-goldmane--666569f655--kd8rx-" Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.303 [INFO][4936] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" Namespace="calico-system" Pod="goldmane-666569f655-kd8rx" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-goldmane--666569f655--kd8rx-eth0" Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.325 [INFO][4951] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" HandleID="k8s-pod-network.28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" Workload="ci--4578--0--0--p--89582bef9b-k8s-goldmane--666569f655--kd8rx-eth0" Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.325 [INFO][4951] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" HandleID="k8s-pod-network.28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" Workload="ci--4578--0--0--p--89582bef9b-k8s-goldmane--666569f655--kd8rx-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a36d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4578-0-0-p-89582bef9b", "pod":"goldmane-666569f655-kd8rx", "timestamp":"2026-01-13 23:49:18.325093521 +0000 UTC"}, Hostname:"ci-4578-0-0-p-89582bef9b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.325 [INFO][4951] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.325 [INFO][4951] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.325 [INFO][4951] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4578-0-0-p-89582bef9b' Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.334 [INFO][4951] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.338 [INFO][4951] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.342 [INFO][4951] ipam/ipam.go 511: Trying affinity for 192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.344 [INFO][4951] ipam/ipam.go 158: Attempting to load block cidr=192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.346 [INFO][4951] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.86.128/26 host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.346 [INFO][4951] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.86.128/26 handle="k8s-pod-network.28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.348 [INFO][4951] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0 Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.354 [INFO][4951] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.86.128/26 handle="k8s-pod-network.28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.361 [INFO][4951] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.86.136/26] block=192.168.86.128/26 handle="k8s-pod-network.28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.361 [INFO][4951] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.86.136/26] handle="k8s-pod-network.28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" host="ci-4578-0-0-p-89582bef9b" Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.361 [INFO][4951] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
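Every image pull in this span fails the same way: ghcr.io answers 404 Not Found for the ghcr.io/flatcar/calico/* references at tag v3.30.4, containerd reports ErrImagePull, and kubelet moves the pods into ImagePullBackOff. A minimal sketch for asking the registry directly whether such a tag exists, assuming ghcr.io's usual anonymous-token flow for public repositories; the token endpoint and Accept types here are illustrative assumptions, not taken from the log:

```python
# Check a tag against the OCI distribution API; repo/tag mirror the failing
# reference in the log. Assumes ghcr.io issues anonymous pull tokens for
# public repositories via its /token endpoint.
import json
import urllib.error
import urllib.request

repo = "flatcar/calico/csi"
tag = "v3.30.4"

token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"
token = json.load(urllib.request.urlopen(token_url))["token"]

req = urllib.request.Request(
    f"https://ghcr.io/v2/{repo}/manifests/{tag}",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.oci.image.index.v1+json, "
                  "application/vnd.docker.distribution.manifest.list.v2+json",
    },
    method="HEAD",
)
try:
    with urllib.request.urlopen(req) as resp:
        print("manifest exists, HTTP", resp.status)
except urllib.error.HTTPError as err:
    print("registry answered HTTP", err.code)  # 404 here matches the containerd error
```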
Jan 13 23:49:18.382719 containerd[1676]: 2026-01-13 23:49:18.361 [INFO][4951] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.86.136/26] IPv6=[] ContainerID="28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" HandleID="k8s-pod-network.28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" Workload="ci--4578--0--0--p--89582bef9b-k8s-goldmane--666569f655--kd8rx-eth0" Jan 13 23:49:18.383396 containerd[1676]: 2026-01-13 23:49:18.362 [INFO][4936] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" Namespace="calico-system" Pod="goldmane-666569f655-kd8rx" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-goldmane--666569f655--kd8rx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--p--89582bef9b-k8s-goldmane--666569f655--kd8rx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1568cf69-d227-4d6c-8d10-61ba58db7902", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-p-89582bef9b", ContainerID:"", Pod:"goldmane-666569f655-kd8rx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.86.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie77c9a9e474", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:49:18.383396 containerd[1676]: 2026-01-13 23:49:18.362 [INFO][4936] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.86.136/32] ContainerID="28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" Namespace="calico-system" Pod="goldmane-666569f655-kd8rx" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-goldmane--666569f655--kd8rx-eth0" Jan 13 23:49:18.383396 containerd[1676]: 2026-01-13 23:49:18.362 [INFO][4936] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie77c9a9e474 ContainerID="28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" Namespace="calico-system" Pod="goldmane-666569f655-kd8rx" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-goldmane--666569f655--kd8rx-eth0" Jan 13 23:49:18.383396 containerd[1676]: 2026-01-13 23:49:18.366 [INFO][4936] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" Namespace="calico-system" Pod="goldmane-666569f655-kd8rx" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-goldmane--666569f655--kd8rx-eth0" Jan 13 23:49:18.383396 containerd[1676]: 2026-01-13 23:49:18.366 [INFO][4936] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" 
Namespace="calico-system" Pod="goldmane-666569f655-kd8rx" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-goldmane--666569f655--kd8rx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4578--0--0--p--89582bef9b-k8s-goldmane--666569f655--kd8rx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"1568cf69-d227-4d6c-8d10-61ba58db7902", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2026, time.January, 13, 23, 48, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4578-0-0-p-89582bef9b", ContainerID:"28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0", Pod:"goldmane-666569f655-kd8rx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.86.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie77c9a9e474", MAC:"5a:70:8c:4d:3e:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 13 23:49:18.383396 containerd[1676]: 2026-01-13 23:49:18.379 [INFO][4936] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" Namespace="calico-system" Pod="goldmane-666569f655-kd8rx" WorkloadEndpoint="ci--4578--0--0--p--89582bef9b-k8s-goldmane--666569f655--kd8rx-eth0" Jan 13 23:49:18.397000 audit[4968]: NETFILTER_CFG table=filter:139 family=2 entries=74 op=nft_register_chain pid=4968 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 13 23:49:18.397000 audit[4968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=35160 a0=3 a1=ffffeb97cb40 a2=0 a3=ffffa7e1ffa8 items=0 ppid=4166 pid=4968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:18.397000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 13 23:49:18.410167 containerd[1676]: time="2026-01-13T23:49:18.409893418Z" level=info msg="connecting to shim 28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0" address="unix:///run/containerd/s/9622e1ce3c77f9cd3b0a66cab10dd2b2611e8730bd3286690ee4d51f264ed4ef" namespace=k8s.io protocol=ttrpc version=3 Jan 13 23:49:18.414135 kubelet[2897]: E0113 23:49:18.413890 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" podUID="4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f" Jan 13 23:49:18.419689 kubelet[2897]: E0113 23:49:18.416109 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:49:18.453826 systemd[1]: Started cri-containerd-28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0.scope - libcontainer container 28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0. Jan 13 23:49:18.465000 audit: BPF prog-id=251 op=LOAD Jan 13 23:49:18.467369 kernel: kauditd_printk_skb: 205 callbacks suppressed Jan 13 23:49:18.467452 kernel: audit: type=1334 audit(1768348158.465:736): prog-id=251 op=LOAD Jan 13 23:49:18.466000 audit: BPF prog-id=252 op=LOAD Jan 13 23:49:18.466000 audit[4990]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=4977 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:18.471546 kernel: audit: type=1334 audit(1768348158.466:737): prog-id=252 op=LOAD Jan 13 23:49:18.471764 kernel: audit: type=1300 audit(1768348158.466:737): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=4977 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:18.471849 kernel: audit: type=1327 audit(1768348158.466:737): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238626464353434383430313266343666666230393232343739663862 Jan 13 23:49:18.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238626464353434383430313266343666666230393232343739663862 Jan 13 23:49:18.466000 audit: BPF prog-id=252 op=UNLOAD Jan 13 23:49:18.466000 audit[4990]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4977 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:18.478694 kernel: audit: type=1334 audit(1768348158.466:738): prog-id=252 op=UNLOAD Jan 13 23:49:18.478742 kernel: audit: type=1300 audit(1768348158.466:738): arch=c00000b7 syscall=57 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4977 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:18.478773 kernel: audit: type=1327 audit(1768348158.466:738): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238626464353434383430313266343666666230393232343739663862 Jan 13 23:49:18.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238626464353434383430313266343666666230393232343739663862 Jan 13 23:49:18.481583 kernel: audit: type=1334 audit(1768348158.466:739): prog-id=253 op=LOAD Jan 13 23:49:18.466000 audit: BPF prog-id=253 op=LOAD Jan 13 23:49:18.482208 kernel: audit: type=1300 audit(1768348158.466:739): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=4977 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:18.466000 audit[4990]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=4977 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:18.466000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238626464353434383430313266343666666230393232343739663862 Jan 13 23:49:18.487747 kernel: audit: type=1327 audit(1768348158.466:739): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238626464353434383430313266343666666230393232343739663862 Jan 13 23:49:18.467000 audit: BPF prog-id=254 op=LOAD Jan 13 23:49:18.467000 audit[4990]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=4977 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:18.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238626464353434383430313266343666666230393232343739663862 Jan 13 23:49:18.467000 audit: BPF prog-id=254 op=UNLOAD Jan 13 23:49:18.467000 audit[4990]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4977 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:18.467000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238626464353434383430313266343666666230393232343739663862 Jan 13 23:49:18.467000 audit: BPF prog-id=253 op=UNLOAD Jan 13 23:49:18.467000 audit[4990]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4977 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:18.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238626464353434383430313266343666666230393232343739663862 Jan 13 23:49:18.467000 audit: BPF prog-id=255 op=LOAD Jan 13 23:49:18.467000 audit[4990]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=4977 pid=4990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:18.467000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238626464353434383430313266343666666230393232343739663862 Jan 13 23:49:18.499600 containerd[1676]: time="2026-01-13T23:49:18.499547419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-kd8rx,Uid:1568cf69-d227-4d6c-8d10-61ba58db7902,Namespace:calico-system,Attempt:0,} returns sandbox id \"28bdd54484012f46ffb0922479f8bb59abc2c99dae9d0818035773855e8c3fc0\"" Jan 13 23:49:18.502074 containerd[1676]: time="2026-01-13T23:49:18.501970991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 13 23:49:18.509000 audit[5016]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5016 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:49:18.509000 audit[5016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffee74c230 a2=0 a3=1 items=0 ppid=3065 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:18.509000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:49:18.520000 audit[5016]: NETFILTER_CFG table=nat:141 family=2 entries=20 op=nft_register_rule pid=5016 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:49:18.520000 audit[5016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffee74c230 a2=0 a3=1 items=0 ppid=3065 pid=5016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:18.520000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:49:18.605272 systemd-networkd[1586]: cali83418ab6037: Gained IPv6LL Jan 13 
23:49:18.839953 containerd[1676]: time="2026-01-13T23:49:18.839840212Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:18.841575 containerd[1676]: time="2026-01-13T23:49:18.841510420Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 13 23:49:18.841655 containerd[1676]: time="2026-01-13T23:49:18.841598781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:18.841821 kubelet[2897]: E0113 23:49:18.841763 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:49:18.841821 kubelet[2897]: E0113 23:49:18.841819 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:49:18.842020 kubelet[2897]: E0113 23:49:18.841939 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85ncl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kd8rx_calico-system(1568cf69-d227-4d6c-8d10-61ba58db7902): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:18.843180 kubelet[2897]: E0113 23:49:18.843124 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kd8rx" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" Jan 13 23:49:19.418154 kubelet[2897]: E0113 23:49:19.418102 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kd8rx" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" Jan 13 23:49:19.418974 kubelet[2897]: E0113 23:49:19.418901 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:49:19.455000 audit[5018]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=5018 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:49:19.455000 audit[5018]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdae83c70 a2=0 a3=1 items=0 ppid=3065 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:19.455000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:49:19.463000 audit[5018]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5018 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:49:19.463000 audit[5018]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdae83c70 a2=0 a3=1 items=0 ppid=3065 pid=5018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:49:19.463000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:49:20.397151 systemd-networkd[1586]: calie77c9a9e474: Gained IPv6LL Jan 13 23:49:20.419300 kubelet[2897]: E0113 23:49:20.419256 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kd8rx" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" Jan 13 23:49:21.260029 containerd[1676]: time="2026-01-13T23:49:21.259586751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 13 23:49:21.591981 containerd[1676]: time="2026-01-13T23:49:21.591726785Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:21.593949 containerd[1676]: time="2026-01-13T23:49:21.593910715Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 13 23:49:21.594031 containerd[1676]: time="2026-01-13T23:49:21.593965276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:21.594201 kubelet[2897]: E0113 23:49:21.594116 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:49:21.594201 kubelet[2897]: E0113 23:49:21.594164 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:49:21.594753 kubelet[2897]: E0113 23:49:21.594255 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cb5fe9369c744a3f8b622ec6bf599501,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vt5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9f6f7744d-z7hgg_calico-system(5a088dfa-d478-4f72-9896-431d1ff39b0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:21.596231 containerd[1676]: time="2026-01-13T23:49:21.596161126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 13 23:49:21.934021 containerd[1676]: time="2026-01-13T23:49:21.933791507Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:21.936371 containerd[1676]: time="2026-01-13T23:49:21.936279119Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 13 23:49:21.936506 containerd[1676]: time="2026-01-13T23:49:21.936364919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:21.936555 kubelet[2897]: E0113 23:49:21.936506 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:49:21.936604 kubelet[2897]: E0113 23:49:21.936565 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:49:21.936729 kubelet[2897]: E0113 23:49:21.936665 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vt5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9f6f7744d-z7hgg_calico-system(5a088dfa-d478-4f72-9896-431d1ff39b0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:21.937947 kubelet[2897]: E0113 23:49:21.937871 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f6f7744d-z7hgg" podUID="5a088dfa-d478-4f72-9896-431d1ff39b0b" Jan 13 23:49:28.258787 containerd[1676]: time="2026-01-13T23:49:28.258749170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:49:28.784840 containerd[1676]: time="2026-01-13T23:49:28.784776077Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:28.786755 containerd[1676]: time="2026-01-13T23:49:28.786689966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 
13 23:49:28.786812 containerd[1676]: time="2026-01-13T23:49:28.786760807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:28.787049 kubelet[2897]: E0113 23:49:28.786945 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:49:28.787049 kubelet[2897]: E0113 23:49:28.787022 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:49:28.787761 kubelet[2897]: E0113 23:49:28.787695 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzspl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5ddfc947b7-qj4hz_calico-apiserver(feb12ef9-da4b-41c3-8609-097b9429383b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:28.789209 kubelet[2897]: E0113 23:49:28.789155 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:49:31.259019 containerd[1676]: time="2026-01-13T23:49:31.258950004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 13 23:49:31.593417 containerd[1676]: time="2026-01-13T23:49:31.593221848Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:31.594970 containerd[1676]: time="2026-01-13T23:49:31.594903536Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 13 23:49:31.595018 containerd[1676]: time="2026-01-13T23:49:31.594976096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:31.595251 kubelet[2897]: E0113 23:49:31.595182 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:49:31.595251 kubelet[2897]: E0113 23:49:31.595239 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:49:31.595607 kubelet[2897]: E0113 23:49:31.595357 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5nz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-65d475c445-nfbnr_calico-system(705f5b22-6117-46b0-94d6-546618492a26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:31.596597 kubelet[2897]: E0113 23:49:31.596550 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" podUID="705f5b22-6117-46b0-94d6-546618492a26" Jan 13 23:49:33.259830 containerd[1676]: time="2026-01-13T23:49:33.259600842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:49:33.583228 containerd[1676]: time="2026-01-13T23:49:33.582973713Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:33.584639 containerd[1676]: time="2026-01-13T23:49:33.584543280Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:49:33.584705 containerd[1676]: time="2026-01-13T23:49:33.584617441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:33.585483 kubelet[2897]: E0113 23:49:33.584887 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:49:33.585483 kubelet[2897]: E0113 23:49:33.584969 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:49:33.585483 
kubelet[2897]: E0113 23:49:33.585203 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8svz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5ddfc947b7-h454c_calico-apiserver(4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:33.586036 containerd[1676]: time="2026-01-13T23:49:33.585274604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 13 23:49:33.586325 kubelet[2897]: E0113 23:49:33.586295 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" podUID="4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f" Jan 13 23:49:33.928115 containerd[1676]: time="2026-01-13T23:49:33.927998249Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:33.929731 containerd[1676]: time="2026-01-13T23:49:33.929611017Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 13 23:49:33.929731 containerd[1676]: time="2026-01-13T23:49:33.929683858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:33.929902 kubelet[2897]: E0113 23:49:33.929863 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:49:33.929987 kubelet[2897]: E0113 23:49:33.929911 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:49:33.930434 kubelet[2897]: E0113 23:49:33.930391 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brggj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8bgtx_calico-system(9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:33.932514 containerd[1676]: time="2026-01-13T23:49:33.932483751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 13 23:49:34.249094 containerd[1676]: time="2026-01-13T23:49:34.249033388Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:34.250515 containerd[1676]: time="2026-01-13T23:49:34.250449995Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 13 23:49:34.250595 containerd[1676]: time="2026-01-13T23:49:34.250463395Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:34.250750 kubelet[2897]: E0113 23:49:34.250709 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:49:34.250820 kubelet[2897]: E0113 23:49:34.250763 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:49:34.250903 kubelet[2897]: E0113 23:49:34.250866 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brggj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8bgtx_calico-system(9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:34.252229 kubelet[2897]: E0113 23:49:34.252169 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:49:34.259741 kubelet[2897]: E0113 23:49:34.259668 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f6f7744d-z7hgg" podUID="5a088dfa-d478-4f72-9896-431d1ff39b0b" Jan 13 23:49:35.259234 containerd[1676]: time="2026-01-13T23:49:35.258697913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 13 23:49:35.593291 containerd[1676]: time="2026-01-13T23:49:35.593123118Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:35.594777 containerd[1676]: time="2026-01-13T23:49:35.594726125Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 13 23:49:35.594843 containerd[1676]: time="2026-01-13T23:49:35.594807726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:35.595065 kubelet[2897]: E0113 23:49:35.594983 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:49:35.595065 kubelet[2897]: E0113 23:49:35.595037 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:49:35.595577 kubelet[2897]: E0113 23:49:35.595159 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85ncl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kd8rx_calico-system(1568cf69-d227-4d6c-8d10-61ba58db7902): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:35.596395 kubelet[2897]: E0113 23:49:35.596357 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kd8rx" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" Jan 13 23:49:40.258702 kubelet[2897]: E0113 23:49:40.258583 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:49:44.258640 kubelet[2897]: E0113 23:49:44.258595 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" podUID="705f5b22-6117-46b0-94d6-546618492a26" Jan 13 23:49:47.263118 kubelet[2897]: E0113 23:49:47.262632 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" podUID="4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f" Jan 13 23:49:48.258778 kubelet[2897]: E0113 23:49:48.258696 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kd8rx" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" Jan 13 23:49:48.259157 containerd[1676]: time="2026-01-13T23:49:48.259105123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 13 23:49:48.261294 kubelet[2897]: E0113 23:49:48.261141 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:49:48.594303 containerd[1676]: time="2026-01-13T23:49:48.594052371Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:48.595592 containerd[1676]: time="2026-01-13T23:49:48.595496698Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 13 23:49:48.595691 containerd[1676]: time="2026-01-13T23:49:48.595583498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:48.595804 kubelet[2897]: E0113 23:49:48.595747 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:49:48.595804 kubelet[2897]: E0113 23:49:48.595791 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:49:48.596249 kubelet[2897]: E0113 23:49:48.595885 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cb5fe9369c744a3f8b622ec6bf599501,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vt5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9f6f7744d-z7hgg_calico-system(5a088dfa-d478-4f72-9896-431d1ff39b0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:48.598303 containerd[1676]: time="2026-01-13T23:49:48.598239111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 13 23:49:48.935024 containerd[1676]: time="2026-01-13T23:49:48.934745486Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:48.936760 containerd[1676]: time="2026-01-13T23:49:48.936653735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 13 23:49:48.936760 containerd[1676]: time="2026-01-13T23:49:48.936731136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:48.936899 kubelet[2897]: E0113 23:49:48.936860 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:49:48.936978 kubelet[2897]: E0113 23:49:48.936918 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:49:48.937110 kubelet[2897]: E0113 23:49:48.937050 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vt5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9f6f7744d-z7hgg_calico-system(5a088dfa-d478-4f72-9896-431d1ff39b0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:48.938320 kubelet[2897]: E0113 23:49:48.938220 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f6f7744d-z7hgg" podUID="5a088dfa-d478-4f72-9896-431d1ff39b0b" Jan 13 23:49:51.260270 containerd[1676]: time="2026-01-13T23:49:51.259324157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:49:51.592616 containerd[1676]: time="2026-01-13T23:49:51.592461675Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:51.594160 containerd[1676]: time="2026-01-13T23:49:51.594072923Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:49:51.594263 containerd[1676]: time="2026-01-13T23:49:51.594166684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:51.594485 kubelet[2897]: E0113 23:49:51.594353 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:49:51.594485 kubelet[2897]: E0113 23:49:51.594469 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:49:51.594897 kubelet[2897]: E0113 23:49:51.594702 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzspl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5ddfc947b7-qj4hz_calico-apiserver(feb12ef9-da4b-41c3-8609-097b9429383b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:51.595894 kubelet[2897]: E0113 23:49:51.595835 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:49:58.259478 containerd[1676]: time="2026-01-13T23:49:58.259423701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 13 23:49:58.599311 containerd[1676]: time="2026-01-13T23:49:58.598947130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:49:58.600708 containerd[1676]: time="2026-01-13T23:49:58.600665739Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 13 23:49:58.600792 containerd[1676]: time="2026-01-13T23:49:58.600755459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 13 23:49:58.601034 kubelet[2897]: E0113 23:49:58.600986 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:49:58.601304 kubelet[2897]: E0113 23:49:58.601046 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:49:58.601304 kubelet[2897]: E0113 23:49:58.601166 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5nz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-65d475c445-nfbnr_calico-system(705f5b22-6117-46b0-94d6-546618492a26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 13 23:49:58.602479 kubelet[2897]: E0113 23:49:58.602426 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" podUID="705f5b22-6117-46b0-94d6-546618492a26" Jan 13 23:50:01.259885 containerd[1676]: time="2026-01-13T23:50:01.259806135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 13 23:50:01.260637 kubelet[2897]: E0113 23:50:01.260337 2897 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f6f7744d-z7hgg" podUID="5a088dfa-d478-4f72-9896-431d1ff39b0b" Jan 13 23:50:01.611019 containerd[1676]: time="2026-01-13T23:50:01.609753016Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:50:01.611567 containerd[1676]: time="2026-01-13T23:50:01.611525225Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 13 23:50:01.611768 containerd[1676]: time="2026-01-13T23:50:01.611599265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 13 23:50:01.611945 kubelet[2897]: E0113 23:50:01.611905 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:50:01.612029 kubelet[2897]: E0113 23:50:01.611994 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:50:01.612385 containerd[1676]: time="2026-01-13T23:50:01.612323709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 13 23:50:01.612548 kubelet[2897]: E0113 23:50:01.612454 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85ncl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kd8rx_calico-system(1568cf69-d227-4d6c-8d10-61ba58db7902): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 13 23:50:01.613916 kubelet[2897]: E0113 23:50:01.613726 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kd8rx" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" Jan 13 23:50:02.135608 containerd[1676]: time="2026-01-13T23:50:02.135550202Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 
13 23:50:02.137705 containerd[1676]: time="2026-01-13T23:50:02.137658292Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 13 23:50:02.137705 containerd[1676]: time="2026-01-13T23:50:02.137726372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 13 23:50:02.137876 kubelet[2897]: E0113 23:50:02.137845 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:50:02.137926 kubelet[2897]: E0113 23:50:02.137888 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:50:02.138102 kubelet[2897]: E0113 23:50:02.138018 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brggj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8bgtx_calico-system(9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 13 23:50:02.141153 containerd[1676]: time="2026-01-13T23:50:02.141127269Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 13 23:50:02.485384 containerd[1676]: time="2026-01-13T23:50:02.485281562Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:50:02.487027 containerd[1676]: time="2026-01-13T23:50:02.486975730Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 13 23:50:02.487153 containerd[1676]: time="2026-01-13T23:50:02.486986330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 13 23:50:02.487282 kubelet[2897]: E0113 23:50:02.487216 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:50:02.487282 kubelet[2897]: E0113 23:50:02.487272 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:50:02.487550 kubelet[2897]: E0113 23:50:02.487472 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brggj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8bgtx_calico-system(9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 13 23:50:02.487735 containerd[1676]: time="2026-01-13T23:50:02.487674653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:50:02.489056 kubelet[2897]: E0113 23:50:02.489001 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:50:02.821628 containerd[1676]: time="2026-01-13T23:50:02.821493135Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:50:02.825128 containerd[1676]: time="2026-01-13T23:50:02.825067912Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:50:02.825253 containerd[1676]: time="2026-01-13T23:50:02.825132593Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:50:02.825333 kubelet[2897]: E0113 23:50:02.825287 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:50:02.825392 kubelet[2897]: E0113 23:50:02.825339 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:50:02.825511 kubelet[2897]: E0113 23:50:02.825464 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8svz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5ddfc947b7-h454c_calico-apiserver(4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:50:02.827152 kubelet[2897]: E0113 23:50:02.827087 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" podUID="4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f" Jan 13 23:50:03.262415 kubelet[2897]: E0113 23:50:03.262350 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:50:14.258636 kubelet[2897]: E0113 23:50:14.258581 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" podUID="705f5b22-6117-46b0-94d6-546618492a26" Jan 13 23:50:15.259829 kubelet[2897]: E0113 23:50:15.259467 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:50:15.261143 kubelet[2897]: E0113 23:50:15.260995 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kd8rx" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" Jan 13 23:50:16.258847 kubelet[2897]: E0113 23:50:16.258723 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f6f7744d-z7hgg" podUID="5a088dfa-d478-4f72-9896-431d1ff39b0b" Jan 13 
23:50:16.259593 kubelet[2897]: E0113 23:50:16.259276 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:50:17.259475 kubelet[2897]: E0113 23:50:17.259432 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" podUID="4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f" Jan 13 23:50:27.259399 kubelet[2897]: E0113 23:50:27.259029 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" podUID="705f5b22-6117-46b0-94d6-546618492a26" Jan 13 23:50:28.259211 kubelet[2897]: E0113 23:50:28.258884 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kd8rx" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" Jan 13 23:50:28.259418 kubelet[2897]: E0113 23:50:28.259244 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:50:29.260583 kubelet[2897]: E0113 23:50:29.260521 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" podUID="4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f" Jan 13 23:50:30.258839 kubelet[2897]: E0113 23:50:30.258787 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:50:31.258968 containerd[1676]: time="2026-01-13T23:50:31.258898098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 13 23:50:31.582650 containerd[1676]: time="2026-01-13T23:50:31.582354248Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:50:31.584442 containerd[1676]: time="2026-01-13T23:50:31.584401378Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 13 23:50:31.584526 containerd[1676]: time="2026-01-13T23:50:31.584412338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 13 23:50:31.585120 kubelet[2897]: E0113 23:50:31.585076 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:50:31.585408 kubelet[2897]: E0113 23:50:31.585131 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:50:31.585408 kubelet[2897]: E0113 23:50:31.585241 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cb5fe9369c744a3f8b622ec6bf599501,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vt5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9f6f7744d-z7hgg_calico-system(5a088dfa-d478-4f72-9896-431d1ff39b0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 13 23:50:31.587887 containerd[1676]: time="2026-01-13T23:50:31.587817235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 13 23:50:31.908924 containerd[1676]: time="2026-01-13T23:50:31.908778533Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:50:31.910753 containerd[1676]: time="2026-01-13T23:50:31.910702503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 13 23:50:31.910841 containerd[1676]: time="2026-01-13T23:50:31.910797823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 13 23:50:31.911049 kubelet[2897]: E0113 23:50:31.911005 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:50:31.911119 kubelet[2897]: E0113 23:50:31.911073 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:50:31.911668 kubelet[2897]: E0113 23:50:31.911236 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vt5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9f6f7744d-z7hgg_calico-system(5a088dfa-d478-4f72-9896-431d1ff39b0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 13 23:50:31.912433 kubelet[2897]: E0113 23:50:31.912391 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f6f7744d-z7hgg" podUID="5a088dfa-d478-4f72-9896-431d1ff39b0b" Jan 13 23:50:39.258625 kubelet[2897]: E0113 23:50:39.258558 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:50:39.259156 containerd[1676]: time="2026-01-13T23:50:39.258839838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 13 23:50:39.584600 containerd[1676]: time="2026-01-13T23:50:39.584467479Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:50:39.585702 containerd[1676]: time="2026-01-13T23:50:39.585649965Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 13 23:50:39.585809 containerd[1676]: time="2026-01-13T23:50:39.585741485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 13 23:50:39.586131 kubelet[2897]: E0113 23:50:39.586068 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:50:39.586131 kubelet[2897]: E0113 23:50:39.586123 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:50:39.586304 kubelet[2897]: E0113 23:50:39.586240 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5nz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-65d475c445-nfbnr_calico-system(705f5b22-6117-46b0-94d6-546618492a26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 13 23:50:39.587749 kubelet[2897]: E0113 23:50:39.587642 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" podUID="705f5b22-6117-46b0-94d6-546618492a26" Jan 13 23:50:39.885003 kernel: kauditd_printk_skb: 24 callbacks suppressed Jan 13 23:50:39.885121 kernel: audit: type=1130 audit(1768348239.882:748): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.15.225:22-4.153.228.146:38610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:50:39.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.15.225:22-4.153.228.146:38610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:50:39.883283 systemd[1]: Started sshd@9-10.0.15.225:22-4.153.228.146:38610.service - OpenSSH per-connection server daemon (4.153.228.146:38610). 
Jan 13 23:50:40.258186 kubelet[2897]: E0113 23:50:40.258127 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" podUID="4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f" Jan 13 23:50:40.434000 audit[5157]: USER_ACCT pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:40.436074 sshd[5157]: Accepted publickey for core from 4.153.228.146 port 38610 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:50:40.437000 audit[5157]: CRED_ACQ pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:40.439241 sshd-session[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:50:40.440989 kernel: audit: type=1101 audit(1768348240.434:749): pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:40.441061 kernel: audit: type=1103 audit(1768348240.437:750): pid=5157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:40.441082 kernel: audit: type=1006 audit(1768348240.437:751): pid=5157 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 13 23:50:40.437000 audit[5157]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc64f76a0 a2=3 a3=0 items=0 ppid=1 pid=5157 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:50:40.445857 kernel: audit: type=1300 audit(1768348240.437:751): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc64f76a0 a2=3 a3=0 items=0 ppid=1 pid=5157 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:50:40.437000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:50:40.447568 kernel: audit: type=1327 audit(1768348240.437:751): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:50:40.449015 systemd-logind[1653]: New session 11 of user core. Jan 13 23:50:40.457203 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 13 23:50:40.458000 audit[5157]: USER_START pid=5157 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:40.460000 audit[5161]: CRED_ACQ pid=5161 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:40.465797 kernel: audit: type=1105 audit(1768348240.458:752): pid=5157 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:40.465864 kernel: audit: type=1103 audit(1768348240.460:753): pid=5161 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:40.814081 sshd[5161]: Connection closed by 4.153.228.146 port 38610 Jan 13 23:50:40.814129 sshd-session[5157]: pam_unix(sshd:session): session closed for user core Jan 13 23:50:40.815000 audit[5157]: USER_END pid=5157 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:40.819239 systemd[1]: sshd@9-10.0.15.225:22-4.153.228.146:38610.service: Deactivated successfully. Jan 13 23:50:40.815000 audit[5157]: CRED_DISP pid=5157 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:40.821107 systemd[1]: session-11.scope: Deactivated successfully. Jan 13 23:50:40.822654 kernel: audit: type=1106 audit(1768348240.815:754): pid=5157 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:40.822724 kernel: audit: type=1104 audit(1768348240.815:755): pid=5157 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:40.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.15.225:22-4.153.228.146:38610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:50:40.823372 systemd-logind[1653]: Session 11 logged out. Waiting for processes to exit. Jan 13 23:50:40.824773 systemd-logind[1653]: Removed session 11. 
Jan 13 23:50:42.258845 containerd[1676]: time="2026-01-13T23:50:42.258772230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 13 23:50:42.586863 containerd[1676]: time="2026-01-13T23:50:42.586519762Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:50:42.588443 containerd[1676]: time="2026-01-13T23:50:42.588339291Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 13 23:50:42.589998 kubelet[2897]: E0113 23:50:42.589084 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:50:42.589998 kubelet[2897]: E0113 23:50:42.589133 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:50:42.589998 kubelet[2897]: E0113 23:50:42.589273 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85ncl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-kd8rx_calico-system(1568cf69-d227-4d6c-8d10-61ba58db7902): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 13 23:50:42.590440 containerd[1676]: time="2026-01-13T23:50:42.589494697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 13 23:50:42.590589 kubelet[2897]: E0113 23:50:42.590542 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kd8rx" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" Jan 13 23:50:45.259183 containerd[1676]: time="2026-01-13T23:50:45.259004104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:50:45.592736 containerd[1676]: time="2026-01-13T23:50:45.592601345Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:50:45.594282 containerd[1676]: time="2026-01-13T23:50:45.594217873Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:50:45.594381 containerd[1676]: time="2026-01-13T23:50:45.594323193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:50:45.594544 kubelet[2897]: E0113 23:50:45.594492 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:50:45.594927 kubelet[2897]: E0113 23:50:45.594542 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:50:45.594927 kubelet[2897]: E0113 23:50:45.594657 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzspl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5ddfc947b7-qj4hz_calico-apiserver(feb12ef9-da4b-41c3-8609-097b9429383b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:50:45.595916 kubelet[2897]: E0113 23:50:45.595871 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:50:45.924621 systemd[1]: Started sshd@10-10.0.15.225:22-4.153.228.146:57092.service - OpenSSH per-connection server daemon (4.153.228.146:57092). Jan 13 23:50:45.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.15.225:22-4.153.228.146:57092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:50:45.928025 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:50:45.928187 kernel: audit: type=1130 audit(1768348245.923:757): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.15.225:22-4.153.228.146:57092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:50:46.259686 kubelet[2897]: E0113 23:50:46.259626 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f6f7744d-z7hgg" podUID="5a088dfa-d478-4f72-9896-431d1ff39b0b" Jan 13 23:50:46.466000 audit[5199]: USER_ACCT pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:46.467508 sshd[5199]: Accepted publickey for core from 4.153.228.146 port 57092 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:50:46.469853 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:50:46.468000 audit[5199]: CRED_ACQ pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:46.472627 kernel: audit: type=1101 audit(1768348246.466:758): pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:46.472684 kernel: audit: type=1103 audit(1768348246.468:759): pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:46.472703 kernel: audit: type=1006 audit(1768348246.468:760): pid=5199 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 13 23:50:46.474104 kernel: audit: type=1300 audit(1768348246.468:760): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffde2ec5a0 a2=3 a3=0 items=0 ppid=1 pid=5199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:50:46.468000 audit[5199]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffde2ec5a0 a2=3 a3=0 items=0 
ppid=1 pid=5199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:50:46.476418 systemd-logind[1653]: New session 12 of user core. Jan 13 23:50:46.468000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:50:46.478028 kernel: audit: type=1327 audit(1768348246.468:760): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:50:46.490392 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 13 23:50:46.493000 audit[5199]: USER_START pid=5199 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:46.495000 audit[5210]: CRED_ACQ pid=5210 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:46.500155 kernel: audit: type=1105 audit(1768348246.493:761): pid=5199 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:46.500310 kernel: audit: type=1103 audit(1768348246.495:762): pid=5210 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:46.847976 sshd[5210]: Connection closed by 4.153.228.146 port 57092 Jan 13 23:50:46.848439 sshd-session[5199]: pam_unix(sshd:session): session closed for user core Jan 13 23:50:46.849000 audit[5199]: USER_END pid=5199 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:46.853794 systemd-logind[1653]: Session 12 logged out. Waiting for processes to exit. Jan 13 23:50:46.853951 systemd[1]: sshd@10-10.0.15.225:22-4.153.228.146:57092.service: Deactivated successfully. 
Jan 13 23:50:46.849000 audit[5199]: CRED_DISP pid=5199 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:46.856762 kernel: audit: type=1106 audit(1768348246.849:763): pid=5199 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:46.856830 kernel: audit: type=1104 audit(1768348246.849:764): pid=5199 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:46.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.15.225:22-4.153.228.146:57092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:50:46.855857 systemd[1]: session-12.scope: Deactivated successfully. Jan 13 23:50:46.857837 systemd-logind[1653]: Removed session 12. Jan 13 23:50:51.261121 containerd[1676]: time="2026-01-13T23:50:51.261077580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 13 23:50:51.590005 containerd[1676]: time="2026-01-13T23:50:51.589653875Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:50:51.591258 containerd[1676]: time="2026-01-13T23:50:51.591211403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 13 23:50:51.591352 containerd[1676]: time="2026-01-13T23:50:51.591287203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 13 23:50:51.591477 kubelet[2897]: E0113 23:50:51.591438 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:50:51.591741 kubelet[2897]: E0113 23:50:51.591504 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:50:51.591768 kubelet[2897]: E0113 23:50:51.591712 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brggj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8bgtx_calico-system(9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 13 23:50:51.593841 containerd[1676]: time="2026-01-13T23:50:51.593817856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 13 23:50:51.925063 containerd[1676]: time="2026-01-13T23:50:51.923455517Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:50:51.926855 containerd[1676]: time="2026-01-13T23:50:51.926711093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 13 23:50:51.926855 containerd[1676]: time="2026-01-13T23:50:51.926743493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 13 23:50:51.927074 kubelet[2897]: E0113 23:50:51.927031 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:50:51.927123 kubelet[2897]: E0113 23:50:51.927083 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:50:51.927231 kubelet[2897]: E0113 23:50:51.927190 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brggj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8bgtx_calico-system(9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 13 23:50:51.928586 kubelet[2897]: E0113 23:50:51.928527 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:50:51.958455 systemd[1]: Started sshd@11-10.0.15.225:22-4.153.228.146:57102.service - OpenSSH per-connection server daemon (4.153.228.146:57102). 
Jan 13 23:50:51.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.15.225:22-4.153.228.146:57102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:50:51.959524 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:50:51.959580 kernel: audit: type=1130 audit(1768348251.957:766): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.15.225:22-4.153.228.146:57102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:50:52.488000 audit[5225]: USER_ACCT pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:52.490103 sshd[5225]: Accepted publickey for core from 4.153.228.146 port 57102 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:50:52.495780 sshd-session[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:50:52.492000 audit[5225]: CRED_ACQ pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:52.498485 kernel: audit: type=1101 audit(1768348252.488:767): pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:52.498574 kernel: audit: type=1103 audit(1768348252.492:768): pid=5225 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:52.501454 kernel: audit: type=1006 audit(1768348252.494:769): pid=5225 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 13 23:50:52.501544 kernel: audit: type=1300 audit(1768348252.494:769): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff6a58930 a2=3 a3=0 items=0 ppid=1 pid=5225 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:50:52.494000 audit[5225]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff6a58930 a2=3 a3=0 items=0 ppid=1 pid=5225 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:50:52.502763 systemd-logind[1653]: New session 13 of user core. Jan 13 23:50:52.494000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:50:52.507873 kernel: audit: type=1327 audit(1768348252.494:769): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:50:52.508198 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 13 23:50:52.510000 audit[5225]: USER_START pid=5225 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:52.511000 audit[5229]: CRED_ACQ pid=5229 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:52.516698 kernel: audit: type=1105 audit(1768348252.510:770): pid=5225 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:52.516848 kernel: audit: type=1103 audit(1768348252.511:771): pid=5229 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:52.850259 sshd[5229]: Connection closed by 4.153.228.146 port 57102 Jan 13 23:50:52.850763 sshd-session[5225]: pam_unix(sshd:session): session closed for user core Jan 13 23:50:52.854000 audit[5225]: USER_END pid=5225 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:52.857800 systemd-logind[1653]: Session 13 logged out. Waiting for processes to exit. Jan 13 23:50:52.857936 systemd[1]: sshd@11-10.0.15.225:22-4.153.228.146:57102.service: Deactivated successfully. Jan 13 23:50:52.854000 audit[5225]: CRED_DISP pid=5225 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:52.859999 systemd[1]: session-13.scope: Deactivated successfully. Jan 13 23:50:52.861341 kernel: audit: type=1106 audit(1768348252.854:772): pid=5225 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:52.861403 kernel: audit: type=1104 audit(1768348252.854:773): pid=5225 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:52.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.15.225:22-4.153.228.146:57102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:50:52.862276 systemd-logind[1653]: Removed session 13. 
Jan 13 23:50:52.962658 systemd[1]: Started sshd@12-10.0.15.225:22-4.153.228.146:57106.service - OpenSSH per-connection server daemon (4.153.228.146:57106). Jan 13 23:50:52.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.15.225:22-4.153.228.146:57106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:50:53.513000 audit[5244]: USER_ACCT pid=5244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:53.514772 sshd[5244]: Accepted publickey for core from 4.153.228.146 port 57106 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:50:53.514000 audit[5244]: CRED_ACQ pid=5244 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:53.514000 audit[5244]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2c34590 a2=3 a3=0 items=0 ppid=1 pid=5244 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:50:53.514000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:50:53.516657 sshd-session[5244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:50:53.523013 systemd-logind[1653]: New session 14 of user core. Jan 13 23:50:53.544402 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 13 23:50:53.546000 audit[5244]: USER_START pid=5244 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:53.547000 audit[5248]: CRED_ACQ pid=5248 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:53.911271 sshd[5248]: Connection closed by 4.153.228.146 port 57106 Jan 13 23:50:53.911439 sshd-session[5244]: pam_unix(sshd:session): session closed for user core Jan 13 23:50:53.912000 audit[5244]: USER_END pid=5244 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:53.912000 audit[5244]: CRED_DISP pid=5244 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:53.916642 systemd[1]: sshd@12-10.0.15.225:22-4.153.228.146:57106.service: Deactivated successfully. 
Jan 13 23:50:53.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.15.225:22-4.153.228.146:57106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:50:53.919917 systemd[1]: session-14.scope: Deactivated successfully. Jan 13 23:50:53.921452 systemd-logind[1653]: Session 14 logged out. Waiting for processes to exit. Jan 13 23:50:53.922857 systemd-logind[1653]: Removed session 14. Jan 13 23:50:54.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.15.225:22-4.153.228.146:57110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:50:54.023445 systemd[1]: Started sshd@13-10.0.15.225:22-4.153.228.146:57110.service - OpenSSH per-connection server daemon (4.153.228.146:57110). Jan 13 23:50:54.260545 containerd[1676]: time="2026-01-13T23:50:54.259755566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:50:54.260922 kubelet[2897]: E0113 23:50:54.260173 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" podUID="705f5b22-6117-46b0-94d6-546618492a26" Jan 13 23:50:54.260922 kubelet[2897]: E0113 23:50:54.260471 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kd8rx" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" Jan 13 23:50:54.556000 audit[5260]: USER_ACCT pid=5260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:54.558255 sshd[5260]: Accepted publickey for core from 4.153.228.146 port 57110 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:50:54.557000 audit[5260]: CRED_ACQ pid=5260 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:54.558000 audit[5260]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffccfcbd0 a2=3 a3=0 items=0 ppid=1 pid=5260 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:50:54.558000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:50:54.560459 sshd-session[5260]: pam_unix(sshd:session): session opened for user 
core(uid=500) by core(uid=0) Jan 13 23:50:54.567283 systemd-logind[1653]: New session 15 of user core. Jan 13 23:50:54.580195 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 13 23:50:54.583000 audit[5260]: USER_START pid=5260 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:54.587000 audit[5264]: CRED_ACQ pid=5264 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:54.601569 containerd[1676]: time="2026-01-13T23:50:54.601511966Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:50:54.602728 containerd[1676]: time="2026-01-13T23:50:54.602668612Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:50:54.602800 containerd[1676]: time="2026-01-13T23:50:54.602750813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:50:54.605211 kubelet[2897]: E0113 23:50:54.605158 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:50:54.605339 kubelet[2897]: E0113 23:50:54.605213 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:50:54.605416 kubelet[2897]: E0113 23:50:54.605360 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8svz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5ddfc947b7-h454c_calico-apiserver(4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:50:54.606606 kubelet[2897]: E0113 23:50:54.606553 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" podUID="4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f" Jan 13 23:50:54.925148 sshd[5264]: Connection closed by 4.153.228.146 port 57110 Jan 13 23:50:54.924587 sshd-session[5260]: pam_unix(sshd:session): session closed for user core Jan 13 23:50:54.925000 audit[5260]: USER_END pid=5260 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:54.926000 audit[5260]: CRED_DISP pid=5260 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:50:54.931669 systemd[1]: sshd@13-10.0.15.225:22-4.153.228.146:57110.service: Deactivated successfully. Jan 13 23:50:54.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.15.225:22-4.153.228.146:57110 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:50:54.933498 systemd[1]: session-15.scope: Deactivated successfully. Jan 13 23:50:54.934326 systemd-logind[1653]: Session 15 logged out. Waiting for processes to exit. Jan 13 23:50:54.935538 systemd-logind[1653]: Removed session 15. 
Jan 13 23:50:58.258163 kubelet[2897]: E0113 23:50:58.258121 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:51:00.036276 systemd[1]: Started sshd@14-10.0.15.225:22-4.153.228.146:49888.service - OpenSSH per-connection server daemon (4.153.228.146:49888). Jan 13 23:51:00.037171 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 13 23:51:00.037199 kernel: audit: type=1130 audit(1768348260.035:793): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.15.225:22-4.153.228.146:49888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:00.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.15.225:22-4.153.228.146:49888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:00.584000 audit[5281]: USER_ACCT pid=5281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:00.585721 sshd[5281]: Accepted publickey for core from 4.153.228.146 port 49888 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:51:00.587299 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:51:00.585000 audit[5281]: CRED_ACQ pid=5281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:00.593693 kernel: audit: type=1101 audit(1768348260.584:794): pid=5281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:00.593786 kernel: audit: type=1103 audit(1768348260.585:795): pid=5281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:00.597832 kernel: audit: type=1006 audit(1768348260.585:796): pid=5281 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 13 23:51:00.597911 kernel: audit: type=1300 audit(1768348260.585:796): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffe0bf0a0 a2=3 a3=0 items=0 ppid=1 pid=5281 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:00.585000 audit[5281]: SYSCALL arch=c00000b7 syscall=64 
success=yes exit=3 a0=8 a1=fffffe0bf0a0 a2=3 a3=0 items=0 ppid=1 pid=5281 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:00.599600 systemd-logind[1653]: New session 16 of user core. Jan 13 23:51:00.585000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:00.603588 kernel: audit: type=1327 audit(1768348260.585:796): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:00.606170 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 13 23:51:00.608000 audit[5281]: USER_START pid=5281 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:00.611000 audit[5285]: CRED_ACQ pid=5285 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:00.617391 kernel: audit: type=1105 audit(1768348260.608:797): pid=5281 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:00.617470 kernel: audit: type=1103 audit(1768348260.611:798): pid=5285 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:00.965995 sshd[5285]: Connection closed by 4.153.228.146 port 49888 Jan 13 23:51:00.967229 sshd-session[5281]: pam_unix(sshd:session): session closed for user core Jan 13 23:51:00.967000 audit[5281]: USER_END pid=5281 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:00.972141 systemd[1]: sshd@14-10.0.15.225:22-4.153.228.146:49888.service: Deactivated successfully. Jan 13 23:51:00.968000 audit[5281]: CRED_DISP pid=5281 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:00.974194 systemd[1]: session-16.scope: Deactivated successfully. 
Jan 13 23:51:00.975019 kernel: audit: type=1106 audit(1768348260.967:799): pid=5281 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:00.975102 kernel: audit: type=1104 audit(1768348260.968:800): pid=5281 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:00.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.15.225:22-4.153.228.146:49888 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:00.976153 systemd-logind[1653]: Session 16 logged out. Waiting for processes to exit. Jan 13 23:51:00.977598 systemd-logind[1653]: Removed session 16. Jan 13 23:51:01.262003 kubelet[2897]: E0113 23:51:01.261909 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f6f7744d-z7hgg" podUID="5a088dfa-d478-4f72-9896-431d1ff39b0b" Jan 13 23:51:02.258801 kubelet[2897]: E0113 23:51:02.258720 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:51:05.259811 kubelet[2897]: E0113 23:51:05.259219 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" podUID="4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f" Jan 13 23:51:06.077655 systemd[1]: Started sshd@15-10.0.15.225:22-4.153.228.146:34686.service - OpenSSH per-connection server daemon (4.153.228.146:34686). Jan 13 23:51:06.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.15.225:22-4.153.228.146:34686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:06.081266 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:51:06.081331 kernel: audit: type=1130 audit(1768348266.076:802): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.15.225:22-4.153.228.146:34686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:06.606000 audit[5300]: USER_ACCT pid=5300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:06.607833 sshd[5300]: Accepted publickey for core from 4.153.228.146 port 34686 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:51:06.611025 kernel: audit: type=1101 audit(1768348266.606:803): pid=5300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:06.610000 audit[5300]: CRED_ACQ pid=5300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:06.612503 sshd-session[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:51:06.616105 kernel: audit: type=1103 audit(1768348266.610:804): pid=5300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:06.616174 kernel: audit: type=1006 audit(1768348266.610:805): pid=5300 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 13 23:51:06.610000 audit[5300]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd4c29a50 a2=3 a3=0 items=0 ppid=1 pid=5300 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:06.619518 kernel: audit: type=1300 audit(1768348266.610:805): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd4c29a50 a2=3 a3=0 items=0 ppid=1 pid=5300 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:06.619576 kernel: audit: type=1327 audit(1768348266.610:805): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:06.610000 audit: PROCTITLE 
proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:06.620615 systemd-logind[1653]: New session 17 of user core. Jan 13 23:51:06.630236 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 13 23:51:06.632000 audit[5300]: USER_START pid=5300 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:06.636000 audit[5304]: CRED_ACQ pid=5304 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:06.640303 kernel: audit: type=1105 audit(1768348266.632:806): pid=5300 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:06.640389 kernel: audit: type=1103 audit(1768348266.636:807): pid=5304 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:06.979674 sshd[5304]: Connection closed by 4.153.228.146 port 34686 Jan 13 23:51:06.979884 sshd-session[5300]: pam_unix(sshd:session): session closed for user core Jan 13 23:51:06.980000 audit[5300]: USER_END pid=5300 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:06.984715 systemd[1]: sshd@15-10.0.15.225:22-4.153.228.146:34686.service: Deactivated successfully. Jan 13 23:51:06.980000 audit[5300]: CRED_DISP pid=5300 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:06.987816 kernel: audit: type=1106 audit(1768348266.980:808): pid=5300 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:06.987871 kernel: audit: type=1104 audit(1768348266.980:809): pid=5300 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:06.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.15.225:22-4.153.228.146:34686 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:51:06.988005 systemd[1]: session-17.scope: Deactivated successfully. Jan 13 23:51:06.989230 systemd-logind[1653]: Session 17 logged out. Waiting for processes to exit. Jan 13 23:51:06.991765 systemd-logind[1653]: Removed session 17. Jan 13 23:51:07.261238 kubelet[2897]: E0113 23:51:07.261189 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kd8rx" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" Jan 13 23:51:08.258197 kubelet[2897]: E0113 23:51:08.258147 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" podUID="705f5b22-6117-46b0-94d6-546618492a26" Jan 13 23:51:10.258451 kubelet[2897]: E0113 23:51:10.258032 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:51:12.094300 systemd[1]: Started sshd@16-10.0.15.225:22-4.153.228.146:34698.service - OpenSSH per-connection server daemon (4.153.228.146:34698). Jan 13 23:51:12.096015 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:51:12.096117 kernel: audit: type=1130 audit(1768348272.093:811): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.15.225:22-4.153.228.146:34698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:12.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.15.225:22-4.153.228.146:34698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:51:12.619000 audit[5341]: USER_ACCT pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:12.620803 sshd[5341]: Accepted publickey for core from 4.153.228.146 port 34698 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:51:12.622000 audit[5341]: CRED_ACQ pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:12.623951 sshd-session[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:51:12.626113 kernel: audit: type=1101 audit(1768348272.619:812): pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:12.626176 kernel: audit: type=1103 audit(1768348272.622:813): pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:12.627964 kernel: audit: type=1006 audit(1768348272.622:814): pid=5341 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 13 23:51:12.628162 kernel: audit: type=1300 audit(1768348272.622:814): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffff76f5c0 a2=3 a3=0 items=0 ppid=1 pid=5341 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:12.622000 audit[5341]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffff76f5c0 a2=3 a3=0 items=0 ppid=1 pid=5341 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:12.630391 systemd-logind[1653]: New session 18 of user core. Jan 13 23:51:12.630994 kernel: audit: type=1327 audit(1768348272.622:814): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:12.622000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:12.638363 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 13 23:51:12.639000 audit[5341]: USER_START pid=5341 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:12.639000 audit[5345]: CRED_ACQ pid=5345 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:12.647339 kernel: audit: type=1105 audit(1768348272.639:815): pid=5341 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:12.647434 kernel: audit: type=1103 audit(1768348272.639:816): pid=5345 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:12.979785 sshd[5345]: Connection closed by 4.153.228.146 port 34698 Jan 13 23:51:12.980703 sshd-session[5341]: pam_unix(sshd:session): session closed for user core Jan 13 23:51:12.980000 audit[5341]: USER_END pid=5341 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:12.984855 systemd[1]: sshd@16-10.0.15.225:22-4.153.228.146:34698.service: Deactivated successfully. Jan 13 23:51:12.981000 audit[5341]: CRED_DISP pid=5341 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:12.986714 systemd[1]: session-18.scope: Deactivated successfully. Jan 13 23:51:12.988190 kernel: audit: type=1106 audit(1768348272.980:817): pid=5341 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:12.988224 kernel: audit: type=1104 audit(1768348272.981:818): pid=5341 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:12.983000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.15.225:22-4.153.228.146:34698 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:12.988712 systemd-logind[1653]: Session 18 logged out. Waiting for processes to exit. Jan 13 23:51:12.991515 systemd-logind[1653]: Removed session 18. 
Jan 13 23:51:14.259266 kubelet[2897]: E0113 23:51:14.259063 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f6f7744d-z7hgg" podUID="5a088dfa-d478-4f72-9896-431d1ff39b0b" Jan 13 23:51:16.260612 kubelet[2897]: E0113 23:51:16.260464 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:51:18.098688 systemd[1]: Started sshd@17-10.0.15.225:22-4.153.228.146:59540.service - OpenSSH per-connection server daemon (4.153.228.146:59540). Jan 13 23:51:18.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.15.225:22-4.153.228.146:59540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:18.099626 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:51:18.099682 kernel: audit: type=1130 audit(1768348278.097:820): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.15.225:22-4.153.228.146:59540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:51:18.661000 audit[5358]: USER_ACCT pid=5358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:18.663210 sshd[5358]: Accepted publickey for core from 4.153.228.146 port 59540 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:51:18.666004 kernel: audit: type=1101 audit(1768348278.661:821): pid=5358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:18.666087 kernel: audit: type=1103 audit(1768348278.664:822): pid=5358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:18.664000 audit[5358]: CRED_ACQ pid=5358 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:18.667127 sshd-session[5358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:51:18.671237 kernel: audit: type=1006 audit(1768348278.665:823): pid=5358 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 13 23:51:18.672016 kernel: audit: type=1300 audit(1768348278.665:823): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd8e599b0 a2=3 a3=0 items=0 ppid=1 pid=5358 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:18.665000 audit[5358]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd8e599b0 a2=3 a3=0 items=0 ppid=1 pid=5358 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:18.665000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:18.676479 kernel: audit: type=1327 audit(1768348278.665:823): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:18.680268 systemd-logind[1653]: New session 19 of user core. Jan 13 23:51:18.688168 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 13 23:51:18.690000 audit[5358]: USER_START pid=5358 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:18.694000 audit[5362]: CRED_ACQ pid=5362 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:18.698760 kernel: audit: type=1105 audit(1768348278.690:824): pid=5358 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:18.698859 kernel: audit: type=1103 audit(1768348278.694:825): pid=5362 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:19.029839 sshd[5362]: Connection closed by 4.153.228.146 port 59540 Jan 13 23:51:19.030152 sshd-session[5358]: pam_unix(sshd:session): session closed for user core Jan 13 23:51:19.031000 audit[5358]: USER_END pid=5358 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:19.037250 systemd[1]: sshd@17-10.0.15.225:22-4.153.228.146:59540.service: Deactivated successfully. Jan 13 23:51:19.031000 audit[5358]: CRED_DISP pid=5358 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:19.039024 systemd[1]: session-19.scope: Deactivated successfully. Jan 13 23:51:19.040474 kernel: audit: type=1106 audit(1768348279.031:826): pid=5358 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:19.040536 kernel: audit: type=1104 audit(1768348279.031:827): pid=5358 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:19.040936 systemd-logind[1653]: Session 19 logged out. Waiting for processes to exit. Jan 13 23:51:19.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.15.225:22-4.153.228.146:59540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:19.043622 systemd-logind[1653]: Removed session 19. 
Jan 13 23:51:19.146676 systemd[1]: Started sshd@18-10.0.15.225:22-4.153.228.146:59552.service - OpenSSH per-connection server daemon (4.153.228.146:59552). Jan 13 23:51:19.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.15.225:22-4.153.228.146:59552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:19.258680 kubelet[2897]: E0113 23:51:19.258625 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" podUID="705f5b22-6117-46b0-94d6-546618492a26" Jan 13 23:51:19.694000 audit[5375]: USER_ACCT pid=5375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:19.695417 sshd[5375]: Accepted publickey for core from 4.153.228.146 port 59552 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:51:19.695000 audit[5375]: CRED_ACQ pid=5375 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:19.695000 audit[5375]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc15ee420 a2=3 a3=0 items=0 ppid=1 pid=5375 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:19.695000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:19.697243 sshd-session[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:51:19.702358 systemd-logind[1653]: New session 20 of user core. Jan 13 23:51:19.714212 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 13 23:51:19.716000 audit[5375]: USER_START pid=5375 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:19.717000 audit[5379]: CRED_ACQ pid=5379 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:20.116337 sshd[5379]: Connection closed by 4.153.228.146 port 59552 Jan 13 23:51:20.116837 sshd-session[5375]: pam_unix(sshd:session): session closed for user core Jan 13 23:51:20.117000 audit[5375]: USER_END pid=5375 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:20.117000 audit[5375]: CRED_DISP pid=5375 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:20.121746 systemd[1]: sshd@18-10.0.15.225:22-4.153.228.146:59552.service: Deactivated successfully. Jan 13 23:51:20.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.15.225:22-4.153.228.146:59552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:20.123636 systemd[1]: session-20.scope: Deactivated successfully. Jan 13 23:51:20.125406 systemd-logind[1653]: Session 20 logged out. Waiting for processes to exit. Jan 13 23:51:20.126161 systemd-logind[1653]: Removed session 20. Jan 13 23:51:20.228014 systemd[1]: Started sshd@19-10.0.15.225:22-4.153.228.146:59554.service - OpenSSH per-connection server daemon (4.153.228.146:59554). Jan 13 23:51:20.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.15.225:22-4.153.228.146:59554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:51:20.259322 kubelet[2897]: E0113 23:51:20.259239 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" podUID="4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f" Jan 13 23:51:20.768000 audit[5391]: USER_ACCT pid=5391 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:20.769496 sshd[5391]: Accepted publickey for core from 4.153.228.146 port 59554 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:51:20.769000 audit[5391]: CRED_ACQ pid=5391 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:20.769000 audit[5391]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffec1f0120 a2=3 a3=0 items=0 ppid=1 pid=5391 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:20.769000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:20.771281 sshd-session[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:51:20.775649 systemd-logind[1653]: New session 21 of user core. Jan 13 23:51:20.786189 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 13 23:51:20.788000 audit[5391]: USER_START pid=5391 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:20.789000 audit[5395]: CRED_ACQ pid=5395 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:21.261095 kubelet[2897]: E0113 23:51:21.260526 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:51:21.602000 audit[5407]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:51:21.602000 audit[5407]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffe992bb40 a2=0 a3=1 items=0 ppid=3065 pid=5407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:21.602000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:51:21.607000 audit[5407]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:51:21.607000 audit[5407]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe992bb40 a2=0 a3=1 items=0 ppid=3065 pid=5407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:21.607000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:51:21.625000 audit[5409]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:51:21.625000 audit[5409]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffff8ad65c0 a2=0 a3=1 items=0 ppid=3065 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:21.625000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:51:21.635000 audit[5409]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5409 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:51:21.635000 audit[5409]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 
a1=fffff8ad65c0 a2=0 a3=1 items=0 ppid=3065 pid=5409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:21.635000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:51:21.710426 sshd[5395]: Connection closed by 4.153.228.146 port 59554 Jan 13 23:51:21.710493 sshd-session[5391]: pam_unix(sshd:session): session closed for user core Jan 13 23:51:21.712000 audit[5391]: USER_END pid=5391 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:21.712000 audit[5391]: CRED_DISP pid=5391 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:21.716337 systemd-logind[1653]: Session 21 logged out. Waiting for processes to exit. Jan 13 23:51:21.716528 systemd[1]: sshd@19-10.0.15.225:22-4.153.228.146:59554.service: Deactivated successfully. Jan 13 23:51:21.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.15.225:22-4.153.228.146:59554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:21.720531 systemd[1]: session-21.scope: Deactivated successfully. Jan 13 23:51:21.722399 systemd-logind[1653]: Removed session 21. Jan 13 23:51:21.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.15.225:22-4.153.228.146:59556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:21.821251 systemd[1]: Started sshd@20-10.0.15.225:22-4.153.228.146:59556.service - OpenSSH per-connection server daemon (4.153.228.146:59556). 
Jan 13 23:51:22.259443 kubelet[2897]: E0113 23:51:22.259375 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kd8rx" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" Jan 13 23:51:22.355000 audit[5414]: USER_ACCT pid=5414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:22.356553 sshd[5414]: Accepted publickey for core from 4.153.228.146 port 59556 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:51:22.356000 audit[5414]: CRED_ACQ pid=5414 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:22.356000 audit[5414]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe5a92ca0 a2=3 a3=0 items=0 ppid=1 pid=5414 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:22.356000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:22.358198 sshd-session[5414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:51:22.363110 systemd-logind[1653]: New session 22 of user core. Jan 13 23:51:22.372192 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 13 23:51:22.373000 audit[5414]: USER_START pid=5414 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:22.374000 audit[5418]: CRED_ACQ pid=5418 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:22.833967 sshd[5418]: Connection closed by 4.153.228.146 port 59556 Jan 13 23:51:22.834510 sshd-session[5414]: pam_unix(sshd:session): session closed for user core Jan 13 23:51:22.835000 audit[5414]: USER_END pid=5414 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:22.835000 audit[5414]: CRED_DISP pid=5414 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:22.839987 systemd[1]: sshd@20-10.0.15.225:22-4.153.228.146:59556.service: Deactivated successfully. Jan 13 23:51:22.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.15.225:22-4.153.228.146:59556 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:22.841894 systemd[1]: session-22.scope: Deactivated successfully. Jan 13 23:51:22.842798 systemd-logind[1653]: Session 22 logged out. Waiting for processes to exit. Jan 13 23:51:22.844854 systemd-logind[1653]: Removed session 22. Jan 13 23:51:22.945844 systemd[1]: Started sshd@21-10.0.15.225:22-4.153.228.146:59572.service - OpenSSH per-connection server daemon (4.153.228.146:59572). Jan 13 23:51:22.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.15.225:22-4.153.228.146:59572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:51:23.496000 audit[5429]: USER_ACCT pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:23.497679 sshd[5429]: Accepted publickey for core from 4.153.228.146 port 59572 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:51:23.500388 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 13 23:51:23.500466 kernel: audit: type=1101 audit(1768348283.496:861): pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:23.499000 audit[5429]: CRED_ACQ pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:23.501444 sshd-session[5429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:51:23.503207 kernel: audit: type=1103 audit(1768348283.499:862): pid=5429 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:23.503381 kernel: audit: type=1006 audit(1768348283.499:863): pid=5429 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 13 23:51:23.499000 audit[5429]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd4967c60 a2=3 a3=0 items=0 ppid=1 pid=5429 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:23.507781 kernel: audit: type=1300 audit(1768348283.499:863): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd4967c60 a2=3 a3=0 items=0 ppid=1 pid=5429 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:23.508041 kernel: audit: type=1327 audit(1768348283.499:863): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:23.499000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:23.509803 systemd-logind[1653]: New session 23 of user core. Jan 13 23:51:23.518207 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 13 23:51:23.521000 audit[5429]: USER_START pid=5429 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:23.525000 audit[5433]: CRED_ACQ pid=5433 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:23.528543 kernel: audit: type=1105 audit(1768348283.521:864): pid=5429 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:23.528605 kernel: audit: type=1103 audit(1768348283.525:865): pid=5433 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:23.864534 sshd[5433]: Connection closed by 4.153.228.146 port 59572 Jan 13 23:51:23.864756 sshd-session[5429]: pam_unix(sshd:session): session closed for user core Jan 13 23:51:23.864000 audit[5429]: USER_END pid=5429 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:23.869798 systemd[1]: sshd@21-10.0.15.225:22-4.153.228.146:59572.service: Deactivated successfully. Jan 13 23:51:23.865000 audit[5429]: CRED_DISP pid=5429 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:23.872552 kernel: audit: type=1106 audit(1768348283.864:866): pid=5429 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:23.872635 kernel: audit: type=1104 audit(1768348283.865:867): pid=5429 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:23.872660 kernel: audit: type=1131 audit(1768348283.869:868): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.15.225:22-4.153.228.146:59572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:51:23.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.15.225:22-4.153.228.146:59572 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:23.871600 systemd[1]: session-23.scope: Deactivated successfully. Jan 13 23:51:23.873113 systemd-logind[1653]: Session 23 logged out. Waiting for processes to exit. Jan 13 23:51:23.874412 systemd-logind[1653]: Removed session 23. Jan 13 23:51:25.261167 kubelet[2897]: E0113 23:51:25.261111 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f6f7744d-z7hgg" podUID="5a088dfa-d478-4f72-9896-431d1ff39b0b" Jan 13 23:51:25.434000 audit[5447]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5447 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:51:25.434000 audit[5447]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe73d1cd0 a2=0 a3=1 items=0 ppid=3065 pid=5447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:25.434000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:51:25.445000 audit[5447]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5447 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 13 23:51:25.445000 audit[5447]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe73d1cd0 a2=0 a3=1 items=0 ppid=3065 pid=5447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:25.445000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 13 23:51:28.976204 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 13 23:51:28.976311 kernel: audit: type=1130 audit(1768348288.973:871): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.15.225:22-4.153.228.146:53768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:28.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.15.225:22-4.153.228.146:53768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:51:28.974807 systemd[1]: Started sshd@22-10.0.15.225:22-4.153.228.146:53768.service - OpenSSH per-connection server daemon (4.153.228.146:53768). Jan 13 23:51:29.261034 kubelet[2897]: E0113 23:51:29.260106 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:51:29.519000 audit[5451]: USER_ACCT pid=5451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:29.521179 sshd[5451]: Accepted publickey for core from 4.153.228.146 port 53768 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:51:29.523983 kernel: audit: type=1101 audit(1768348289.519:872): pid=5451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:29.523000 audit[5451]: CRED_ACQ pid=5451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:29.525763 sshd-session[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:51:29.528446 kernel: audit: type=1103 audit(1768348289.523:873): pid=5451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:29.528501 kernel: audit: type=1006 audit(1768348289.524:874): pid=5451 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 13 23:51:29.524000 audit[5451]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff93738b0 a2=3 a3=0 items=0 ppid=1 pid=5451 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:29.530709 systemd-logind[1653]: New session 24 of user core. 
Jan 13 23:51:29.531720 kernel: audit: type=1300 audit(1768348289.524:874): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff93738b0 a2=3 a3=0 items=0 ppid=1 pid=5451 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:29.531745 kernel: audit: type=1327 audit(1768348289.524:874): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:29.524000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:29.539165 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 13 23:51:29.540000 audit[5451]: USER_START pid=5451 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:29.543000 audit[5455]: CRED_ACQ pid=5455 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:29.547392 kernel: audit: type=1105 audit(1768348289.540:875): pid=5451 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:29.547459 kernel: audit: type=1103 audit(1768348289.543:876): pid=5455 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:29.901918 sshd[5455]: Connection closed by 4.153.228.146 port 53768 Jan 13 23:51:29.902590 sshd-session[5451]: pam_unix(sshd:session): session closed for user core Jan 13 23:51:29.902000 audit[5451]: USER_END pid=5451 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:29.903000 audit[5451]: CRED_DISP pid=5451 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:29.908453 systemd[1]: sshd@22-10.0.15.225:22-4.153.228.146:53768.service: Deactivated successfully. 
Jan 13 23:51:29.910337 kernel: audit: type=1106 audit(1768348289.902:877): pid=5451 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:29.910397 kernel: audit: type=1104 audit(1768348289.903:878): pid=5451 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:29.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.15.225:22-4.153.228.146:53768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:29.910546 systemd[1]: session-24.scope: Deactivated successfully. Jan 13 23:51:29.911397 systemd-logind[1653]: Session 24 logged out. Waiting for processes to exit. Jan 13 23:51:29.912703 systemd-logind[1653]: Removed session 24. Jan 13 23:51:31.258503 kubelet[2897]: E0113 23:51:31.258454 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" podUID="705f5b22-6117-46b0-94d6-546618492a26" Jan 13 23:51:35.012917 systemd[1]: Started sshd@23-10.0.15.225:22-4.153.228.146:45122.service - OpenSSH per-connection server daemon (4.153.228.146:45122). Jan 13 23:51:35.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.15.225:22-4.153.228.146:45122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:35.016187 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:51:35.016257 kernel: audit: type=1130 audit(1768348295.012:880): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.15.225:22-4.153.228.146:45122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 13 23:51:35.258724 kubelet[2897]: E0113 23:51:35.258640 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" podUID="4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f" Jan 13 23:51:35.542000 audit[5471]: USER_ACCT pid=5471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:35.544182 sshd[5471]: Accepted publickey for core from 4.153.228.146 port 45122 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:51:35.547008 kernel: audit: type=1101 audit(1768348295.542:881): pid=5471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:35.547085 kernel: audit: type=1103 audit(1768348295.546:882): pid=5471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:35.546000 audit[5471]: CRED_ACQ pid=5471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:35.547918 sshd-session[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:51:35.550989 kernel: audit: type=1006 audit(1768348295.546:883): pid=5471 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 13 23:51:35.551061 kernel: audit: type=1300 audit(1768348295.546:883): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff4ec4b60 a2=3 a3=0 items=0 ppid=1 pid=5471 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:35.546000 audit[5471]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff4ec4b60 a2=3 a3=0 items=0 ppid=1 pid=5471 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:35.546000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:35.555026 kernel: audit: type=1327 audit(1768348295.546:883): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:35.554497 systemd-logind[1653]: New session 25 of user core. Jan 13 23:51:35.564244 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 13 23:51:35.565000 audit[5471]: USER_START pid=5471 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:35.567000 audit[5475]: CRED_ACQ pid=5475 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:35.572683 kernel: audit: type=1105 audit(1768348295.565:884): pid=5471 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:35.572746 kernel: audit: type=1103 audit(1768348295.567:885): pid=5475 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:35.933419 sshd[5475]: Connection closed by 4.153.228.146 port 45122 Jan 13 23:51:35.934131 sshd-session[5471]: pam_unix(sshd:session): session closed for user core Jan 13 23:51:35.936000 audit[5471]: USER_END pid=5471 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:35.940233 systemd[1]: sshd@23-10.0.15.225:22-4.153.228.146:45122.service: Deactivated successfully. Jan 13 23:51:35.936000 audit[5471]: CRED_DISP pid=5471 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:35.942084 systemd[1]: session-25.scope: Deactivated successfully. Jan 13 23:51:35.943650 kernel: audit: type=1106 audit(1768348295.936:886): pid=5471 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:35.943718 kernel: audit: type=1104 audit(1768348295.936:887): pid=5471 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:35.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.15.225:22-4.153.228.146:45122 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:35.943382 systemd-logind[1653]: Session 25 logged out. Waiting for processes to exit. Jan 13 23:51:35.945609 systemd-logind[1653]: Removed session 25. 
Jan 13 23:51:36.257943 kubelet[2897]: E0113 23:51:36.257843 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:51:37.261021 kubelet[2897]: E0113 23:51:37.259427 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kd8rx" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" Jan 13 23:51:40.259614 kubelet[2897]: E0113 23:51:40.259455 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:51:40.260086 kubelet[2897]: E0113 23:51:40.260023 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f6f7744d-z7hgg" podUID="5a088dfa-d478-4f72-9896-431d1ff39b0b" Jan 13 23:51:41.045072 systemd[1]: Started sshd@24-10.0.15.225:22-4.153.228.146:45130.service - OpenSSH per-connection server daemon (4.153.228.146:45130). Jan 13 23:51:41.046082 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:51:41.046118 kernel: audit: type=1130 audit(1768348301.044:889): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.15.225:22-4.153.228.146:45130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 13 23:51:41.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.15.225:22-4.153.228.146:45130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:41.572000 audit[5515]: USER_ACCT pid=5515 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:41.573190 sshd[5515]: Accepted publickey for core from 4.153.228.146 port 45130 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:51:41.574000 audit[5515]: CRED_ACQ pid=5515 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:41.575856 sshd-session[5515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:51:41.578555 kernel: audit: type=1101 audit(1768348301.572:890): pid=5515 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:41.578611 kernel: audit: type=1103 audit(1768348301.574:891): pid=5515 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:41.578637 kernel: audit: type=1006 audit(1768348301.574:892): pid=5515 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jan 13 23:51:41.574000 audit[5515]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffecec5f80 a2=3 a3=0 items=0 ppid=1 pid=5515 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:41.581047 systemd-logind[1653]: New session 26 of user core. Jan 13 23:51:41.583051 kernel: audit: type=1300 audit(1768348301.574:892): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffecec5f80 a2=3 a3=0 items=0 ppid=1 pid=5515 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:41.583122 kernel: audit: type=1327 audit(1768348301.574:892): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:41.574000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:41.594166 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 13 23:51:41.596000 audit[5515]: USER_START pid=5515 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:41.599000 audit[5519]: CRED_ACQ pid=5519 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:41.603430 kernel: audit: type=1105 audit(1768348301.596:893): pid=5515 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:41.603490 kernel: audit: type=1103 audit(1768348301.599:894): pid=5519 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:41.927211 sshd[5519]: Connection closed by 4.153.228.146 port 45130 Jan 13 23:51:41.925911 sshd-session[5515]: pam_unix(sshd:session): session closed for user core Jan 13 23:51:41.927000 audit[5515]: USER_END pid=5515 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:41.931893 systemd[1]: sshd@24-10.0.15.225:22-4.153.228.146:45130.service: Deactivated successfully. Jan 13 23:51:41.933620 systemd[1]: session-26.scope: Deactivated successfully. Jan 13 23:51:41.927000 audit[5515]: CRED_DISP pid=5515 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:41.936370 kernel: audit: type=1106 audit(1768348301.927:895): pid=5515 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:41.936426 kernel: audit: type=1104 audit(1768348301.927:896): pid=5515 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:41.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.15.225:22-4.153.228.146:45130 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:41.936765 systemd-logind[1653]: Session 26 logged out. Waiting for processes to exit. Jan 13 23:51:41.938478 systemd-logind[1653]: Removed session 26. 
Jan 13 23:51:45.259205 kubelet[2897]: E0113 23:51:45.259151 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" podUID="705f5b22-6117-46b0-94d6-546618492a26" Jan 13 23:51:47.037704 systemd[1]: Started sshd@25-10.0.15.225:22-4.153.228.146:52908.service - OpenSSH per-connection server daemon (4.153.228.146:52908). Jan 13 23:51:47.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.15.225:22-4.153.228.146:52908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:47.038977 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:51:47.039057 kernel: audit: type=1130 audit(1768348307.036:898): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.15.225:22-4.153.228.146:52908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:47.578000 audit[5533]: USER_ACCT pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:47.579535 sshd[5533]: Accepted publickey for core from 4.153.228.146 port 52908 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:51:47.581000 audit[5533]: CRED_ACQ pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:47.583019 kernel: audit: type=1101 audit(1768348307.578:899): pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:47.583718 sshd-session[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:51:47.587484 kernel: audit: type=1103 audit(1768348307.581:900): pid=5533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:47.587576 kernel: audit: type=1006 audit(1768348307.581:901): pid=5533 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jan 13 23:51:47.581000 audit[5533]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3368500 a2=3 a3=0 items=0 ppid=1 pid=5533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:47.590628 kernel: audit: type=1300 audit(1768348307.581:901): 
arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3368500 a2=3 a3=0 items=0 ppid=1 pid=5533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:47.588518 systemd-logind[1653]: New session 27 of user core. Jan 13 23:51:47.581000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:47.592006 kernel: audit: type=1327 audit(1768348307.581:901): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:47.594140 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 13 23:51:47.596000 audit[5533]: USER_START pid=5533 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:47.600000 audit[5537]: CRED_ACQ pid=5537 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:47.603905 kernel: audit: type=1105 audit(1768348307.596:902): pid=5533 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:47.603972 kernel: audit: type=1103 audit(1768348307.600:903): pid=5537 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:47.933900 sshd[5537]: Connection closed by 4.153.228.146 port 52908 Jan 13 23:51:47.934419 sshd-session[5533]: pam_unix(sshd:session): session closed for user core Jan 13 23:51:47.934000 audit[5533]: USER_END pid=5533 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:47.939189 systemd[1]: sshd@25-10.0.15.225:22-4.153.228.146:52908.service: Deactivated successfully. 
Jan 13 23:51:47.935000 audit[5533]: CRED_DISP pid=5533 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:47.941840 kernel: audit: type=1106 audit(1768348307.934:904): pid=5533 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:47.942206 kernel: audit: type=1104 audit(1768348307.935:905): pid=5533 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:47.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.15.225:22-4.153.228.146:52908 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:47.941878 systemd[1]: session-27.scope: Deactivated successfully. Jan 13 23:51:47.942927 systemd-logind[1653]: Session 27 logged out. Waiting for processes to exit. Jan 13 23:51:47.944472 systemd-logind[1653]: Removed session 27. Jan 13 23:51:49.258878 kubelet[2897]: E0113 23:51:49.258419 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" podUID="4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f" Jan 13 23:51:51.258124 kubelet[2897]: E0113 23:51:51.258072 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:51:51.259005 kubelet[2897]: E0113 23:51:51.258974 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kd8rx" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" Jan 13 23:51:51.259968 kubelet[2897]: E0113 23:51:51.259919 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:51:53.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.15.225:22-4.153.228.146:52912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:53.045996 systemd[1]: Started sshd@26-10.0.15.225:22-4.153.228.146:52912.service - OpenSSH per-connection server daemon (4.153.228.146:52912). Jan 13 23:51:53.049452 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:51:53.049524 kernel: audit: type=1130 audit(1768348313.045:907): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.15.225:22-4.153.228.146:52912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:53.260043 containerd[1676]: time="2026-01-13T23:51:53.259645743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 13 23:51:53.575000 audit[5558]: USER_ACCT pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:53.576707 sshd[5558]: Accepted publickey for core from 4.153.228.146 port 52912 ssh2: RSA SHA256:haOep86fHRthch+6JNrepA6CHLmaxUOp0Pc5n2BWN0I Jan 13 23:51:53.580999 kernel: audit: type=1101 audit(1768348313.575:908): pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:53.581067 kernel: audit: type=1103 audit(1768348313.579:909): pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:53.579000 audit[5558]: CRED_ACQ pid=5558 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:53.581740 sshd-session[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 23:51:53.585836 kernel: audit: type=1006 audit(1768348313.580:910): pid=5558 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jan 13 23:51:53.580000 audit[5558]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1cfbed0 a2=3 a3=0 items=0 ppid=1 pid=5558 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:53.587673 systemd-logind[1653]: New session 28 of user core. Jan 13 23:51:53.589819 kernel: audit: type=1300 audit(1768348313.580:910): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff1cfbed0 a2=3 a3=0 items=0 ppid=1 pid=5558 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:51:53.589879 kernel: audit: type=1327 audit(1768348313.580:910): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:53.580000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 13 23:51:53.596160 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 13 23:51:53.599000 audit[5558]: USER_START pid=5558 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:53.601000 audit[5562]: CRED_ACQ pid=5562 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:53.604853 containerd[1676]: time="2026-01-13T23:51:53.604815000Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:51:53.606354 containerd[1676]: time="2026-01-13T23:51:53.606258247Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 13 23:51:53.606539 containerd[1676]: time="2026-01-13T23:51:53.606307407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 13 23:51:53.606776 kernel: audit: type=1105 audit(1768348313.599:911): pid=5558 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:53.606827 kernel: audit: type=1103 audit(1768348313.601:912): pid=5562 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:53.606858 kubelet[2897]: E0113 23:51:53.606664 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:51:53.606858 kubelet[2897]: E0113 23:51:53.606716 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 13 23:51:53.607440 kubelet[2897]: E0113 23:51:53.607360 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cb5fe9369c744a3f8b622ec6bf599501,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6vt5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9f6f7744d-z7hgg_calico-system(5a088dfa-d478-4f72-9896-431d1ff39b0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 13 23:51:53.609378 containerd[1676]: time="2026-01-13T23:51:53.609171821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 13 23:51:53.958775 containerd[1676]: time="2026-01-13T23:51:53.958632940Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:51:53.960125 containerd[1676]: time="2026-01-13T23:51:53.960067987Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 13 23:51:53.960229 containerd[1676]: time="2026-01-13T23:51:53.960104467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 13 23:51:53.960401 kubelet[2897]: E0113 23:51:53.960363 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:51:53.960497 kubelet[2897]: E0113 23:51:53.960481 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 13 23:51:53.960738 kubelet[2897]: E0113 23:51:53.960687 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6vt5w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9f6f7744d-z7hgg_calico-system(5a088dfa-d478-4f72-9896-431d1ff39b0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 13 23:51:53.961993 kubelet[2897]: E0113 23:51:53.961940 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f6f7744d-z7hgg" podUID="5a088dfa-d478-4f72-9896-431d1ff39b0b" Jan 13 23:51:53.981671 sshd[5562]: Connection closed by 4.153.228.146 port 52912 Jan 13 23:51:53.981998 sshd-session[5558]: pam_unix(sshd:session): session closed for user core Jan 13 23:51:53.982000 audit[5558]: USER_END pid=5558 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:53.982000 audit[5558]: CRED_DISP pid=5558 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:53.989946 kernel: audit: type=1106 audit(1768348313.982:913): pid=5558 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:53.990047 kernel: audit: type=1104 audit(1768348313.982:914): pid=5558 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=4.153.228.146 addr=4.153.228.146 terminal=ssh res=success' Jan 13 23:51:53.990369 systemd[1]: sshd@26-10.0.15.225:22-4.153.228.146:52912.service: Deactivated successfully. Jan 13 23:51:53.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.15.225:22-4.153.228.146:52912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 13 23:51:53.992106 systemd[1]: session-28.scope: Deactivated successfully. Jan 13 23:51:53.993018 systemd-logind[1653]: Session 28 logged out. Waiting for processes to exit. Jan 13 23:51:53.994466 systemd-logind[1653]: Removed session 28. 
Jan 13 23:51:57.259429 kubelet[2897]: E0113 23:51:57.259298 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" podUID="705f5b22-6117-46b0-94d6-546618492a26" Jan 13 23:52:03.259683 kubelet[2897]: E0113 23:52:03.259593 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:52:04.258793 kubelet[2897]: E0113 23:52:04.258730 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" podUID="4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f" Jan 13 23:52:04.259275 kubelet[2897]: E0113 23:52:04.259240 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:52:05.260186 containerd[1676]: time="2026-01-13T23:52:05.259707354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 13 23:52:05.585160 containerd[1676]: time="2026-01-13T23:52:05.584692112Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:52:05.586296 containerd[1676]: time="2026-01-13T23:52:05.586236879Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 13 23:52:05.586357 containerd[1676]: time="2026-01-13T23:52:05.586321040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: 
active requests=0, bytes read=0" Jan 13 23:52:05.586503 kubelet[2897]: E0113 23:52:05.586461 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:52:05.586890 kubelet[2897]: E0113 23:52:05.586510 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 13 23:52:05.586890 kubelet[2897]: E0113 23:52:05.586645 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-85ncl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
goldmane-666569f655-kd8rx_calico-system(1568cf69-d227-4d6c-8d10-61ba58db7902): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 13 23:52:05.588120 kubelet[2897]: E0113 23:52:05.588086 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kd8rx" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" Jan 13 23:52:08.260412 kubelet[2897]: E0113 23:52:08.260255 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f6f7744d-z7hgg" podUID="5a088dfa-d478-4f72-9896-431d1ff39b0b" Jan 13 23:52:10.259201 containerd[1676]: time="2026-01-13T23:52:10.259152420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 13 23:52:10.592211 containerd[1676]: time="2026-01-13T23:52:10.591930736Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:52:10.593705 containerd[1676]: time="2026-01-13T23:52:10.593609385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 13 23:52:10.593786 containerd[1676]: time="2026-01-13T23:52:10.593684025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 13 23:52:10.593885 kubelet[2897]: E0113 23:52:10.593846 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:52:10.594181 kubelet[2897]: E0113 23:52:10.593899 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 13 23:52:10.594181 kubelet[2897]: E0113 23:52:10.594026 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5nz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-65d475c445-nfbnr_calico-system(705f5b22-6117-46b0-94d6-546618492a26): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 13 23:52:10.595218 kubelet[2897]: E0113 23:52:10.595191 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" podUID="705f5b22-6117-46b0-94d6-546618492a26" Jan 13 23:52:15.261268 containerd[1676]: time="2026-01-13T23:52:15.261186178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:52:15.586815 containerd[1676]: 
time="2026-01-13T23:52:15.586581258Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:52:15.588039 containerd[1676]: time="2026-01-13T23:52:15.587931345Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:52:15.588039 containerd[1676]: time="2026-01-13T23:52:15.587988505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:52:15.588198 kubelet[2897]: E0113 23:52:15.588140 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:52:15.588198 kubelet[2897]: E0113 23:52:15.588189 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:52:15.588513 kubelet[2897]: E0113 23:52:15.588307 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h8svz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-5ddfc947b7-h454c_calico-apiserver(4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:52:15.589503 kubelet[2897]: E0113 23:52:15.589469 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" podUID="4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f" Jan 13 23:52:17.259595 containerd[1676]: time="2026-01-13T23:52:17.259556845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 13 23:52:17.602536 containerd[1676]: time="2026-01-13T23:52:17.602270850Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:52:17.603601 containerd[1676]: time="2026-01-13T23:52:17.603568417Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 13 23:52:17.603681 containerd[1676]: time="2026-01-13T23:52:17.603648977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 13 23:52:17.604102 kubelet[2897]: E0113 23:52:17.603841 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:52:17.604102 kubelet[2897]: E0113 23:52:17.603893 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 13 23:52:17.604102 kubelet[2897]: E0113 23:52:17.604048 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzspl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5ddfc947b7-qj4hz_calico-apiserver(feb12ef9-da4b-41c3-8609-097b9429383b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 13 23:52:17.605250 kubelet[2897]: E0113 23:52:17.605217 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:52:18.258621 kubelet[2897]: E0113 23:52:18.258574 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kd8rx" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" Jan 13 23:52:18.258926 containerd[1676]: time="2026-01-13T23:52:18.258683798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 13 23:52:18.593257 containerd[1676]: time="2026-01-13T23:52:18.593105003Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:52:18.595064 
containerd[1676]: time="2026-01-13T23:52:18.595021332Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 13 23:52:18.595460 containerd[1676]: time="2026-01-13T23:52:18.595112933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 13 23:52:18.595513 kubelet[2897]: E0113 23:52:18.595217 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:52:18.595513 kubelet[2897]: E0113 23:52:18.595269 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 13 23:52:18.595513 kubelet[2897]: E0113 23:52:18.595369 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brggj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8bgtx_calico-system(9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 13 23:52:18.597705 containerd[1676]: time="2026-01-13T23:52:18.597682985Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 13 23:52:18.906093 containerd[1676]: time="2026-01-13T23:52:18.905949581Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 13 23:52:18.907406 containerd[1676]: time="2026-01-13T23:52:18.907337668Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 13 23:52:18.907547 containerd[1676]: time="2026-01-13T23:52:18.907425668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 13 23:52:18.907673 kubelet[2897]: E0113 23:52:18.907620 2897 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:52:18.907936 kubelet[2897]: E0113 23:52:18.907672 2897 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 13 23:52:18.907936 kubelet[2897]: E0113 23:52:18.907788 2897 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brggj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-8bgtx_calico-system(9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 13 23:52:18.909032 kubelet[2897]: E0113 23:52:18.908979 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8bgtx" podUID="9767bcb1-e61e-4a0b-9b29-2c0eaa4146c5" Jan 13 23:52:19.261471 kubelet[2897]: E0113 23:52:19.261418 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f6f7744d-z7hgg" podUID="5a088dfa-d478-4f72-9896-431d1ff39b0b" Jan 13 23:52:20.487937 systemd[1]: cri-containerd-faf600fc1d0b1978a2b00ba3e88f9536c7c44d1ac8056cc22b07190dcef4cbac.scope: Deactivated successfully. Jan 13 23:52:20.488601 systemd[1]: cri-containerd-faf600fc1d0b1978a2b00ba3e88f9536c7c44d1ac8056cc22b07190dcef4cbac.scope: Consumed 38.313s CPU time, 101.6M memory peak. Jan 13 23:52:20.490556 containerd[1676]: time="2026-01-13T23:52:20.490485893Z" level=info msg="received container exit event container_id:\"faf600fc1d0b1978a2b00ba3e88f9536c7c44d1ac8056cc22b07190dcef4cbac\" id:\"faf600fc1d0b1978a2b00ba3e88f9536c7c44d1ac8056cc22b07190dcef4cbac\" pid:3234 exit_status:1 exited_at:{seconds:1768348340 nanos:490079931}" Jan 13 23:52:20.492000 audit: BPF prog-id=146 op=UNLOAD Jan 13 23:52:20.495563 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 13 23:52:20.495616 kernel: audit: type=1334 audit(1768348340.492:916): prog-id=146 op=UNLOAD Jan 13 23:52:20.492000 audit: BPF prog-id=150 op=UNLOAD Jan 13 23:52:20.497340 kernel: audit: type=1334 audit(1768348340.492:917): prog-id=150 op=UNLOAD Jan 13 23:52:20.513449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-faf600fc1d0b1978a2b00ba3e88f9536c7c44d1ac8056cc22b07190dcef4cbac-rootfs.mount: Deactivated successfully. Jan 13 23:52:20.687273 kubelet[2897]: E0113 23:52:20.687125 2897 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.15.225:38130->10.0.15.198:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-kube-controllers-65d475c445-nfbnr.188a6f4702bb4576 calico-system 1309 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:calico-kube-controllers-65d475c445-nfbnr,UID:705f5b22-6117-46b0-94d6-546618492a26,APIVersion:v1,ResourceVersion:788,FieldPath:spec.containers{calico-kube-controllers},},Reason:Pulling,Message:Pulling image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4578-0-0-p-89582bef9b,},FirstTimestamp:2026-01-13 23:49:14 +0000 UTC,LastTimestamp:2026-01-13 23:52:10.258600017 +0000 UTC m=+223.086942229,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4578-0-0-p-89582bef9b,}" Jan 13 23:52:20.813860 kubelet[2897]: I0113 23:52:20.813749 2897 scope.go:117] "RemoveContainer" containerID="faf600fc1d0b1978a2b00ba3e88f9536c7c44d1ac8056cc22b07190dcef4cbac" Jan 13 23:52:20.815422 containerd[1676]: time="2026-01-13T23:52:20.815375011Z" level=info msg="CreateContainer within sandbox \"ac861483433fed9089498c1fa1cba19b090a11e48dd0d180be89fcd61beb8be4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 13 23:52:20.824152 containerd[1676]: time="2026-01-13T23:52:20.824114654Z" level=info msg="Container d0e2f82f41ac491c339e32d5c877a436c0977ed9953dc3f399c2c75f6f057f82: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:52:20.831402 containerd[1676]: time="2026-01-13T23:52:20.831348489Z" level=info msg="CreateContainer within sandbox \"ac861483433fed9089498c1fa1cba19b090a11e48dd0d180be89fcd61beb8be4\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id 
\"d0e2f82f41ac491c339e32d5c877a436c0977ed9953dc3f399c2c75f6f057f82\"" Jan 13 23:52:20.832019 containerd[1676]: time="2026-01-13T23:52:20.831847572Z" level=info msg="StartContainer for \"d0e2f82f41ac491c339e32d5c877a436c0977ed9953dc3f399c2c75f6f057f82\"" Jan 13 23:52:20.832810 containerd[1676]: time="2026-01-13T23:52:20.832781777Z" level=info msg="connecting to shim d0e2f82f41ac491c339e32d5c877a436c0977ed9953dc3f399c2c75f6f057f82" address="unix:///run/containerd/s/9c6468b24e89e1d26cb51786e3f4eea1a7897a900d52ae0123ad134344d813dc" protocol=ttrpc version=3 Jan 13 23:52:20.850324 systemd[1]: Started cri-containerd-d0e2f82f41ac491c339e32d5c877a436c0977ed9953dc3f399c2c75f6f057f82.scope - libcontainer container d0e2f82f41ac491c339e32d5c877a436c0977ed9953dc3f399c2c75f6f057f82. Jan 13 23:52:20.860000 audit: BPF prog-id=256 op=LOAD Jan 13 23:52:20.862976 kernel: audit: type=1334 audit(1768348340.860:918): prog-id=256 op=LOAD Jan 13 23:52:20.863054 kernel: audit: type=1334 audit(1768348340.860:919): prog-id=257 op=LOAD Jan 13 23:52:20.860000 audit: BPF prog-id=257 op=LOAD Jan 13 23:52:20.860000 audit[5630]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2970 pid=5630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:20.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653266383266343161633439316333333965333264356338373761 Jan 13 23:52:20.869670 kernel: audit: type=1300 audit(1768348340.860:919): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2970 pid=5630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:20.869722 kernel: audit: type=1327 audit(1768348340.860:919): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653266383266343161633439316333333965333264356338373761 Jan 13 23:52:20.869744 kernel: audit: type=1334 audit(1768348340.861:920): prog-id=257 op=UNLOAD Jan 13 23:52:20.861000 audit: BPF prog-id=257 op=UNLOAD Jan 13 23:52:20.861000 audit[5630]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2970 pid=5630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:20.873191 kernel: audit: type=1300 audit(1768348340.861:920): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2970 pid=5630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:20.873264 kernel: audit: type=1327 audit(1768348340.861:920): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653266383266343161633439316333333965333264356338373761 
Jan 13 23:52:20.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653266383266343161633439316333333965333264356338373761 Jan 13 23:52:20.861000 audit: BPF prog-id=258 op=LOAD Jan 13 23:52:20.861000 audit[5630]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2970 pid=5630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:20.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653266383266343161633439316333333965333264356338373761 Jan 13 23:52:20.861000 audit: BPF prog-id=259 op=LOAD Jan 13 23:52:20.877011 kernel: audit: type=1334 audit(1768348340.861:921): prog-id=258 op=LOAD Jan 13 23:52:20.861000 audit[5630]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2970 pid=5630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:20.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653266383266343161633439316333333965333264356338373761 Jan 13 23:52:20.861000 audit: BPF prog-id=259 op=UNLOAD Jan 13 23:52:20.861000 audit[5630]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2970 pid=5630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:20.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653266383266343161633439316333333965333264356338373761 Jan 13 23:52:20.861000 audit: BPF prog-id=258 op=UNLOAD Jan 13 23:52:20.861000 audit[5630]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2970 pid=5630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:20.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653266383266343161633439316333333965333264356338373761 Jan 13 23:52:20.861000 audit: BPF prog-id=260 op=LOAD Jan 13 23:52:20.861000 audit[5630]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2970 pid=5630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:20.861000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430653266383266343161633439316333333965333264356338373761 Jan 13 23:52:20.891814 containerd[1676]: time="2026-01-13T23:52:20.891750946Z" level=info msg="StartContainer for \"d0e2f82f41ac491c339e32d5c877a436c0977ed9953dc3f399c2c75f6f057f82\" returns successfully" Jan 13 23:52:20.897774 systemd[1]: cri-containerd-b41fa7d72a9e5a5adecf445fd6d12112b1b200c83a0f22da789758f626a0a51d.scope: Deactivated successfully. Jan 13 23:52:20.898655 systemd[1]: cri-containerd-b41fa7d72a9e5a5adecf445fd6d12112b1b200c83a0f22da789758f626a0a51d.scope: Consumed 4.370s CPU time, 56.8M memory peak. Jan 13 23:52:20.898000 audit: BPF prog-id=261 op=LOAD Jan 13 23:52:20.898000 audit: BPF prog-id=88 op=UNLOAD Jan 13 23:52:20.902219 containerd[1676]: time="2026-01-13T23:52:20.902170878Z" level=info msg="received container exit event container_id:\"b41fa7d72a9e5a5adecf445fd6d12112b1b200c83a0f22da789758f626a0a51d\" id:\"b41fa7d72a9e5a5adecf445fd6d12112b1b200c83a0f22da789758f626a0a51d\" pid:2746 exit_status:1 exited_at:{seconds:1768348340 nanos:900518830}" Jan 13 23:52:20.904000 audit: BPF prog-id=103 op=UNLOAD Jan 13 23:52:20.904000 audit: BPF prog-id=107 op=UNLOAD Jan 13 23:52:20.931450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b41fa7d72a9e5a5adecf445fd6d12112b1b200c83a0f22da789758f626a0a51d-rootfs.mount: Deactivated successfully. Jan 13 23:52:20.943252 kubelet[2897]: E0113 23:52:20.943200 2897 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.15.225:38342->10.0.15.198:2379: read: connection timed out" Jan 13 23:52:20.947409 systemd[1]: cri-containerd-81f6dd8edd27edae377c66c1eb0dc6d90ceb1465da3abba96d95c4d42331eb3f.scope: Deactivated successfully. Jan 13 23:52:20.947738 systemd[1]: cri-containerd-81f6dd8edd27edae377c66c1eb0dc6d90ceb1465da3abba96d95c4d42331eb3f.scope: Consumed 3.811s CPU time, 22.4M memory peak. Jan 13 23:52:20.946000 audit: BPF prog-id=262 op=LOAD Jan 13 23:52:20.946000 audit: BPF prog-id=90 op=UNLOAD Jan 13 23:52:20.948883 containerd[1676]: time="2026-01-13T23:52:20.948849507Z" level=info msg="received container exit event container_id:\"81f6dd8edd27edae377c66c1eb0dc6d90ceb1465da3abba96d95c4d42331eb3f\" id:\"81f6dd8edd27edae377c66c1eb0dc6d90ceb1465da3abba96d95c4d42331eb3f\" pid:2760 exit_status:1 exited_at:{seconds:1768348340 nanos:948376625}" Jan 13 23:52:20.950000 audit: BPF prog-id=108 op=UNLOAD Jan 13 23:52:20.950000 audit: BPF prog-id=112 op=UNLOAD Jan 13 23:52:21.513770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81f6dd8edd27edae377c66c1eb0dc6d90ceb1465da3abba96d95c4d42331eb3f-rootfs.mount: Deactivated successfully. 
Jan 13 23:52:21.817104 kubelet[2897]: I0113 23:52:21.816994 2897 scope.go:117] "RemoveContainer" containerID="81f6dd8edd27edae377c66c1eb0dc6d90ceb1465da3abba96d95c4d42331eb3f" Jan 13 23:52:21.819194 kubelet[2897]: I0113 23:52:21.819139 2897 scope.go:117] "RemoveContainer" containerID="b41fa7d72a9e5a5adecf445fd6d12112b1b200c83a0f22da789758f626a0a51d" Jan 13 23:52:21.819374 containerd[1676]: time="2026-01-13T23:52:21.819176307Z" level=info msg="CreateContainer within sandbox \"588361ee845bde67d8c702f874872ef58f753317e02d8ad26ce66f934e706656\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 13 23:52:21.821278 containerd[1676]: time="2026-01-13T23:52:21.821235477Z" level=info msg="CreateContainer within sandbox \"ee56e12b21dc84ab6a13076ba9891b2e3ec3ab29cb5e42fcff58aa9b67d8580f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 13 23:52:21.830900 containerd[1676]: time="2026-01-13T23:52:21.830861925Z" level=info msg="Container e0f0f0787b190d1cdb27b0e7ba881dcf1ac12ce4408d23617135de27f330849e: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:52:21.840864 containerd[1676]: time="2026-01-13T23:52:21.840818374Z" level=info msg="Container 02e3622ee5d7692e4836469322febcb71aae3f515d2a8ce74617ce6cc7e2080f: CDI devices from CRI Config.CDIDevices: []" Jan 13 23:52:21.845901 containerd[1676]: time="2026-01-13T23:52:21.845850318Z" level=info msg="CreateContainer within sandbox \"588361ee845bde67d8c702f874872ef58f753317e02d8ad26ce66f934e706656\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"e0f0f0787b190d1cdb27b0e7ba881dcf1ac12ce4408d23617135de27f330849e\"" Jan 13 23:52:21.846578 containerd[1676]: time="2026-01-13T23:52:21.846553842Z" level=info msg="StartContainer for \"e0f0f0787b190d1cdb27b0e7ba881dcf1ac12ce4408d23617135de27f330849e\"" Jan 13 23:52:21.847846 containerd[1676]: time="2026-01-13T23:52:21.847779568Z" level=info msg="connecting to shim e0f0f0787b190d1cdb27b0e7ba881dcf1ac12ce4408d23617135de27f330849e" address="unix:///run/containerd/s/c0fd7e154dabf4b4a53f97611200e5f1ca3eeb82f6570b984ca8e8dc409e32a9" protocol=ttrpc version=3 Jan 13 23:52:21.849701 containerd[1676]: time="2026-01-13T23:52:21.849600737Z" level=info msg="CreateContainer within sandbox \"ee56e12b21dc84ab6a13076ba9891b2e3ec3ab29cb5e42fcff58aa9b67d8580f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"02e3622ee5d7692e4836469322febcb71aae3f515d2a8ce74617ce6cc7e2080f\"" Jan 13 23:52:21.850151 containerd[1676]: time="2026-01-13T23:52:21.850124099Z" level=info msg="StartContainer for \"02e3622ee5d7692e4836469322febcb71aae3f515d2a8ce74617ce6cc7e2080f\"" Jan 13 23:52:21.851215 containerd[1676]: time="2026-01-13T23:52:21.851176425Z" level=info msg="connecting to shim 02e3622ee5d7692e4836469322febcb71aae3f515d2a8ce74617ce6cc7e2080f" address="unix:///run/containerd/s/2ddec7f63de07423cb81d90512e77f77c5ef399975cb698e78704cf09571c11a" protocol=ttrpc version=3 Jan 13 23:52:21.867185 systemd[1]: Started cri-containerd-e0f0f0787b190d1cdb27b0e7ba881dcf1ac12ce4408d23617135de27f330849e.scope - libcontainer container e0f0f0787b190d1cdb27b0e7ba881dcf1ac12ce4408d23617135de27f330849e. Jan 13 23:52:21.871178 systemd[1]: Started cri-containerd-02e3622ee5d7692e4836469322febcb71aae3f515d2a8ce74617ce6cc7e2080f.scope - libcontainer container 02e3622ee5d7692e4836469322febcb71aae3f515d2a8ce74617ce6cc7e2080f. 
Jan 13 23:52:21.879000 audit: BPF prog-id=263 op=LOAD Jan 13 23:52:21.880000 audit: BPF prog-id=264 op=LOAD Jan 13 23:52:21.880000 audit[5686]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=2620 pid=5686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:21.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663066303738376231393064316364623237623065376261383831 Jan 13 23:52:21.880000 audit: BPF prog-id=264 op=UNLOAD Jan 13 23:52:21.880000 audit[5686]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2620 pid=5686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:21.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663066303738376231393064316364623237623065376261383831 Jan 13 23:52:21.880000 audit: BPF prog-id=265 op=LOAD Jan 13 23:52:21.880000 audit[5686]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=2620 pid=5686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:21.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663066303738376231393064316364623237623065376261383831 Jan 13 23:52:21.880000 audit: BPF prog-id=266 op=LOAD Jan 13 23:52:21.880000 audit[5686]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=2620 pid=5686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:21.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663066303738376231393064316364623237623065376261383831 Jan 13 23:52:21.880000 audit: BPF prog-id=266 op=UNLOAD Jan 13 23:52:21.880000 audit[5686]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2620 pid=5686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:21.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663066303738376231393064316364623237623065376261383831 Jan 13 23:52:21.880000 audit: BPF prog-id=265 op=UNLOAD Jan 13 23:52:21.880000 audit[5686]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2620 pid=5686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:21.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663066303738376231393064316364623237623065376261383831 Jan 13 23:52:21.880000 audit: BPF prog-id=267 op=LOAD Jan 13 23:52:21.880000 audit[5686]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=2620 pid=5686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:21.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6530663066303738376231393064316364623237623065376261383831 Jan 13 23:52:21.882000 audit: BPF prog-id=268 op=LOAD Jan 13 23:52:21.883000 audit: BPF prog-id=269 op=LOAD Jan 13 23:52:21.883000 audit[5693]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2622 pid=5693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:21.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653336323265653564373639326534383336343639333232666562 Jan 13 23:52:21.883000 audit: BPF prog-id=269 op=UNLOAD Jan 13 23:52:21.883000 audit[5693]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2622 pid=5693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:21.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653336323265653564373639326534383336343639333232666562 Jan 13 23:52:21.883000 audit: BPF prog-id=270 op=LOAD Jan 13 23:52:21.883000 audit[5693]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2622 pid=5693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:21.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653336323265653564373639326534383336343639333232666562 Jan 13 23:52:21.883000 audit: BPF prog-id=271 op=LOAD Jan 13 23:52:21.883000 audit[5693]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 
ppid=2622 pid=5693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:21.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653336323265653564373639326534383336343639333232666562 Jan 13 23:52:21.883000 audit: BPF prog-id=271 op=UNLOAD Jan 13 23:52:21.883000 audit[5693]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2622 pid=5693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:21.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653336323265653564373639326534383336343639333232666562 Jan 13 23:52:21.883000 audit: BPF prog-id=270 op=UNLOAD Jan 13 23:52:21.883000 audit[5693]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2622 pid=5693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:21.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653336323265653564373639326534383336343639333232666562 Jan 13 23:52:21.883000 audit: BPF prog-id=272 op=LOAD Jan 13 23:52:21.883000 audit[5693]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2622 pid=5693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 13 23:52:21.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032653336323265653564373639326534383336343639333232666562 Jan 13 23:52:21.918068 containerd[1676]: time="2026-01-13T23:52:21.917988713Z" level=info msg="StartContainer for \"02e3622ee5d7692e4836469322febcb71aae3f515d2a8ce74617ce6cc7e2080f\" returns successfully" Jan 13 23:52:21.918338 containerd[1676]: time="2026-01-13T23:52:21.918306835Z" level=info msg="StartContainer for \"e0f0f0787b190d1cdb27b0e7ba881dcf1ac12ce4408d23617135de27f330849e\" returns successfully" Jan 13 23:52:22.798891 kubelet[2897]: I0113 23:52:22.798833 2897 status_manager.go:890] "Failed to get status for pod" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" pod="calico-system/goldmane-666569f655-kd8rx" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.15.225:38240->10.0.15.198:2379: read: connection timed out" Jan 13 23:52:25.258790 kubelet[2897]: E0113 23:52:25.258661 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-65d475c445-nfbnr" podUID="705f5b22-6117-46b0-94d6-546618492a26" Jan 13 23:52:29.258630 kubelet[2897]: E0113 23:52:29.258584 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-kd8rx" podUID="1568cf69-d227-4d6c-8d10-61ba58db7902" Jan 13 23:52:30.123034 kernel: pcieport 0000:00:01.0: pciehp: Slot(0): Button press: will power off in 5 sec Jan 13 23:52:30.258357 kubelet[2897]: E0113 23:52:30.258274 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-h454c" podUID="4f1e13c3-2a68-4c84-a7e2-50f4d7a91f6f" Jan 13 23:52:30.258849 kubelet[2897]: E0113 23:52:30.258722 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9f6f7744d-z7hgg" podUID="5a088dfa-d478-4f72-9896-431d1ff39b0b" Jan 13 23:52:30.258849 kubelet[2897]: E0113 23:52:30.258790 2897 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5ddfc947b7-qj4hz" podUID="feb12ef9-da4b-41c3-8609-097b9429383b" Jan 13 23:52:30.943696 kubelet[2897]: E0113 23:52:30.943443 2897 controller.go:195] "Failed to update lease" err="Put \"https://10.0.15.225:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4578-0-0-p-89582bef9b?timeout=10s\": context deadline exceeded"