Jan 14 23:42:41.493210 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 14 23:42:41.493237 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Wed Jan 14 22:02:18 -00 2026 Jan 14 23:42:41.493247 kernel: KASLR enabled Jan 14 23:42:41.493254 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Jan 14 23:42:41.493260 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Jan 14 23:42:41.493266 kernel: random: crng init done Jan 14 23:42:41.493274 kernel: secureboot: Secure boot disabled Jan 14 23:42:41.493280 kernel: ACPI: Early table checksum verification disabled Jan 14 23:42:41.493287 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Jan 14 23:42:41.493295 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Jan 14 23:42:41.493302 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 23:42:41.493308 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 23:42:41.493316 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 23:42:41.493322 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 23:42:41.493331 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 23:42:41.493338 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 23:42:41.493346 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 23:42:41.493353 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 23:42:41.493360 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 23:42:41.493366 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Jan 14 23:42:41.493373 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jan 14 23:42:41.493380 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 14 23:42:41.493387 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Jan 14 23:42:41.493898 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff] Jan 14 23:42:41.493911 kernel: Zone ranges: Jan 14 23:42:41.493918 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 14 23:42:41.493926 kernel: DMA32 empty Jan 14 23:42:41.493933 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Jan 14 23:42:41.493940 kernel: Device empty Jan 14 23:42:41.493947 kernel: Movable zone start for each node Jan 14 23:42:41.493955 kernel: Early memory node ranges Jan 14 23:42:41.493962 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Jan 14 23:42:41.493969 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Jan 14 23:42:41.493976 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Jan 14 23:42:41.493983 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Jan 14 23:42:41.493996 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Jan 14 23:42:41.494003 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Jan 14 23:42:41.494010 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Jan 14 23:42:41.494017 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Jan 14 23:42:41.494024 kernel: node 0: [mem 
0x0000000139fe0000-0x0000000139ffffff] Jan 14 23:42:41.494034 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Jan 14 23:42:41.494043 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jan 14 23:42:41.494051 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1 Jan 14 23:42:41.494058 kernel: psci: probing for conduit method from ACPI. Jan 14 23:42:41.494066 kernel: psci: PSCIv1.1 detected in firmware. Jan 14 23:42:41.494073 kernel: psci: Using standard PSCI v0.2 function IDs Jan 14 23:42:41.494080 kernel: psci: Trusted OS migration not required Jan 14 23:42:41.494087 kernel: psci: SMC Calling Convention v1.1 Jan 14 23:42:41.494095 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 14 23:42:41.494104 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 14 23:42:41.494111 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 14 23:42:41.494119 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 14 23:42:41.494127 kernel: Detected PIPT I-cache on CPU0 Jan 14 23:42:41.494134 kernel: CPU features: detected: GIC system register CPU interface Jan 14 23:42:41.494141 kernel: CPU features: detected: Spectre-v4 Jan 14 23:42:41.494149 kernel: CPU features: detected: Spectre-BHB Jan 14 23:42:41.494156 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 14 23:42:41.494164 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 14 23:42:41.494171 kernel: CPU features: detected: ARM erratum 1418040 Jan 14 23:42:41.494178 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 14 23:42:41.494188 kernel: alternatives: applying boot alternatives Jan 14 23:42:41.494197 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e4a6d042213df6c386c00b2ef561482ef59cf24ca6770345ce520c577e366e5a Jan 14 23:42:41.494205 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 23:42:41.494212 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 14 23:42:41.494220 kernel: Fallback order for Node 0: 0 Jan 14 23:42:41.494228 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000 Jan 14 23:42:41.494235 kernel: Policy zone: Normal Jan 14 23:42:41.494242 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 23:42:41.494250 kernel: software IO TLB: area num 2. Jan 14 23:42:41.494257 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB) Jan 14 23:42:41.494266 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 23:42:41.494274 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 23:42:41.494282 kernel: rcu: RCU event tracing is enabled. Jan 14 23:42:41.494289 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 23:42:41.494297 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 23:42:41.494304 kernel: Tracing variant of Tasks RCU enabled. Jan 14 23:42:41.494312 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 23:42:41.494319 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 23:42:41.494327 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 14 23:42:41.494334 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 23:42:41.494342 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 14 23:42:41.494351 kernel: GICv3: 256 SPIs implemented Jan 14 23:42:41.494358 kernel: GICv3: 0 Extended SPIs implemented Jan 14 23:42:41.494365 kernel: Root IRQ handler: gic_handle_irq Jan 14 23:42:41.494373 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 14 23:42:41.494380 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jan 14 23:42:41.494387 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 14 23:42:41.494752 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 14 23:42:41.494763 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1) Jan 14 23:42:41.494771 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1) Jan 14 23:42:41.494779 kernel: GICv3: using LPI property table @0x0000000100120000 Jan 14 23:42:41.494787 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000 Jan 14 23:42:41.494799 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 14 23:42:41.494807 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 14 23:42:41.494814 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 14 23:42:41.494822 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 14 23:42:41.494830 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 14 23:42:41.494837 kernel: Console: colour dummy device 80x25 Jan 14 23:42:41.494846 kernel: ACPI: Core revision 20240827 Jan 14 23:42:41.494854 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 14 23:42:41.494900 kernel: pid_max: default: 32768 minimum: 301 Jan 14 23:42:41.494912 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 14 23:42:41.494920 kernel: landlock: Up and running. Jan 14 23:42:41.494928 kernel: SELinux: Initializing. Jan 14 23:42:41.494936 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 14 23:42:41.494944 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 14 23:42:41.494952 kernel: rcu: Hierarchical SRCU implementation. Jan 14 23:42:41.494960 kernel: rcu: Max phase no-delay instances is 400. Jan 14 23:42:41.494969 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 14 23:42:41.494984 kernel: Remapping and enabling EFI services. Jan 14 23:42:41.494992 kernel: smp: Bringing up secondary CPUs ... Jan 14 23:42:41.494999 kernel: Detected PIPT I-cache on CPU1 Jan 14 23:42:41.495007 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 14 23:42:41.495016 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000 Jan 14 23:42:41.495024 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 14 23:42:41.495031 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 14 23:42:41.495042 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 23:42:41.495050 kernel: SMP: Total of 2 processors activated. 
Jan 14 23:42:41.495063 kernel: CPU: All CPU(s) started at EL1 Jan 14 23:42:41.495073 kernel: CPU features: detected: 32-bit EL0 Support Jan 14 23:42:41.495081 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 14 23:42:41.495089 kernel: CPU features: detected: Common not Private translations Jan 14 23:42:41.495098 kernel: CPU features: detected: CRC32 instructions Jan 14 23:42:41.495106 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 14 23:42:41.495116 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 14 23:42:41.495124 kernel: CPU features: detected: LSE atomic instructions Jan 14 23:42:41.495133 kernel: CPU features: detected: Privileged Access Never Jan 14 23:42:41.495141 kernel: CPU features: detected: RAS Extension Support Jan 14 23:42:41.495149 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 14 23:42:41.495157 kernel: alternatives: applying system-wide alternatives Jan 14 23:42:41.495167 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jan 14 23:42:41.495176 kernel: Memory: 3885988K/4096000K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12416K init, 1038K bss, 188532K reserved, 16384K cma-reserved) Jan 14 23:42:41.495185 kernel: devtmpfs: initialized Jan 14 23:42:41.495193 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 23:42:41.495202 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 23:42:41.496258 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 14 23:42:41.496279 kernel: 0 pages in range for non-PLT usage Jan 14 23:42:41.496295 kernel: 515184 pages in range for PLT usage Jan 14 23:42:41.496303 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 23:42:41.496312 kernel: SMBIOS 3.0.0 present. Jan 14 23:42:41.496320 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Jan 14 23:42:41.496328 kernel: DMI: Memory slots populated: 1/1 Jan 14 23:42:41.496337 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 23:42:41.496346 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 14 23:42:41.496356 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 14 23:42:41.496365 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 14 23:42:41.496373 kernel: audit: initializing netlink subsys (disabled) Jan 14 23:42:41.496381 kernel: audit: type=2000 audit(0.011:1): state=initialized audit_enabled=0 res=1 Jan 14 23:42:41.496390 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 23:42:41.496421 kernel: cpuidle: using governor menu Jan 14 23:42:41.496432 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 14 23:42:41.496445 kernel: ASID allocator initialised with 32768 entries Jan 14 23:42:41.496454 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 23:42:41.496463 kernel: Serial: AMBA PL011 UART driver Jan 14 23:42:41.496471 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 23:42:41.496479 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 23:42:41.496487 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 14 23:42:41.496496 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 14 23:42:41.496506 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 23:42:41.496514 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 23:42:41.496522 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 14 23:42:41.496533 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 14 23:42:41.496541 kernel: ACPI: Added _OSI(Module Device) Jan 14 23:42:41.496551 kernel: ACPI: Added _OSI(Processor Device) Jan 14 23:42:41.496561 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 23:42:41.496570 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 23:42:41.496580 kernel: ACPI: Interpreter enabled Jan 14 23:42:41.496589 kernel: ACPI: Using GIC for interrupt routing Jan 14 23:42:41.496598 kernel: ACPI: MCFG table detected, 1 entries Jan 14 23:42:41.496606 kernel: ACPI: CPU0 has been hot-added Jan 14 23:42:41.496614 kernel: ACPI: CPU1 has been hot-added Jan 14 23:42:41.496622 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 14 23:42:41.496631 kernel: printk: legacy console [ttyAMA0] enabled Jan 14 23:42:41.496641 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 14 23:42:41.497648 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 14 23:42:41.497758 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 14 23:42:41.497846 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 14 23:42:41.497976 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 14 23:42:41.498069 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 14 23:42:41.498087 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 14 23:42:41.498097 kernel: PCI host bridge to bus 0000:00 Jan 14 23:42:41.498205 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 14 23:42:41.498286 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 14 23:42:41.498366 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 14 23:42:41.498699 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 14 23:42:41.498831 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Jan 14 23:42:41.498962 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint Jan 14 23:42:41.499065 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff] Jan 14 23:42:41.499156 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref] Jan 14 23:42:41.499259 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 23:42:41.499354 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff] Jan 14 23:42:41.499494 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 14 
23:42:41.499593 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Jan 14 23:42:41.499686 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref] Jan 14 23:42:41.499786 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 23:42:41.499930 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff] Jan 14 23:42:41.500041 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 14 23:42:41.500132 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff] Jan 14 23:42:41.500230 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 23:42:41.500320 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff] Jan 14 23:42:41.500468 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 14 23:42:41.500579 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff] Jan 14 23:42:41.500668 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref] Jan 14 23:42:41.500767 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 23:42:41.500856 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff] Jan 14 23:42:41.500965 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 14 23:42:41.501054 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff] Jan 14 23:42:41.501148 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref] Jan 14 23:42:41.501247 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 23:42:41.501337 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff] Jan 14 23:42:41.501454 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 14 23:42:41.503238 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 14 23:42:41.503360 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref] Jan 14 23:42:41.503513 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 23:42:41.503608 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff] Jan 14 23:42:41.503696 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 14 23:42:41.503783 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff] Jan 14 23:42:41.503883 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref] Jan 14 23:42:41.503986 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 23:42:41.504079 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff] Jan 14 23:42:41.504166 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 14 23:42:41.504253 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff] Jan 14 23:42:41.504340 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref] Jan 14 23:42:41.505474 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 23:42:41.505614 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff] Jan 14 23:42:41.506675 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 14 23:42:41.506792 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff] Jan 14 23:42:41.506917 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 23:42:41.507013 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff] Jan 14 23:42:41.507103 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 14 23:42:41.507204 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff] Jan 14 23:42:41.507309 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 
0x070002 conventional PCI endpoint Jan 14 23:42:41.507428 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007] Jan 14 23:42:41.507550 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 14 23:42:41.507645 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff] Jan 14 23:42:41.507736 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Jan 14 23:42:41.507833 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 14 23:42:41.507954 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Jan 14 23:42:41.508055 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit] Jan 14 23:42:41.509083 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint Jan 14 23:42:41.509193 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff] Jan 14 23:42:41.511607 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref] Jan 14 23:42:41.511746 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 14 23:42:41.511840 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref] Jan 14 23:42:41.511962 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 14 23:42:41.512056 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff] Jan 14 23:42:41.512146 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref] Jan 14 23:42:41.512257 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint Jan 14 23:42:41.512351 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff] Jan 14 23:42:41.512479 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref] Jan 14 23:42:41.512584 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 14 23:42:41.512677 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff] Jan 14 23:42:41.512772 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref] Jan 14 23:42:41.512987 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 14 23:42:41.513100 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jan 14 23:42:41.513192 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jan 14 23:42:41.513281 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jan 14 23:42:41.513378 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jan 14 23:42:41.513517 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jan 14 23:42:41.513613 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jan 14 23:42:41.513718 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 14 23:42:41.513809 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jan 14 23:42:41.513915 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jan 14 23:42:41.514014 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 14 23:42:41.514107 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jan 14 23:42:41.514196 kernel: pci 0000:00:02.3: bridge window [mem 
0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jan 14 23:42:41.514288 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 14 23:42:41.514377 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jan 14 23:42:41.515377 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Jan 14 23:42:41.515538 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 14 23:42:41.515632 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jan 14 23:42:41.515721 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jan 14 23:42:41.515814 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 14 23:42:41.515924 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Jan 14 23:42:41.516016 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Jan 14 23:42:41.516113 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 14 23:42:41.516209 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jan 14 23:42:41.516301 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jan 14 23:42:41.516453 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 14 23:42:41.516554 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jan 14 23:42:41.516650 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Jan 14 23:42:41.516743 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned Jan 14 23:42:41.517020 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned Jan 14 23:42:41.517127 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned Jan 14 23:42:41.517222 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Jan 14 23:42:41.517339 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned Jan 14 23:42:41.517456 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Jan 14 23:42:41.517590 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned Jan 14 23:42:41.517683 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Jan 14 23:42:41.517773 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned Jan 14 23:42:41.517871 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Jan 14 23:42:41.517971 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Jan 14 23:42:41.518073 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Jan 14 23:42:41.518166 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Jan 14 23:42:41.518255 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Jan 14 
23:42:41.518346 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Jan 14 23:42:41.520186 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Jan 14 23:42:41.520301 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned Jan 14 23:42:41.520419 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Jan 14 23:42:41.520584 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned Jan 14 23:42:41.520682 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned Jan 14 23:42:41.520776 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned Jan 14 23:42:41.520886 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned Jan 14 23:42:41.520988 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned Jan 14 23:42:41.521085 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Jan 14 23:42:41.521178 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned Jan 14 23:42:41.521267 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Jan 14 23:42:41.521369 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned Jan 14 23:42:41.521479 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Jan 14 23:42:41.521573 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned Jan 14 23:42:41.521663 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Jan 14 23:42:41.521754 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned Jan 14 23:42:41.521842 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Jan 14 23:42:41.521985 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned Jan 14 23:42:41.522080 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Jan 14 23:42:41.522171 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned Jan 14 23:42:41.522264 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Jan 14 23:42:41.522355 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned Jan 14 23:42:41.522483 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned Jan 14 23:42:41.522586 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned Jan 14 23:42:41.522684 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Jan 14 23:42:41.522775 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jan 14 23:42:41.522884 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Jan 14 23:42:41.523062 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 14 23:42:41.523985 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 14 23:42:41.524089 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Jan 14 23:42:41.524180 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 14 23:42:41.524282 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Jan 14 23:42:41.524384 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 14 23:42:41.524505 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 14 23:42:41.524598 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Jan 14 23:42:41.524686 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jan 14 23:42:41.524781 kernel: pci 0000:03:00.0: BAR 4 [mem 
0x8000400000-0x8000403fff 64bit pref]: assigned Jan 14 23:42:41.524885 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Jan 14 23:42:41.524985 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 14 23:42:41.525074 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 14 23:42:41.525163 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Jan 14 23:42:41.525272 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 14 23:42:41.525376 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Jan 14 23:42:41.525493 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 14 23:42:41.525623 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 14 23:42:41.525713 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Jan 14 23:42:41.525801 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 14 23:42:41.525947 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned Jan 14 23:42:41.526057 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Jan 14 23:42:41.526157 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 14 23:42:41.526247 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 14 23:42:41.526335 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 14 23:42:41.526454 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 14 23:42:41.526560 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Jan 14 23:42:41.526654 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Jan 14 23:42:41.526750 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 14 23:42:41.526852 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 14 23:42:41.526962 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 14 23:42:41.527051 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 14 23:42:41.527150 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned Jan 14 23:42:41.527242 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned Jan 14 23:42:41.527335 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned Jan 14 23:42:41.527457 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 14 23:42:41.527555 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 14 23:42:41.527647 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 14 23:42:41.527734 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 14 23:42:41.527827 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 14 23:42:41.527931 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 14 23:42:41.528022 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 14 23:42:41.528111 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 14 23:42:41.528207 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 14 23:42:41.528295 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Jan 14 23:42:41.528430 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 14 23:42:41.528525 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 14 23:42:41.528616 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 14 23:42:41.528697 kernel: pci_bus 0000:00: 
resource 5 [io 0x0000-0xffff window] Jan 14 23:42:41.528784 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 14 23:42:41.528923 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 14 23:42:41.529017 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 14 23:42:41.529100 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 14 23:42:41.529192 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 14 23:42:41.529280 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 14 23:42:41.529361 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 14 23:42:41.529805 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 14 23:42:41.529930 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 14 23:42:41.530017 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 14 23:42:41.530111 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 14 23:42:41.530684 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 14 23:42:41.530789 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 14 23:42:41.530939 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 14 23:42:41.531034 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 14 23:42:41.531130 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 14 23:42:41.531226 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 14 23:42:41.531311 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 14 23:42:41.531409 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 14 23:42:41.531512 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 14 23:42:41.531596 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 14 23:42:41.531678 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 14 23:42:41.531771 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 14 23:42:41.531854 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 14 23:42:41.531953 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 14 23:42:41.532104 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 14 23:42:41.532190 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 14 23:42:41.532277 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 14 23:42:41.532289 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 14 23:42:41.532298 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 14 23:42:41.532307 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 14 23:42:41.532316 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 14 23:42:41.532325 kernel: iommu: Default domain type: Translated Jan 14 23:42:41.532334 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 14 23:42:41.532345 kernel: efivars: Registered efivars operations Jan 14 23:42:41.532353 kernel: vgaarb: loaded Jan 14 23:42:41.532362 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 14 23:42:41.532371 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 23:42:41.532380 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 23:42:41.532389 kernel: pnp: PnP ACPI init Jan 14 23:42:41.532516 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 14 
23:42:41.532532 kernel: pnp: PnP ACPI: found 1 devices Jan 14 23:42:41.532541 kernel: NET: Registered PF_INET protocol family Jan 14 23:42:41.532550 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 14 23:42:41.532559 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 14 23:42:41.532568 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 23:42:41.532577 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 14 23:42:41.532586 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 14 23:42:41.532596 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 14 23:42:41.532605 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 14 23:42:41.532614 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 14 23:42:41.532622 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 23:42:41.532772 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 14 23:42:41.532788 kernel: PCI: CLS 0 bytes, default 64 Jan 14 23:42:41.532797 kernel: kvm [1]: HYP mode not available Jan 14 23:42:41.532809 kernel: Initialise system trusted keyrings Jan 14 23:42:41.532818 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 14 23:42:41.532827 kernel: Key type asymmetric registered Jan 14 23:42:41.532835 kernel: Asymmetric key parser 'x509' registered Jan 14 23:42:41.532844 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 14 23:42:41.532853 kernel: io scheduler mq-deadline registered Jan 14 23:42:41.532872 kernel: io scheduler kyber registered Jan 14 23:42:41.532884 kernel: io scheduler bfq registered Jan 14 23:42:41.532894 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 14 23:42:41.535507 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 14 23:42:41.535721 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 14 23:42:41.535820 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:42:41.535933 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 14 23:42:41.536027 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jan 14 23:42:41.536126 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:42:41.536222 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 14 23:42:41.536311 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 14 23:42:41.536418 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:42:41.536519 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 14 23:42:41.536616 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 14 23:42:41.536708 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:42:41.536803 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 14 23:42:41.536909 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 14 23:42:41.537002 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:42:41.537095 kernel: pcieport 
0000:00:02.5: PME: Signaling with IRQ 55 Jan 14 23:42:41.537185 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 14 23:42:41.537293 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:42:41.537432 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 14 23:42:41.537534 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 14 23:42:41.537623 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:42:41.537718 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 14 23:42:41.537808 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 14 23:42:41.537948 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:42:41.537969 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 14 23:42:41.538072 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 14 23:42:41.538169 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 14 23:42:41.538259 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:42:41.538277 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 14 23:42:41.538287 kernel: ACPI: button: Power Button [PWRB] Jan 14 23:42:41.538299 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 14 23:42:41.538431 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 14 23:42:41.538537 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 14 23:42:41.538550 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 23:42:41.538560 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 14 23:42:41.538652 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 14 23:42:41.538665 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 14 23:42:41.538677 kernel: thunder_xcv, ver 1.0 Jan 14 23:42:41.538686 kernel: thunder_bgx, ver 1.0 Jan 14 23:42:41.538695 kernel: nicpf, ver 1.0 Jan 14 23:42:41.538704 kernel: nicvf, ver 1.0 Jan 14 23:42:41.538814 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 14 23:42:41.538919 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-14T23:42:40 UTC (1768434160) Jan 14 23:42:41.538933 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 23:42:41.538946 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 14 23:42:41.538954 kernel: watchdog: NMI not fully supported Jan 14 23:42:41.538964 kernel: watchdog: Hard watchdog permanently disabled Jan 14 23:42:41.538972 kernel: NET: Registered PF_INET6 protocol family Jan 14 23:42:41.538981 kernel: Segment Routing with IPv6 Jan 14 23:42:41.538990 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 23:42:41.538999 kernel: NET: Registered PF_PACKET protocol family Jan 14 23:42:41.539010 kernel: Key type dns_resolver registered Jan 14 23:42:41.539018 kernel: registered taskstats version 1 Jan 14 23:42:41.539027 kernel: Loading compiled-in X.509 certificates Jan 14 23:42:41.539036 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: a690a20944211e11dad41e677dd7158a4ddc3c87' Jan 14 23:42:41.539045 kernel: Demotion targets for Node 0: null Jan 14 23:42:41.539054 kernel: Key type .fscrypt 
registered Jan 14 23:42:41.539063 kernel: Key type fscrypt-provisioning registered Jan 14 23:42:41.539074 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 23:42:41.539083 kernel: ima: Allocated hash algorithm: sha1 Jan 14 23:42:41.539092 kernel: ima: No architecture policies found Jan 14 23:42:41.539101 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 14 23:42:41.539110 kernel: clk: Disabling unused clocks Jan 14 23:42:41.539118 kernel: PM: genpd: Disabling unused power domains Jan 14 23:42:41.539128 kernel: Freeing unused kernel memory: 12416K Jan 14 23:42:41.539138 kernel: Run /init as init process Jan 14 23:42:41.539147 kernel: with arguments: Jan 14 23:42:41.539157 kernel: /init Jan 14 23:42:41.539165 kernel: with environment: Jan 14 23:42:41.539174 kernel: HOME=/ Jan 14 23:42:41.539183 kernel: TERM=linux Jan 14 23:42:41.539192 kernel: ACPI: bus type USB registered Jan 14 23:42:41.539201 kernel: usbcore: registered new interface driver usbfs Jan 14 23:42:41.539212 kernel: usbcore: registered new interface driver hub Jan 14 23:42:41.539221 kernel: usbcore: registered new device driver usb Jan 14 23:42:41.539333 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 14 23:42:41.541201 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 14 23:42:41.541339 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 14 23:42:41.541463 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 14 23:42:41.541565 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 14 23:42:41.541659 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 14 23:42:41.541793 kernel: hub 1-0:1.0: USB hub found Jan 14 23:42:41.541942 kernel: hub 1-0:1.0: 4 ports detected Jan 14 23:42:41.542064 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 14 23:42:41.542180 kernel: hub 2-0:1.0: USB hub found Jan 14 23:42:41.542283 kernel: hub 2-0:1.0: 4 ports detected Jan 14 23:42:41.542295 kernel: SCSI subsystem initialized Jan 14 23:42:41.542444 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Jan 14 23:42:41.542644 kernel: scsi host0: Virtio SCSI HBA Jan 14 23:42:41.542768 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 14 23:42:41.542904 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 14 23:42:41.543011 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 14 23:42:41.543109 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 14 23:42:41.543207 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 14 23:42:41.543303 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 14 23:42:41.543421 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 14 23:42:41.543434 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 14 23:42:41.543444 kernel: GPT:25804799 != 80003071 Jan 14 23:42:41.543452 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 14 23:42:41.543461 kernel: GPT:25804799 != 80003071 Jan 14 23:42:41.543470 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 14 23:42:41.543478 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 23:42:41.543580 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 14 23:42:41.543679 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 14 23:42:41.543776 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 14 23:42:41.543787 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 23:42:41.543896 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 14 23:42:41.543908 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 23:42:41.543920 kernel: device-mapper: uevent: version 1.0.3 Jan 14 23:42:41.543929 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 23:42:41.543939 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 14 23:42:41.543948 kernel: raid6: neonx8 gen() 15719 MB/s Jan 14 23:42:41.543957 kernel: raid6: neonx4 gen() 12518 MB/s Jan 14 23:42:41.543966 kernel: raid6: neonx2 gen() 13148 MB/s Jan 14 23:42:41.543974 kernel: raid6: neonx1 gen() 10409 MB/s Jan 14 23:42:41.543984 kernel: raid6: int64x8 gen() 6792 MB/s Jan 14 23:42:41.543993 kernel: raid6: int64x4 gen() 7303 MB/s Jan 14 23:42:41.544002 kernel: raid6: int64x2 gen() 6077 MB/s Jan 14 23:42:41.544011 kernel: raid6: int64x1 gen() 3914 MB/s Jan 14 23:42:41.544138 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 14 23:42:41.544152 kernel: raid6: using algorithm neonx8 gen() 15719 MB/s Jan 14 23:42:41.544161 kernel: raid6: .... xor() 11956 MB/s, rmw enabled Jan 14 23:42:41.544172 kernel: raid6: using neon recovery algorithm Jan 14 23:42:41.544181 kernel: xor: measuring software checksum speed Jan 14 23:42:41.544190 kernel: 8regs : 21624 MB/sec Jan 14 23:42:41.544199 kernel: 32regs : 21693 MB/sec Jan 14 23:42:41.544207 kernel: arm64_neon : 28109 MB/sec Jan 14 23:42:41.544216 kernel: xor: using function: arm64_neon (28109 MB/sec) Jan 14 23:42:41.544225 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 23:42:41.544237 kernel: BTRFS: device fsid 78d59ed4-d19c-4fcc-8998-5f0c19b42daf devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (211) Jan 14 23:42:41.544246 kernel: BTRFS info (device dm-0): first mount of filesystem 78d59ed4-d19c-4fcc-8998-5f0c19b42daf Jan 14 23:42:41.544255 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 14 23:42:41.544264 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 14 23:42:41.544272 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 23:42:41.544281 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 23:42:41.544290 kernel: loop: module loaded Jan 14 23:42:41.544300 kernel: loop0: detected capacity change from 0 to 91488 Jan 14 23:42:41.544309 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 23:42:41.545886 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 14 23:42:41.545918 systemd[1]: Successfully made /usr/ read-only. Jan 14 23:42:41.545931 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 23:42:41.545948 systemd[1]: Detected virtualization kvm. 
Jan 14 23:42:41.545959 systemd[1]: Detected architecture arm64. Jan 14 23:42:41.545968 systemd[1]: Running in initrd. Jan 14 23:42:41.545977 systemd[1]: No hostname configured, using default hostname. Jan 14 23:42:41.545988 systemd[1]: Hostname set to . Jan 14 23:42:41.545997 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 23:42:41.546007 systemd[1]: Queued start job for default target initrd.target. Jan 14 23:42:41.546018 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 23:42:41.546027 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 23:42:41.546037 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 23:42:41.546047 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 23:42:41.546057 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 23:42:41.546067 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 23:42:41.546079 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 23:42:41.546088 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 23:42:41.546098 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 23:42:41.546108 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 23:42:41.546117 systemd[1]: Reached target paths.target - Path Units. Jan 14 23:42:41.546127 systemd[1]: Reached target slices.target - Slice Units. Jan 14 23:42:41.546138 systemd[1]: Reached target swap.target - Swaps. Jan 14 23:42:41.546147 systemd[1]: Reached target timers.target - Timer Units. Jan 14 23:42:41.546157 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 23:42:41.546166 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 23:42:41.546176 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 23:42:41.546185 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 23:42:41.546194 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 14 23:42:41.546205 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 23:42:41.546215 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 23:42:41.546224 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 23:42:41.546234 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 23:42:41.546243 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 23:42:41.546253 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 23:42:41.546263 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 23:42:41.546274 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 23:42:41.546283 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 23:42:41.546293 systemd[1]: Starting systemd-fsck-usr.service... 
Jan 14 23:42:41.546302 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 23:42:41.546312 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 23:42:41.546323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 23:42:41.546334 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 23:42:41.546343 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 23:42:41.546352 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 23:42:41.546361 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 23:42:41.546428 systemd-journald[348]: Collecting audit messages is enabled. Jan 14 23:42:41.546453 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 23:42:41.546463 kernel: Bridge firewalling registered Jan 14 23:42:41.546475 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 23:42:41.546485 kernel: audit: type=1130 audit(1768434161.504:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.546494 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 23:42:41.546504 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 23:42:41.546520 kernel: audit: type=1130 audit(1768434161.530:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.546529 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:42:41.546541 kernel: audit: type=1130 audit(1768434161.534:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.546550 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 23:42:41.546560 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 23:42:41.546570 systemd-journald[348]: Journal started Jan 14 23:42:41.546591 systemd-journald[348]: Runtime Journal (/run/log/journal/20f0b849eacd4abe8c61591df5cccf11) is 8M, max 76.5M, 68.5M free. Jan 14 23:42:41.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.496488 systemd-modules-load[349]: Inserted module 'br_netfilter' Jan 14 23:42:41.549835 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 14 23:42:41.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.554066 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 23:42:41.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.557609 kernel: audit: type=1130 audit(1768434161.551:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.557661 kernel: audit: type=1130 audit(1768434161.555:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.560000 audit: BPF prog-id=6 op=LOAD Jan 14 23:42:41.561421 kernel: audit: type=1334 audit(1768434161.560:7): prog-id=6 op=LOAD Jan 14 23:42:41.563718 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 23:42:41.571756 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 23:42:41.577550 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 23:42:41.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.582508 kernel: audit: type=1130 audit(1768434161.578:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.589069 systemd-tmpfiles[375]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 23:42:41.596523 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 23:42:41.600279 kernel: audit: type=1130 audit(1768434161.596:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.599767 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 23:42:41.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.604416 kernel: audit: type=1130 audit(1768434161.600:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.606610 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 23:42:41.633940 systemd-resolved[374]: Positive Trust Anchors: Jan 14 23:42:41.633959 systemd-resolved[374]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 23:42:41.633962 systemd-resolved[374]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 23:42:41.633994 systemd-resolved[374]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 23:42:41.648088 dracut-cmdline[390]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e4a6d042213df6c386c00b2ef561482ef59cf24ca6770345ce520c577e366e5a Jan 14 23:42:41.668759 systemd-resolved[374]: Defaulting to hostname 'linux'. Jan 14 23:42:41.670588 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 23:42:41.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.672344 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 23:42:41.754466 kernel: Loading iSCSI transport class v2.0-870. Jan 14 23:42:41.765501 kernel: iscsi: registered transport (tcp) Jan 14 23:42:41.782433 kernel: iscsi: registered transport (qla4xxx) Jan 14 23:42:41.782497 kernel: QLogic iSCSI HBA Driver Jan 14 23:42:41.813959 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 23:42:41.840643 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 23:42:41.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.843369 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 23:42:41.894873 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 23:42:41.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.897775 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 23:42:41.902095 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 23:42:41.946968 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 23:42:41.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:42:41.950000 audit: BPF prog-id=7 op=LOAD Jan 14 23:42:41.950000 audit: BPF prog-id=8 op=LOAD Jan 14 23:42:41.952491 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 23:42:41.987895 systemd-udevd[625]: Using default interface naming scheme 'v257'. Jan 14 23:42:41.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.997270 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 23:42:42.003313 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 23:42:42.031265 dracut-pre-trigger[678]: rd.md=0: removing MD RAID activation Jan 14 23:42:42.059212 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 23:42:42.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:42.063000 audit: BPF prog-id=9 op=LOAD Jan 14 23:42:42.064557 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 23:42:42.080170 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 23:42:42.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:42.083683 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 23:42:42.115631 systemd-networkd[753]: lo: Link UP Jan 14 23:42:42.115645 systemd-networkd[753]: lo: Gained carrier Jan 14 23:42:42.116382 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 23:42:42.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:42.117519 systemd[1]: Reached target network.target - Network. Jan 14 23:42:42.162476 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 23:42:42.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:42.166487 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 23:42:42.327387 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 14 23:42:42.338264 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 14 23:42:42.348808 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 14 23:42:42.358099 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 23:42:42.369421 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 14 23:42:42.379209 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. 
Jan 14 23:42:42.383209 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 14 23:42:42.389811 disk-uuid[807]: Primary Header is updated. Jan 14 23:42:42.389811 disk-uuid[807]: Secondary Entries is updated. Jan 14 23:42:42.389811 disk-uuid[807]: Secondary Header is updated. Jan 14 23:42:42.396807 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 14 23:42:42.397883 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 23:42:42.398019 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:42:42.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:42.402270 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 23:42:42.404961 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 23:42:42.438212 systemd-networkd[753]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:42:42.438227 systemd-networkd[753]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 23:42:42.441376 systemd-networkd[753]: eth1: Link UP Jan 14 23:42:42.441959 systemd-networkd[753]: eth1: Gained carrier Jan 14 23:42:42.441978 systemd-networkd[753]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:42:42.451371 systemd-networkd[753]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:42:42.453386 systemd-networkd[753]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 23:42:42.455901 systemd-networkd[753]: eth0: Link UP Jan 14 23:42:42.456110 systemd-networkd[753]: eth0: Gained carrier Jan 14 23:42:42.456125 systemd-networkd[753]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:42:42.471474 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:42:42.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:42.484417 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 14 23:42:42.484639 kernel: usbcore: registered new interface driver usbhid Jan 14 23:42:42.484652 kernel: usbhid: USB HID core driver Jan 14 23:42:42.490473 systemd-networkd[753]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 14 23:42:42.515670 systemd-networkd[753]: eth0: DHCPv4 address 46.224.65.210/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 14 23:42:42.521532 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 23:42:42.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:42:42.523307 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 23:42:42.525130 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 23:42:42.525989 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 23:42:42.528504 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 23:42:42.566913 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 23:42:42.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:43.436410 disk-uuid[808]: Warning: The kernel is still using the old partition table. Jan 14 23:42:43.436410 disk-uuid[808]: The new table will be used at the next reboot or after you Jan 14 23:42:43.436410 disk-uuid[808]: run partprobe(8) or kpartx(8) Jan 14 23:42:43.436410 disk-uuid[808]: The operation has completed successfully. Jan 14 23:42:43.447922 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 23:42:43.449456 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 23:42:43.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:43.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:43.451919 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 23:42:43.489458 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (842) Jan 14 23:42:43.491482 kernel: BTRFS info (device sda6): first mount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941 Jan 14 23:42:43.491556 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 23:42:43.495544 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 14 23:42:43.495638 kernel: BTRFS info (device sda6): turning on async discard Jan 14 23:42:43.495678 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 23:42:43.502430 kernel: BTRFS info (device sda6): last unmount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941 Jan 14 23:42:43.505615 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 23:42:43.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:43.508887 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 23:42:43.648612 ignition[861]: Ignition 2.22.0 Jan 14 23:42:43.649276 ignition[861]: Stage: fetch-offline Jan 14 23:42:43.649326 ignition[861]: no configs at "/usr/lib/ignition/base.d" Jan 14 23:42:43.649338 ignition[861]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 23:42:43.649529 ignition[861]: parsed url from cmdline: "" Jan 14 23:42:43.649533 ignition[861]: no config URL provided Jan 14 23:42:43.649538 ignition[861]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 23:42:43.653505 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
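The disk-uuid warning above means the kernel keeps using the old partition table until the next reboot or an explicit re-read with one of the tools the message names. A minimal sketch of triggering that re-read from a script with partprobe(8) (the device path is only a hypothetical example, not taken from this log) might look like:

    import subprocess

    # Ask the kernel to re-read the partition table on one disk, as the
    # disk-uuid warning suggests; /dev/sda is an illustrative device path.
    subprocess.run(["partprobe", "/dev/sda"], check=True)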
Jan 14 23:42:43.649548 ignition[861]: no config at "/usr/lib/ignition/user.ign" Jan 14 23:42:43.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:43.649553 ignition[861]: failed to fetch config: resource requires networking Jan 14 23:42:43.649798 ignition[861]: Ignition finished successfully Jan 14 23:42:43.658571 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 14 23:42:43.704341 ignition[869]: Ignition 2.22.0 Jan 14 23:42:43.705038 ignition[869]: Stage: fetch Jan 14 23:42:43.705227 ignition[869]: no configs at "/usr/lib/ignition/base.d" Jan 14 23:42:43.705236 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 23:42:43.705335 ignition[869]: parsed url from cmdline: "" Jan 14 23:42:43.705338 ignition[869]: no config URL provided Jan 14 23:42:43.705343 ignition[869]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 23:42:43.705349 ignition[869]: no config at "/usr/lib/ignition/user.ign" Jan 14 23:42:43.705385 ignition[869]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 14 23:42:43.714519 ignition[869]: GET result: OK Jan 14 23:42:43.714745 ignition[869]: parsing config with SHA512: 30cefec57a550aca832d202e2a4b3751673242770d168bbd7beeafe2d55ec68ea913f9d547800b08efd5cf5d1cb11d5fa762dcc931cd431fdb178f9650dddeef Jan 14 23:42:43.722943 unknown[869]: fetched base config from "system" Jan 14 23:42:43.722958 unknown[869]: fetched base config from "system" Jan 14 23:42:43.723328 ignition[869]: fetch: fetch complete Jan 14 23:42:43.722963 unknown[869]: fetched user config from "hetzner" Jan 14 23:42:43.723334 ignition[869]: fetch: fetch passed Jan 14 23:42:43.723379 ignition[869]: Ignition finished successfully Jan 14 23:42:43.727191 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 23:42:43.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:43.732543 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 23:42:43.742520 systemd-networkd[753]: eth0: Gained IPv6LL Jan 14 23:42:43.766371 ignition[875]: Ignition 2.22.0 Jan 14 23:42:43.766408 ignition[875]: Stage: kargs Jan 14 23:42:43.766567 ignition[875]: no configs at "/usr/lib/ignition/base.d" Jan 14 23:42:43.766576 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 23:42:43.767462 ignition[875]: kargs: kargs passed Jan 14 23:42:43.769551 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 23:42:43.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:43.767510 ignition[875]: Ignition finished successfully Jan 14 23:42:43.773047 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
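The fetch stage above pulls the user-provided config from the Hetzner metadata service and logs the SHA512 of what it received before parsing it. A rough Python sketch of that request and digest, assuming the same endpoint shown in the log (Ignition itself is not Python, so this only illustrates the flow the log records), could be:

    import hashlib
    import urllib.request

    # Same endpoint Ignition logs above; only reachable from inside a Hetzner VM.
    URL = "http://169.254.169.254/hetzner/v1/userdata"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        userdata = resp.read()

    # Ignition reports the digest of the raw config before parsing it.
    print("SHA512:", hashlib.sha512(userdata).hexdigest())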
Jan 14 23:42:43.805060 ignition[881]: Ignition 2.22.0 Jan 14 23:42:43.805744 ignition[881]: Stage: disks Jan 14 23:42:43.805956 ignition[881]: no configs at "/usr/lib/ignition/base.d" Jan 14 23:42:43.805965 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 23:42:43.808719 ignition[881]: disks: disks passed Jan 14 23:42:43.809226 ignition[881]: Ignition finished successfully Jan 14 23:42:43.811629 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 23:42:43.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:43.813200 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 23:42:43.814743 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 23:42:43.816716 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 23:42:43.817761 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 23:42:43.819593 systemd[1]: Reached target basic.target - Basic System. Jan 14 23:42:43.821863 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 23:42:43.869617 systemd-fsck[890]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 14 23:42:43.874299 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 23:42:43.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:43.878890 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 23:42:43.965076 kernel: EXT4-fs (sda9): mounted filesystem 05dab3f9-40c2-46d9-a2a2-3da8ed7c4451 r/w with ordered data mode. Quota mode: none. Jan 14 23:42:43.964014 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 23:42:43.967285 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 23:42:43.972014 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 23:42:43.973870 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 23:42:43.981293 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 14 23:42:43.983557 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 23:42:43.983604 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 23:42:43.993800 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 23:42:43.997193 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 23:42:44.008431 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (898) Jan 14 23:42:44.013976 kernel: BTRFS info (device sda6): first mount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941 Jan 14 23:42:44.014056 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 23:42:44.031471 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 14 23:42:44.031565 kernel: BTRFS info (device sda6): turning on async discard Jan 14 23:42:44.033440 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 23:42:44.037210 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 14 23:42:44.058582 coreos-metadata[900]: Jan 14 23:42:44.058 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 14 23:42:44.060625 coreos-metadata[900]: Jan 14 23:42:44.059 INFO Fetch successful Jan 14 23:42:44.062563 coreos-metadata[900]: Jan 14 23:42:44.061 INFO wrote hostname ci-4515-1-0-n-ec6f9a8ce8 to /sysroot/etc/hostname Jan 14 23:42:44.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:44.065457 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 23:42:44.072159 initrd-setup-root[926]: cut: /sysroot/etc/passwd: No such file or directory Jan 14 23:42:44.078484 initrd-setup-root[933]: cut: /sysroot/etc/group: No such file or directory Jan 14 23:42:44.085048 initrd-setup-root[940]: cut: /sysroot/etc/shadow: No such file or directory Jan 14 23:42:44.091052 initrd-setup-root[947]: cut: /sysroot/etc/gshadow: No such file or directory Jan 14 23:42:44.125597 systemd-networkd[753]: eth1: Gained IPv6LL Jan 14 23:42:44.205946 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 23:42:44.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:44.210882 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 23:42:44.213241 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 23:42:44.232434 kernel: BTRFS info (device sda6): last unmount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941 Jan 14 23:42:44.254794 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 23:42:44.258818 kernel: kauditd_printk_skb: 26 callbacks suppressed Jan 14 23:42:44.258851 kernel: audit: type=1130 audit(1768434164.255:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:44.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:44.270807 ignition[1016]: INFO : Ignition 2.22.0 Jan 14 23:42:44.270807 ignition[1016]: INFO : Stage: mount Jan 14 23:42:44.272082 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 23:42:44.272082 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 23:42:44.272082 ignition[1016]: INFO : mount: mount passed Jan 14 23:42:44.276222 ignition[1016]: INFO : Ignition finished successfully Jan 14 23:42:44.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:44.274833 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 23:42:44.279382 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 23:42:44.282596 kernel: audit: type=1130 audit(1768434164.275:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:44.476422 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
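The hostname agent above fetches the machine name from the same metadata service and writes it into the target root before the switch. A simplified sketch of that behaviour, assuming the endpoint and destination shown in the coreos-metadata lines (the real flatcar-metadata-hostname unit is not Python, and the /sysroot prefix only applies inside the initrd), might be:

    import urllib.request

    # Endpoint and destination taken from the coreos-metadata lines above.
    URL = "http://169.254.169.254/hetzner/v1/metadata/hostname"

    with urllib.request.urlopen(URL, timeout=5) as resp:
        hostname = resp.read().decode().strip()

    with open("/sysroot/etc/hostname", "w") as f:
        f.write(hostname + "\n")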
Jan 14 23:42:44.482770 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 23:42:44.507452 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1026) Jan 14 23:42:44.509503 kernel: BTRFS info (device sda6): first mount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941 Jan 14 23:42:44.509574 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 23:42:44.513500 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 14 23:42:44.513579 kernel: BTRFS info (device sda6): turning on async discard Jan 14 23:42:44.513613 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 23:42:44.516761 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 23:42:44.553418 ignition[1043]: INFO : Ignition 2.22.0 Jan 14 23:42:44.553418 ignition[1043]: INFO : Stage: files Jan 14 23:42:44.553418 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 23:42:44.553418 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 23:42:44.557252 ignition[1043]: DEBUG : files: compiled without relabeling support, skipping Jan 14 23:42:44.558617 ignition[1043]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 23:42:44.558617 ignition[1043]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 23:42:44.564886 ignition[1043]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 23:42:44.566373 ignition[1043]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 23:42:44.568846 unknown[1043]: wrote ssh authorized keys file for user: core Jan 14 23:42:44.570648 ignition[1043]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 23:42:44.572680 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 14 23:42:44.574205 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 14 23:42:44.662526 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 23:42:44.743989 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 14 23:42:44.743989 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 23:42:44.746520 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 23:42:44.746520 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 23:42:44.746520 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 23:42:44.746520 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 23:42:44.746520 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 23:42:44.746520 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 23:42:44.746520 ignition[1043]: 
INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 23:42:44.757560 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 23:42:44.757560 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 23:42:44.757560 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 14 23:42:44.757560 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 14 23:42:44.757560 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 14 23:42:44.757560 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Jan 14 23:42:45.065831 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 23:42:45.540029 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Jan 14 23:42:45.540029 ignition[1043]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 23:42:45.544424 ignition[1043]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 23:42:45.544424 ignition[1043]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 23:42:45.544424 ignition[1043]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 23:42:45.544424 ignition[1043]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 14 23:42:45.544424 ignition[1043]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 14 23:42:45.544424 ignition[1043]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 14 23:42:45.544424 ignition[1043]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 14 23:42:45.544424 ignition[1043]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 14 23:42:45.556890 kernel: audit: type=1130 audit(1768434165.547:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:42:45.556970 ignition[1043]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 23:42:45.556970 ignition[1043]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 23:42:45.556970 ignition[1043]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 23:42:45.556970 ignition[1043]: INFO : files: files passed Jan 14 23:42:45.556970 ignition[1043]: INFO : Ignition finished successfully Jan 14 23:42:45.546834 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 23:42:45.549118 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 23:42:45.552708 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 23:42:45.568658 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 23:42:45.569557 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 23:42:45.575426 kernel: audit: type=1130 audit(1768434165.571:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.575483 kernel: audit: type=1131 audit(1768434165.571:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.583723 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 23:42:45.585746 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 23:42:45.587077 initrd-setup-root-after-ignition[1074]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 23:42:45.589433 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 23:42:45.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.591491 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 23:42:45.597574 kernel: audit: type=1130 audit(1768434165.590:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.596141 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 23:42:45.658602 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 23:42:45.659748 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 14 23:42:45.667589 kernel: audit: type=1130 audit(1768434165.662:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.667620 kernel: audit: type=1131 audit(1768434165.662:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.662718 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 23:42:45.667757 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 23:42:45.669147 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 23:42:45.670033 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 23:42:45.702424 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 23:42:45.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.705685 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 23:42:45.711458 kernel: audit: type=1130 audit(1768434165.703:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.736441 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 23:42:45.736598 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 23:42:45.738989 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 23:42:45.739957 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 23:42:45.741958 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 23:42:45.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.742094 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 23:42:45.747600 kernel: audit: type=1131 audit(1768434165.743:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.746165 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 23:42:45.746968 systemd[1]: Stopped target basic.target - Basic System. Jan 14 23:42:45.748197 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 23:42:45.749382 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 23:42:45.750557 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Jan 14 23:42:45.751892 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 23:42:45.753094 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 23:42:45.754246 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 23:42:45.755605 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 23:42:45.756732 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 23:42:45.757994 systemd[1]: Stopped target swap.target - Swaps. Jan 14 23:42:45.759013 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 23:42:45.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.759151 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 23:42:45.760584 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 23:42:45.761290 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 23:42:45.762478 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 23:42:45.765448 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 23:42:45.766958 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 23:42:45.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.767217 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 23:42:45.769654 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 23:42:45.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.769915 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 23:42:45.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.771468 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 23:42:45.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.771668 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 23:42:45.773368 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 23:42:45.773583 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 23:42:45.776102 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 23:42:45.781352 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 23:42:45.781947 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 23:42:45.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:42:45.782114 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 23:42:45.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.785764 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 23:42:45.785949 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 23:42:45.787250 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 23:42:45.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.787355 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 23:42:45.799832 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 23:42:45.801500 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 23:42:45.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.814587 ignition[1098]: INFO : Ignition 2.22.0 Jan 14 23:42:45.814587 ignition[1098]: INFO : Stage: umount Jan 14 23:42:45.818114 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 23:42:45.818114 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 23:42:45.818114 ignition[1098]: INFO : umount: umount passed Jan 14 23:42:45.818114 ignition[1098]: INFO : Ignition finished successfully Jan 14 23:42:45.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.819566 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 23:42:45.819759 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 23:42:45.823037 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 23:42:45.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.823137 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 23:42:45.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.826642 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 23:42:45.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.826713 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 23:42:45.828441 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Jan 14 23:42:45.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.828499 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 23:42:45.834298 systemd[1]: Stopped target network.target - Network. Jan 14 23:42:45.836148 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 23:42:45.836221 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 23:42:45.837763 systemd[1]: Stopped target paths.target - Path Units. Jan 14 23:42:45.838961 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 23:42:45.843060 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 23:42:45.847587 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 23:42:45.852472 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 23:42:45.854498 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 23:42:45.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.854551 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 23:42:45.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.856344 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 23:42:45.856389 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 23:42:45.859283 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 23:42:45.859311 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 14 23:42:45.861010 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 23:42:45.861077 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 23:42:45.862995 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 23:42:45.863049 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 23:42:45.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.864334 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 23:42:45.869671 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 23:42:45.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.876144 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 23:42:45.878688 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 23:42:45.878817 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 23:42:45.880737 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 23:42:45.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:42:45.880871 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 23:42:45.884388 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 23:42:45.885582 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 23:42:45.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.888330 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 23:42:45.888485 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 23:42:45.892373 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 23:42:45.891000 audit: BPF prog-id=6 op=UNLOAD Jan 14 23:42:45.893000 audit: BPF prog-id=9 op=UNLOAD Jan 14 23:42:45.893166 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 23:42:45.893212 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 23:42:45.896576 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 23:42:45.897138 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 23:42:45.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.897215 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 23:42:45.900515 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 23:42:45.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.900622 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 23:42:45.902994 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 23:42:45.903074 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 23:42:45.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.905551 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 23:42:45.928567 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 23:42:45.930477 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 23:42:45.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.932606 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 23:42:45.933275 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 23:42:45.934903 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 23:42:45.934945 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 23:42:45.937033 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 23:42:45.937106 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 14 23:42:45.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.940546 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 23:42:45.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.940624 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 23:42:45.941696 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 23:42:45.941749 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 23:42:45.944532 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 23:42:45.950023 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 23:42:45.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.950102 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 23:42:45.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.953524 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 23:42:45.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.953586 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 23:42:45.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.955193 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 23:42:45.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.955242 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 23:42:45.957229 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 23:42:45.957279 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 23:42:45.960333 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 23:42:45.960446 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:42:45.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:42:45.966741 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 23:42:45.967535 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 23:42:45.970975 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 23:42:45.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:45.971102 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 23:42:45.972153 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 23:42:45.974105 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 23:42:45.998046 systemd[1]: Switching root. Jan 14 23:42:46.045743 systemd-journald[348]: Journal stopped Jan 14 23:42:47.057948 systemd-journald[348]: Received SIGTERM from PID 1 (systemd). Jan 14 23:42:47.058031 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 23:42:47.058049 kernel: SELinux: policy capability open_perms=1 Jan 14 23:42:47.058060 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 23:42:47.058074 kernel: SELinux: policy capability always_check_network=0 Jan 14 23:42:47.058092 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 23:42:47.058103 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 23:42:47.058113 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 23:42:47.058123 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 23:42:47.058133 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 23:42:47.058144 systemd[1]: Successfully loaded SELinux policy in 74.773ms. Jan 14 23:42:47.058170 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.492ms. Jan 14 23:42:47.058182 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 23:42:47.058194 systemd[1]: Detected virtualization kvm. Jan 14 23:42:47.058205 systemd[1]: Detected architecture arm64. Jan 14 23:42:47.058217 systemd[1]: Detected first boot. Jan 14 23:42:47.058228 systemd[1]: Hostname set to <ci-4515-1-0-n-ec6f9a8ce8>. Jan 14 23:42:47.058239 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 23:42:47.058251 zram_generator::config[1142]: No configuration found. Jan 14 23:42:47.058266 kernel: NET: Registered PF_VSOCK protocol family Jan 14 23:42:47.058277 systemd[1]: Populated /etc with preset unit settings. Jan 14 23:42:47.058288 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 23:42:47.058300 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 23:42:47.058311 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 23:42:47.058323 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 23:42:47.058336 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 14 23:42:47.058350 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 23:42:47.058362 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 23:42:47.058373 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 23:42:47.058384 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 23:42:47.059452 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 23:42:47.059486 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 23:42:47.059498 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 23:42:47.059511 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 23:42:47.059523 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 23:42:47.059534 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 23:42:47.059545 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 23:42:47.059556 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 23:42:47.059567 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 14 23:42:47.059585 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 23:42:47.059597 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 23:42:47.059609 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 23:42:47.059620 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 23:42:47.059633 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 23:42:47.059644 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 23:42:47.059655 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 23:42:47.059667 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 23:42:47.059678 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 23:42:47.059689 systemd[1]: Reached target slices.target - Slice Units. Jan 14 23:42:47.059701 systemd[1]: Reached target swap.target - Swaps. Jan 14 23:42:47.059712 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 23:42:47.059725 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 23:42:47.059736 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 23:42:47.059768 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 23:42:47.059780 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 23:42:47.059792 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 23:42:47.059805 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 14 23:42:47.059816 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 23:42:47.059830 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 23:42:47.059841 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 14 23:42:47.059852 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 23:42:47.059863 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 23:42:47.059874 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 23:42:47.059887 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 23:42:47.059902 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 23:42:47.059915 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 23:42:47.059926 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 23:42:47.059937 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 23:42:47.059948 systemd[1]: Reached target machines.target - Containers. Jan 14 23:42:47.059960 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 23:42:47.059971 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 23:42:47.059986 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 23:42:47.059997 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 23:42:47.060008 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 23:42:47.060019 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 23:42:47.060031 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 23:42:47.060041 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 23:42:47.060054 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 23:42:47.060067 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 23:42:47.060078 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 23:42:47.060090 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 23:42:47.060101 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 23:42:47.060118 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 23:42:47.060133 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 23:42:47.060144 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 23:42:47.060156 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 23:42:47.060167 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 23:42:47.060178 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 23:42:47.060192 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 14 23:42:47.060204 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 23:42:47.060215 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 23:42:47.060226 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jan 14 23:42:47.060237 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 23:42:47.060249 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 23:42:47.060261 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 23:42:47.060274 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 23:42:47.060286 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 23:42:47.060296 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 23:42:47.060307 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 23:42:47.060321 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 23:42:47.060332 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 23:42:47.060343 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 23:42:47.060355 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 23:42:47.060368 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 23:42:47.060379 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 23:42:47.060390 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 23:42:47.060435 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 23:42:47.060450 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 23:42:47.060463 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 23:42:47.060486 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 23:42:47.060499 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 23:42:47.060510 kernel: fuse: init (API version 7.41) Jan 14 23:42:47.060523 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 23:42:47.060536 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 23:42:47.060547 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 23:42:47.060559 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 23:42:47.060571 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 23:42:47.060582 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 23:42:47.060595 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 23:42:47.060647 systemd-journald[1211]: Collecting audit messages is enabled. Jan 14 23:42:47.060677 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 23:42:47.060690 systemd-journald[1211]: Journal started Jan 14 23:42:47.060716 systemd-journald[1211]: Runtime Journal (/run/log/journal/20f0b849eacd4abe8c61591df5cccf11) is 8M, max 76.5M, 68.5M free. 
Jan 14 23:42:46.805000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 14 23:42:47.065190 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 23:42:46.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.925000 audit: BPF prog-id=14 op=UNLOAD Jan 14 23:42:46.925000 audit: BPF prog-id=13 op=UNLOAD Jan 14 23:42:46.926000 audit: BPF prog-id=15 op=LOAD Jan 14 23:42:46.926000 audit: BPF prog-id=16 op=LOAD Jan 14 23:42:46.926000 audit: BPF prog-id=17 op=LOAD Jan 14 23:42:46.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:42:47.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.051000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 23:42:47.051000 audit[1211]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=fffff6fadd40 a2=4000 a3=0 items=0 ppid=1 pid=1211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:42:47.051000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 23:42:46.740014 systemd[1]: Queued start job for default target multi-user.target. Jan 14 23:42:46.747378 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 14 23:42:46.748197 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 23:42:47.080430 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 23:42:47.080522 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 23:42:47.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.076488 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 23:42:47.086170 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 23:42:47.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.098502 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 23:42:47.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.100925 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 23:42:47.103115 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 23:42:47.116871 kernel: loop1: detected capacity change from 0 to 200800 Jan 14 23:42:47.116962 kernel: ACPI: bus type drm_connector registered Jan 14 23:42:47.116423 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 23:42:47.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 23:42:47.123999 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 23:42:47.126274 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 23:42:47.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.133290 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 23:42:47.135919 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 23:42:47.141634 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 23:42:47.151731 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 23:42:47.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.172096 systemd-journald[1211]: Time spent on flushing to /var/log/journal/20f0b849eacd4abe8c61591df5cccf11 is 60.824ms for 1299 entries. Jan 14 23:42:47.172096 systemd-journald[1211]: System Journal (/var/log/journal/20f0b849eacd4abe8c61591df5cccf11) is 8M, max 588.1M, 580.1M free. Jan 14 23:42:47.249599 systemd-journald[1211]: Received client request to flush runtime journal. Jan 14 23:42:47.249669 kernel: loop2: detected capacity change from 0 to 100192 Jan 14 23:42:47.249694 kernel: loop3: detected capacity change from 0 to 8 Jan 14 23:42:47.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.168869 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 23:42:47.175321 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 23:42:47.186636 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Jan 14 23:42:47.186657 systemd-tmpfiles[1236]: ACLs are not supported, ignoring. Jan 14 23:42:47.199019 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 23:42:47.202941 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 23:42:47.211600 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 23:42:47.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:42:47.259575 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 23:42:47.266524 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 14 23:42:47.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.277440 kernel: loop4: detected capacity change from 0 to 109872 Jan 14 23:42:47.289534 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 23:42:47.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.291000 audit: BPF prog-id=18 op=LOAD Jan 14 23:42:47.291000 audit: BPF prog-id=19 op=LOAD Jan 14 23:42:47.291000 audit: BPF prog-id=20 op=LOAD Jan 14 23:42:47.293620 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 23:42:47.294000 audit: BPF prog-id=21 op=LOAD Jan 14 23:42:47.297522 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 23:42:47.299633 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 23:42:47.311323 kernel: loop5: detected capacity change from 0 to 200800 Jan 14 23:42:47.314000 audit: BPF prog-id=22 op=LOAD Jan 14 23:42:47.314000 audit: BPF prog-id=23 op=LOAD Jan 14 23:42:47.314000 audit: BPF prog-id=24 op=LOAD Jan 14 23:42:47.317667 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 23:42:47.319000 audit: BPF prog-id=25 op=LOAD Jan 14 23:42:47.319000 audit: BPF prog-id=26 op=LOAD Jan 14 23:42:47.319000 audit: BPF prog-id=27 op=LOAD Jan 14 23:42:47.320954 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 23:42:47.345594 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Jan 14 23:42:47.345616 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Jan 14 23:42:47.347431 kernel: loop6: detected capacity change from 0 to 100192 Jan 14 23:42:47.355667 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 23:42:47.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.369430 kernel: loop7: detected capacity change from 0 to 8 Jan 14 23:42:47.374421 kernel: loop1: detected capacity change from 0 to 109872 Jan 14 23:42:47.392528 (sd-merge)[1288]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-hetzner.raw'. Jan 14 23:42:47.398929 (sd-merge)[1288]: Merged extensions into '/usr'. Jan 14 23:42:47.406308 systemd[1]: Reload requested from client PID 1235 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 23:42:47.406331 systemd[1]: Reloading... Jan 14 23:42:47.428773 systemd-nsresourced[1290]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 23:42:47.544429 zram_generator::config[1336]: No configuration found. 
Jan 14 23:42:47.602695 systemd-oomd[1285]: No swap; memory pressure usage will be degraded Jan 14 23:42:47.624859 systemd-resolved[1286]: Positive Trust Anchors: Jan 14 23:42:47.624881 systemd-resolved[1286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 23:42:47.624885 systemd-resolved[1286]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 23:42:47.624917 systemd-resolved[1286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 23:42:47.635692 systemd-resolved[1286]: Using system hostname 'ci-4515-1-0-n-ec6f9a8ce8'. Jan 14 23:42:47.752366 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 23:42:47.752677 systemd[1]: Reloading finished in 345 ms. Jan 14 23:42:47.777538 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 23:42:47.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.778501 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 23:42:47.779459 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 23:42:47.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.780374 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 23:42:47.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.781354 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 23:42:47.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.782608 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 23:42:47.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.786211 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 23:42:47.788529 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Jan 14 23:42:47.796771 systemd[1]: Starting ensure-sysext.service... Jan 14 23:42:47.805000 audit: BPF prog-id=8 op=UNLOAD Jan 14 23:42:47.805000 audit: BPF prog-id=7 op=UNLOAD Jan 14 23:42:47.805000 audit: BPF prog-id=28 op=LOAD Jan 14 23:42:47.803699 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 23:42:47.809000 audit: BPF prog-id=29 op=LOAD Jan 14 23:42:47.810506 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 23:42:47.813000 audit: BPF prog-id=30 op=LOAD Jan 14 23:42:47.815000 audit: BPF prog-id=18 op=UNLOAD Jan 14 23:42:47.815000 audit: BPF prog-id=31 op=LOAD Jan 14 23:42:47.815000 audit: BPF prog-id=32 op=LOAD Jan 14 23:42:47.815000 audit: BPF prog-id=19 op=UNLOAD Jan 14 23:42:47.815000 audit: BPF prog-id=20 op=UNLOAD Jan 14 23:42:47.816000 audit: BPF prog-id=33 op=LOAD Jan 14 23:42:47.816000 audit: BPF prog-id=25 op=UNLOAD Jan 14 23:42:47.816000 audit: BPF prog-id=34 op=LOAD Jan 14 23:42:47.816000 audit: BPF prog-id=35 op=LOAD Jan 14 23:42:47.816000 audit: BPF prog-id=26 op=UNLOAD Jan 14 23:42:47.816000 audit: BPF prog-id=27 op=UNLOAD Jan 14 23:42:47.816000 audit: BPF prog-id=36 op=LOAD Jan 14 23:42:47.817000 audit: BPF prog-id=22 op=UNLOAD Jan 14 23:42:47.817000 audit: BPF prog-id=37 op=LOAD Jan 14 23:42:47.818000 audit: BPF prog-id=38 op=LOAD Jan 14 23:42:47.818000 audit: BPF prog-id=23 op=UNLOAD Jan 14 23:42:47.818000 audit: BPF prog-id=24 op=UNLOAD Jan 14 23:42:47.819000 audit: BPF prog-id=39 op=LOAD Jan 14 23:42:47.820000 audit: BPF prog-id=21 op=UNLOAD Jan 14 23:42:47.820000 audit: BPF prog-id=40 op=LOAD Jan 14 23:42:47.820000 audit: BPF prog-id=15 op=UNLOAD Jan 14 23:42:47.820000 audit: BPF prog-id=41 op=LOAD Jan 14 23:42:47.820000 audit: BPF prog-id=42 op=LOAD Jan 14 23:42:47.820000 audit: BPF prog-id=16 op=UNLOAD Jan 14 23:42:47.820000 audit: BPF prog-id=17 op=UNLOAD Jan 14 23:42:47.824742 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 23:42:47.828805 systemd[1]: Reload requested from client PID 1372 ('systemctl') (unit ensure-sysext.service)... Jan 14 23:42:47.828930 systemd[1]: Reloading... Jan 14 23:42:47.852091 systemd-tmpfiles[1373]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 23:42:47.852125 systemd-tmpfiles[1373]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 23:42:47.852590 systemd-udevd[1374]: Using default interface naming scheme 'v257'. Jan 14 23:42:47.852949 systemd-tmpfiles[1373]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 23:42:47.854046 systemd-tmpfiles[1373]: ACLs are not supported, ignoring. Jan 14 23:42:47.854093 systemd-tmpfiles[1373]: ACLs are not supported, ignoring. Jan 14 23:42:47.866335 systemd-tmpfiles[1373]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 23:42:47.866528 systemd-tmpfiles[1373]: Skipping /boot Jan 14 23:42:47.879084 systemd-tmpfiles[1373]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 23:42:47.881586 systemd-tmpfiles[1373]: Skipping /boot Jan 14 23:42:47.972949 zram_generator::config[1430]: No configuration found. Jan 14 23:42:48.137427 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 23:42:48.188181 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 14 23:42:48.188567 systemd[1]: Reloading finished in 359 ms. 
Jan 14 23:42:48.203069 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 23:42:48.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.207013 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 23:42:48.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.232000 audit: BPF prog-id=43 op=LOAD Jan 14 23:42:48.232000 audit: BPF prog-id=33 op=UNLOAD Jan 14 23:42:48.233000 audit: BPF prog-id=44 op=LOAD Jan 14 23:42:48.233000 audit: BPF prog-id=45 op=LOAD Jan 14 23:42:48.233000 audit: BPF prog-id=34 op=UNLOAD Jan 14 23:42:48.233000 audit: BPF prog-id=35 op=UNLOAD Jan 14 23:42:48.235000 audit: BPF prog-id=46 op=LOAD Jan 14 23:42:48.235000 audit: BPF prog-id=39 op=UNLOAD Jan 14 23:42:48.236000 audit: BPF prog-id=47 op=LOAD Jan 14 23:42:48.236000 audit: BPF prog-id=40 op=UNLOAD Jan 14 23:42:48.236000 audit: BPF prog-id=48 op=LOAD Jan 14 23:42:48.236000 audit: BPF prog-id=49 op=LOAD Jan 14 23:42:48.236000 audit: BPF prog-id=41 op=UNLOAD Jan 14 23:42:48.236000 audit: BPF prog-id=42 op=UNLOAD Jan 14 23:42:48.237000 audit: BPF prog-id=50 op=LOAD Jan 14 23:42:48.237000 audit: BPF prog-id=51 op=LOAD Jan 14 23:42:48.237000 audit: BPF prog-id=28 op=UNLOAD Jan 14 23:42:48.237000 audit: BPF prog-id=29 op=UNLOAD Jan 14 23:42:48.237000 audit: BPF prog-id=52 op=LOAD Jan 14 23:42:48.237000 audit: BPF prog-id=30 op=UNLOAD Jan 14 23:42:48.237000 audit: BPF prog-id=53 op=LOAD Jan 14 23:42:48.237000 audit: BPF prog-id=54 op=LOAD Jan 14 23:42:48.237000 audit: BPF prog-id=31 op=UNLOAD Jan 14 23:42:48.237000 audit: BPF prog-id=32 op=UNLOAD Jan 14 23:42:48.238000 audit: BPF prog-id=55 op=LOAD Jan 14 23:42:48.238000 audit: BPF prog-id=36 op=UNLOAD Jan 14 23:42:48.238000 audit: BPF prog-id=56 op=LOAD Jan 14 23:42:48.238000 audit: BPF prog-id=57 op=LOAD Jan 14 23:42:48.238000 audit: BPF prog-id=37 op=UNLOAD Jan 14 23:42:48.238000 audit: BPF prog-id=38 op=UNLOAD Jan 14 23:42:48.250359 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 14 23:42:48.261664 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 23:42:48.263640 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 23:42:48.264531 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 23:42:48.267821 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 23:42:48.270223 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 23:42:48.276181 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 23:42:48.277079 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 23:42:48.277202 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 14 23:42:48.281478 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 23:42:48.282137 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 23:42:48.304500 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 23:42:48.307000 audit: BPF prog-id=58 op=LOAD Jan 14 23:42:48.308861 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 23:42:48.316704 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 23:42:48.321065 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 23:42:48.321318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 23:42:48.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.324932 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 23:42:48.325216 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 23:42:48.331340 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 23:42:48.342007 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 23:42:48.342259 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 23:42:48.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.349461 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 23:42:48.354914 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 23:42:48.358255 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 23:42:48.364806 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 23:42:48.381613 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 14 23:42:48.383577 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 23:42:48.383864 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 23:42:48.383968 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 23:42:48.394170 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 23:42:48.395459 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 23:42:48.395707 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 23:42:48.395900 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 23:42:48.401234 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 23:42:48.402000 audit[1504]: SYSTEM_BOOT pid=1504 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.410478 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 23:42:48.411606 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 23:42:48.411915 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 23:42:48.412057 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 23:42:48.419861 systemd[1]: Finished ensure-sysext.service. Jan 14 23:42:48.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.439000 audit: BPF prog-id=59 op=LOAD Jan 14 23:42:48.442703 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 14 23:42:48.445471 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 23:42:48.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.450061 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 23:42:48.450869 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 23:42:48.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 14 23:42:48.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.468674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 23:42:48.478869 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 23:42:48.480746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 23:42:48.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.484003 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 23:42:48.489326 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 14 23:42:48.489432 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 14 23:42:48.489453 kernel: [drm] features: -context_init Jan 14 23:42:48.495163 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 23:42:48.495375 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 23:42:48.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.497832 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 23:42:48.499419 kernel: [drm] number of scanouts: 1 Jan 14 23:42:48.499498 kernel: [drm] number of cap sets: 0 Jan 14 23:42:48.499645 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 23:42:48.501416 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 14 23:42:48.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.508528 kernel: Console: switching to colour frame buffer device 160x50 Jan 14 23:42:48.512586 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 23:42:48.515282 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 23:42:48.565000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 14 23:42:48.577000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 23:42:48.577000 audit[1547]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff8c8ec30 a2=420 a3=0 items=0 ppid=1493 pid=1547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:42:48.577000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 23:42:48.577903 augenrules[1547]: No rules Jan 14 23:42:48.584817 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 23:42:48.587480 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 23:42:48.598608 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 14 23:42:48.619694 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 23:42:48.623255 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 23:42:48.675740 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:42:48.716452 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 23:42:48.716883 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:42:48.719789 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 23:42:48.725572 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 23:42:48.752718 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 14 23:42:48.758426 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 23:42:48.832983 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 23:42:48.847990 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:42:48.872886 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 14 23:42:48.873887 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 23:42:48.890726 systemd-networkd[1503]: lo: Link UP Jan 14 23:42:48.891097 systemd-networkd[1503]: lo: Gained carrier Jan 14 23:42:48.893166 systemd-networkd[1503]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:42:48.893338 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 23:42:48.893772 systemd-networkd[1503]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 23:42:48.894488 systemd[1]: Reached target network.target - Network. Jan 14 23:42:48.895019 systemd-networkd[1503]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:42:48.895586 systemd-networkd[1503]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 14 23:42:48.896846 systemd-networkd[1503]: eth0: Link UP Jan 14 23:42:48.897013 systemd-networkd[1503]: eth0: Gained carrier Jan 14 23:42:48.897032 systemd-networkd[1503]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:42:48.897525 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 23:42:48.900775 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 23:42:48.903505 systemd-networkd[1503]: eth1: Link UP Jan 14 23:42:48.906978 systemd-networkd[1503]: eth1: Gained carrier Jan 14 23:42:48.907011 systemd-networkd[1503]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:42:48.928713 ldconfig[1499]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 23:42:48.934211 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 23:42:48.935898 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 23:42:48.939570 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 23:42:48.946565 systemd-networkd[1503]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 14 23:42:48.947500 systemd-timesyncd[1523]: Network configuration changed, trying to establish connection. Jan 14 23:42:48.957524 systemd-networkd[1503]: eth0: DHCPv4 address 46.224.65.210/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 14 23:42:48.958187 systemd-timesyncd[1523]: Network configuration changed, trying to establish connection. Jan 14 23:42:48.961530 systemd-timesyncd[1523]: Network configuration changed, trying to establish connection. Jan 14 23:42:48.963571 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 23:42:48.964857 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 23:42:48.965887 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 23:42:48.966860 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 23:42:48.968240 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 23:42:48.969211 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 23:42:48.970214 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 23:42:48.971273 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 23:42:48.972094 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 23:42:48.972967 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 23:42:48.973012 systemd[1]: Reached target paths.target - Path Units. Jan 14 23:42:48.973616 systemd[1]: Reached target timers.target - Timer Units. Jan 14 23:42:48.975889 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 23:42:48.978630 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 23:42:48.982045 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Jan 14 23:42:48.983480 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 23:42:48.984365 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 23:42:48.987920 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 23:42:48.989233 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 23:42:48.991134 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 23:42:48.992209 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 23:42:48.992938 systemd[1]: Reached target basic.target - Basic System. Jan 14 23:42:48.993624 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 23:42:48.993660 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 23:42:48.995009 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 23:42:48.998643 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 23:42:49.011740 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 23:42:49.016823 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 23:42:49.019864 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 23:42:49.024093 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 23:42:49.025643 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 23:42:49.028932 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 23:42:49.037757 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 23:42:49.042303 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 14 23:42:49.048063 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 23:42:49.054769 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 23:42:49.060637 extend-filesystems[1588]: Found /dev/sda6 Jan 14 23:42:49.062837 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 23:42:49.063542 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 23:42:49.064141 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 23:42:49.066039 jq[1587]: false Jan 14 23:42:49.066609 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 23:42:49.081421 extend-filesystems[1588]: Found /dev/sda9 Jan 14 23:42:49.081421 extend-filesystems[1588]: Checking size of /dev/sda9 Jan 14 23:42:49.078675 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 23:42:49.084194 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 14 23:42:49.099784 coreos-metadata[1582]: Jan 14 23:42:49.093 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 14 23:42:49.100216 extend-filesystems[1588]: Resized partition /dev/sda9 Jan 14 23:42:49.101912 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 8410107 blocks Jan 14 23:42:49.086909 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 23:42:49.102057 extend-filesystems[1609]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 23:42:49.087197 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 23:42:49.087520 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 23:42:49.087748 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 23:42:49.104239 coreos-metadata[1582]: Jan 14 23:42:49.104 INFO Fetch successful Jan 14 23:42:49.104239 coreos-metadata[1582]: Jan 14 23:42:49.104 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 14 23:42:49.105953 coreos-metadata[1582]: Jan 14 23:42:49.105 INFO Fetch successful Jan 14 23:42:49.148943 jq[1600]: true Jan 14 23:42:49.179942 update_engine[1599]: I20260114 23:42:49.177981 1599 main.cc:92] Flatcar Update Engine starting Jan 14 23:42:49.194272 tar[1622]: linux-arm64/LICENSE Jan 14 23:42:49.195383 tar[1622]: linux-arm64/helm Jan 14 23:42:49.198928 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 23:42:49.199188 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 23:42:49.219613 kernel: EXT4-fs (sda9): resized filesystem to 8410107 Jan 14 23:42:49.221521 jq[1630]: true Jan 14 23:42:49.229505 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 23:42:49.229126 dbus-daemon[1583]: [system] SELinux support is enabled Jan 14 23:42:49.235209 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 23:42:49.235243 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 23:42:49.236246 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 23:42:49.236266 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 23:42:49.249499 extend-filesystems[1609]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 14 23:42:49.249499 extend-filesystems[1609]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 14 23:42:49.249499 extend-filesystems[1609]: The filesystem on /dev/sda9 is now 8410107 (4k) blocks long. Jan 14 23:42:49.261279 extend-filesystems[1588]: Resized filesystem in /dev/sda9 Jan 14 23:42:49.270620 update_engine[1599]: I20260114 23:42:49.258647 1599 update_check_scheduler.cc:74] Next update check in 2m41s Jan 14 23:42:49.252475 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 23:42:49.253927 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 23:42:49.265519 systemd[1]: Started update-engine.service - Update Engine. Jan 14 23:42:49.285659 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 14 23:42:49.302419 bash[1664]: Updated "/home/core/.ssh/authorized_keys" Jan 14 23:42:49.315489 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 23:42:49.322583 systemd[1]: Starting sshkeys.service... Jan 14 23:42:49.378543 systemd-logind[1597]: New seat seat0. Jan 14 23:42:49.388644 systemd-logind[1597]: Watching system buttons on /dev/input/event0 (Power Button) Jan 14 23:42:49.388677 systemd-logind[1597]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 14 23:42:49.389141 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 23:42:49.400964 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 23:42:49.403509 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 23:42:49.419036 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 14 23:42:49.422726 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 14 23:42:49.525419 coreos-metadata[1674]: Jan 14 23:42:49.522 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 14 23:42:49.527407 coreos-metadata[1674]: Jan 14 23:42:49.527 INFO Fetch successful Jan 14 23:42:49.527592 sshd_keygen[1631]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 23:42:49.530613 unknown[1674]: wrote ssh authorized keys file for user: core Jan 14 23:42:49.566544 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 23:42:49.574381 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 23:42:49.580092 update-ssh-keys[1683]: Updated "/home/core/.ssh/authorized_keys" Jan 14 23:42:49.580526 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 14 23:42:49.588062 systemd[1]: Finished sshkeys.service. Jan 14 23:42:49.601318 containerd[1618]: time="2026-01-14T23:42:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 23:42:49.602248 containerd[1618]: time="2026-01-14T23:42:49.602200680Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 23:42:49.612441 locksmithd[1660]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 23:42:49.614343 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 23:42:49.617013 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 23:42:49.622075 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 14 23:42:49.633967 containerd[1618]: time="2026-01-14T23:42:49.633532120Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.96µs" Jan 14 23:42:49.633967 containerd[1618]: time="2026-01-14T23:42:49.633571520Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 23:42:49.633967 containerd[1618]: time="2026-01-14T23:42:49.633618800Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 23:42:49.633967 containerd[1618]: time="2026-01-14T23:42:49.633631240Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 23:42:49.634467 containerd[1618]: time="2026-01-14T23:42:49.634404840Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 23:42:49.634467 containerd[1618]: time="2026-01-14T23:42:49.634464840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 23:42:49.635498 containerd[1618]: time="2026-01-14T23:42:49.634573800Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 23:42:49.636503 containerd[1618]: time="2026-01-14T23:42:49.636457000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 23:42:49.636911 containerd[1618]: time="2026-01-14T23:42:49.636879040Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 23:42:49.636911 containerd[1618]: time="2026-01-14T23:42:49.636906040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 23:42:49.636960 containerd[1618]: time="2026-01-14T23:42:49.636922840Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 23:42:49.636960 containerd[1618]: time="2026-01-14T23:42:49.636932080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 23:42:49.637163 containerd[1618]: time="2026-01-14T23:42:49.637099080Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 23:42:49.637163 containerd[1618]: time="2026-01-14T23:42:49.637130760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 23:42:49.637235 containerd[1618]: time="2026-01-14T23:42:49.637215480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 23:42:49.637554 containerd[1618]: time="2026-01-14T23:42:49.637414200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 23:42:49.637554 containerd[1618]: time="2026-01-14T23:42:49.637455560Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jan 14 23:42:49.637554 containerd[1618]: time="2026-01-14T23:42:49.637466840Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 23:42:49.637554 containerd[1618]: time="2026-01-14T23:42:49.637502440Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 23:42:49.637841 containerd[1618]: time="2026-01-14T23:42:49.637810400Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 23:42:49.638871 containerd[1618]: time="2026-01-14T23:42:49.637901040Z" level=info msg="metadata content store policy set" policy=shared Jan 14 23:42:49.644211 containerd[1618]: time="2026-01-14T23:42:49.644151840Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 23:42:49.644335 containerd[1618]: time="2026-01-14T23:42:49.644242600Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 23:42:49.644494 containerd[1618]: time="2026-01-14T23:42:49.644459480Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 23:42:49.646428 containerd[1618]: time="2026-01-14T23:42:49.644543800Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 23:42:49.646428 containerd[1618]: time="2026-01-14T23:42:49.644616800Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 23:42:49.646428 containerd[1618]: time="2026-01-14T23:42:49.644638640Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 23:42:49.646428 containerd[1618]: time="2026-01-14T23:42:49.644653920Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 23:42:49.646428 containerd[1618]: time="2026-01-14T23:42:49.644679400Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 23:42:49.646428 containerd[1618]: time="2026-01-14T23:42:49.644756320Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 23:42:49.646428 containerd[1618]: time="2026-01-14T23:42:49.644775960Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 23:42:49.646428 containerd[1618]: time="2026-01-14T23:42:49.644792240Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 23:42:49.646428 containerd[1618]: time="2026-01-14T23:42:49.644806640Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 23:42:49.646428 containerd[1618]: time="2026-01-14T23:42:49.644841800Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 23:42:49.646428 containerd[1618]: time="2026-01-14T23:42:49.644860360Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 23:42:49.646428 containerd[1618]: time="2026-01-14T23:42:49.645085040Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 23:42:49.646428 
containerd[1618]: time="2026-01-14T23:42:49.645111040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 23:42:49.646428 containerd[1618]: time="2026-01-14T23:42:49.645127480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 23:42:49.646733 containerd[1618]: time="2026-01-14T23:42:49.645139800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 23:42:49.646733 containerd[1618]: time="2026-01-14T23:42:49.645167120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 23:42:49.646733 containerd[1618]: time="2026-01-14T23:42:49.645178960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 23:42:49.646733 containerd[1618]: time="2026-01-14T23:42:49.645193680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 23:42:49.646733 containerd[1618]: time="2026-01-14T23:42:49.645212680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 23:42:49.646733 containerd[1618]: time="2026-01-14T23:42:49.645234240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 23:42:49.646733 containerd[1618]: time="2026-01-14T23:42:49.645247920Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 23:42:49.646733 containerd[1618]: time="2026-01-14T23:42:49.645258840Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 23:42:49.646733 containerd[1618]: time="2026-01-14T23:42:49.645293520Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 23:42:49.646733 containerd[1618]: time="2026-01-14T23:42:49.645736640Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 23:42:49.646733 containerd[1618]: time="2026-01-14T23:42:49.645773600Z" level=info msg="Start snapshots syncer" Jan 14 23:42:49.646733 containerd[1618]: time="2026-01-14T23:42:49.645813240Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 23:42:49.646917 containerd[1618]: time="2026-01-14T23:42:49.646510480Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 23:42:49.646917 containerd[1618]: time="2026-01-14T23:42:49.646572280Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 23:42:49.647021 containerd[1618]: time="2026-01-14T23:42:49.646638800Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 23:42:49.647021 containerd[1618]: time="2026-01-14T23:42:49.646815960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 23:42:49.647021 containerd[1618]: time="2026-01-14T23:42:49.646975400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 23:42:49.647021 containerd[1618]: time="2026-01-14T23:42:49.647001000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 23:42:49.647021 containerd[1618]: time="2026-01-14T23:42:49.647016960Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 23:42:49.647105 containerd[1618]: time="2026-01-14T23:42:49.647078800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 23:42:49.647123 containerd[1618]: time="2026-01-14T23:42:49.647110400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 23:42:49.647141 containerd[1618]: time="2026-01-14T23:42:49.647128360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 23:42:49.647158 containerd[1618]: time="2026-01-14T23:42:49.647148920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 
23:42:49.648409 containerd[1618]: time="2026-01-14T23:42:49.647216360Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 23:42:49.648409 containerd[1618]: time="2026-01-14T23:42:49.647278240Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 23:42:49.648409 containerd[1618]: time="2026-01-14T23:42:49.647304240Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 23:42:49.648409 containerd[1618]: time="2026-01-14T23:42:49.647317400Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 23:42:49.648409 containerd[1618]: time="2026-01-14T23:42:49.647472680Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 23:42:49.648409 containerd[1618]: time="2026-01-14T23:42:49.647551520Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 23:42:49.648409 containerd[1618]: time="2026-01-14T23:42:49.647577200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 23:42:49.648409 containerd[1618]: time="2026-01-14T23:42:49.647593200Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 23:42:49.648409 containerd[1618]: time="2026-01-14T23:42:49.647652440Z" level=info msg="runtime interface created" Jan 14 23:42:49.648409 containerd[1618]: time="2026-01-14T23:42:49.647663000Z" level=info msg="created NRI interface" Jan 14 23:42:49.648409 containerd[1618]: time="2026-01-14T23:42:49.647679520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 23:42:49.648409 containerd[1618]: time="2026-01-14T23:42:49.647761320Z" level=info msg="Connect containerd service" Jan 14 23:42:49.648409 containerd[1618]: time="2026-01-14T23:42:49.647810600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 23:42:49.651565 containerd[1618]: time="2026-01-14T23:42:49.651156240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 23:42:49.653136 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 23:42:49.657294 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 23:42:49.660854 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 14 23:42:49.662853 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 23:42:49.737954 containerd[1618]: time="2026-01-14T23:42:49.737815880Z" level=info msg="Start subscribing containerd event" Jan 14 23:42:49.738274 containerd[1618]: time="2026-01-14T23:42:49.738248880Z" level=info msg="Start recovering state" Jan 14 23:42:49.738387 containerd[1618]: time="2026-01-14T23:42:49.738364240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 23:42:49.738516 containerd[1618]: time="2026-01-14T23:42:49.738499920Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jan 14 23:42:49.738597 containerd[1618]: time="2026-01-14T23:42:49.738367560Z" level=info msg="Start event monitor" Jan 14 23:42:49.738629 containerd[1618]: time="2026-01-14T23:42:49.738599120Z" level=info msg="Start cni network conf syncer for default" Jan 14 23:42:49.738629 containerd[1618]: time="2026-01-14T23:42:49.738615440Z" level=info msg="Start streaming server" Jan 14 23:42:49.738629 containerd[1618]: time="2026-01-14T23:42:49.738626440Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 23:42:49.738740 containerd[1618]: time="2026-01-14T23:42:49.738636800Z" level=info msg="runtime interface starting up..." Jan 14 23:42:49.738740 containerd[1618]: time="2026-01-14T23:42:49.738643800Z" level=info msg="starting plugins..." Jan 14 23:42:49.738740 containerd[1618]: time="2026-01-14T23:42:49.738662760Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 23:42:49.738861 containerd[1618]: time="2026-01-14T23:42:49.738835880Z" level=info msg="containerd successfully booted in 0.137902s" Jan 14 23:42:49.739189 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 23:42:49.835171 tar[1622]: linux-arm64/README.md Jan 14 23:42:49.855590 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 23:42:50.205601 systemd-networkd[1503]: eth0: Gained IPv6LL Jan 14 23:42:50.206455 systemd-timesyncd[1523]: Network configuration changed, trying to establish connection. Jan 14 23:42:50.209952 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 23:42:50.213095 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 23:42:50.217188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:42:50.221710 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 23:42:50.252954 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 23:42:50.397631 systemd-networkd[1503]: eth1: Gained IPv6LL Jan 14 23:42:50.398292 systemd-timesyncd[1523]: Network configuration changed, trying to establish connection. Jan 14 23:42:50.989267 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:42:50.991445 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 23:42:50.993205 systemd[1]: Startup finished in 1.900s (kernel) + 4.974s (initrd) + 4.888s (userspace) = 11.763s. Jan 14 23:42:51.000261 (kubelet)[1737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:42:51.437970 kubelet[1737]: E0114 23:42:51.437883 1737 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:42:51.442106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:42:51.442292 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 23:42:51.443191 systemd[1]: kubelet.service: Consumed 803ms CPU time, 248.5M memory peak. Jan 14 23:43:01.692868 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 23:43:01.696810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 23:43:01.868692 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:43:01.879997 (kubelet)[1756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:43:01.930825 kubelet[1756]: E0114 23:43:01.930748 1756 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:43:01.933961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:43:01.934127 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 23:43:01.934886 systemd[1]: kubelet.service: Consumed 179ms CPU time, 109.4M memory peak. Jan 14 23:43:12.184646 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 23:43:12.187872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:43:12.368027 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:43:12.383004 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:43:12.427367 kubelet[1771]: E0114 23:43:12.427311 1771 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:43:12.430582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:43:12.430721 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 23:43:12.431533 systemd[1]: kubelet.service: Consumed 179ms CPU time, 106.8M memory peak. Jan 14 23:43:19.207701 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 23:43:19.210066 systemd[1]: Started sshd@0-46.224.65.210:22-68.220.241.50:48886.service - OpenSSH per-connection server daemon (68.220.241.50:48886). Jan 14 23:43:19.790072 sshd[1779]: Accepted publickey for core from 68.220.241.50 port 48886 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:43:19.793825 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:43:19.803351 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 23:43:19.806718 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 23:43:19.813083 systemd-logind[1597]: New session 1 of user core. Jan 14 23:43:19.839606 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 23:43:19.843490 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 23:43:19.868181 (systemd)[1784]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 14 23:43:19.871704 systemd-logind[1597]: New session c1 of user core. Jan 14 23:43:20.016259 systemd[1784]: Queued start job for default target default.target. Jan 14 23:43:20.028545 systemd[1784]: Created slice app.slice - User Application Slice. 
Jan 14 23:43:20.028612 systemd[1784]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 23:43:20.028640 systemd[1784]: Reached target paths.target - Paths. Jan 14 23:43:20.028729 systemd[1784]: Reached target timers.target - Timers. Jan 14 23:43:20.031025 systemd[1784]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 23:43:20.031928 systemd[1784]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 23:43:20.045891 systemd[1784]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 23:43:20.045992 systemd[1784]: Reached target sockets.target - Sockets. Jan 14 23:43:20.049471 systemd[1784]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 23:43:20.049574 systemd[1784]: Reached target basic.target - Basic System. Jan 14 23:43:20.049649 systemd[1784]: Reached target default.target - Main User Target. Jan 14 23:43:20.049679 systemd[1784]: Startup finished in 169ms. Jan 14 23:43:20.050076 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 23:43:20.061099 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 23:43:20.379778 systemd[1]: Started sshd@1-46.224.65.210:22-68.220.241.50:48894.service - OpenSSH per-connection server daemon (68.220.241.50:48894). Jan 14 23:43:20.606216 systemd-timesyncd[1523]: Contacted time server 78.47.168.188:123 (2.flatcar.pool.ntp.org). Jan 14 23:43:20.606359 systemd-timesyncd[1523]: Initial clock synchronization to Wed 2026-01-14 23:43:20.504352 UTC. Jan 14 23:43:20.942428 sshd[1797]: Accepted publickey for core from 68.220.241.50 port 48894 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:43:20.944721 sshd-session[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:43:20.952076 systemd-logind[1597]: New session 2 of user core. Jan 14 23:43:20.957882 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 14 23:43:21.240561 sshd[1800]: Connection closed by 68.220.241.50 port 48894 Jan 14 23:43:21.241190 sshd-session[1797]: pam_unix(sshd:session): session closed for user core Jan 14 23:43:21.246451 systemd-logind[1597]: Session 2 logged out. Waiting for processes to exit. Jan 14 23:43:21.246682 systemd[1]: sshd@1-46.224.65.210:22-68.220.241.50:48894.service: Deactivated successfully. Jan 14 23:43:21.249726 systemd[1]: session-2.scope: Deactivated successfully. Jan 14 23:43:21.255387 systemd-logind[1597]: Removed session 2. Jan 14 23:43:21.350775 systemd[1]: Started sshd@2-46.224.65.210:22-68.220.241.50:48896.service - OpenSSH per-connection server daemon (68.220.241.50:48896). Jan 14 23:43:21.898225 sshd[1806]: Accepted publickey for core from 68.220.241.50 port 48896 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:43:21.900538 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:43:21.909238 systemd-logind[1597]: New session 3 of user core. Jan 14 23:43:21.915834 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 23:43:22.181337 sshd[1809]: Connection closed by 68.220.241.50 port 48896 Jan 14 23:43:22.182217 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Jan 14 23:43:22.189872 systemd-logind[1597]: Session 3 logged out. Waiting for processes to exit. Jan 14 23:43:22.190377 systemd[1]: sshd@2-46.224.65.210:22-68.220.241.50:48896.service: Deactivated successfully. 
Jan 14 23:43:22.192545 systemd[1]: session-3.scope: Deactivated successfully. Jan 14 23:43:22.194818 systemd-logind[1597]: Removed session 3. Jan 14 23:43:22.307763 systemd[1]: Started sshd@3-46.224.65.210:22-68.220.241.50:48900.service - OpenSSH per-connection server daemon (68.220.241.50:48900). Jan 14 23:43:22.552858 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 23:43:22.555314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:43:22.721836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:43:22.737277 (kubelet)[1826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:43:22.789255 kubelet[1826]: E0114 23:43:22.789182 1826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:43:22.793054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:43:22.793280 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 23:43:22.793862 systemd[1]: kubelet.service: Consumed 181ms CPU time, 106.7M memory peak. Jan 14 23:43:22.863286 sshd[1815]: Accepted publickey for core from 68.220.241.50 port 48900 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:43:22.863915 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:43:22.877046 systemd-logind[1597]: New session 4 of user core. Jan 14 23:43:22.882780 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 23:43:23.167562 sshd[1833]: Connection closed by 68.220.241.50 port 48900 Jan 14 23:43:23.168148 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Jan 14 23:43:23.175756 systemd[1]: sshd@3-46.224.65.210:22-68.220.241.50:48900.service: Deactivated successfully. Jan 14 23:43:23.178544 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 23:43:23.180526 systemd-logind[1597]: Session 4 logged out. Waiting for processes to exit. Jan 14 23:43:23.182034 systemd-logind[1597]: Removed session 4. Jan 14 23:43:23.274350 systemd[1]: Started sshd@4-46.224.65.210:22-68.220.241.50:36198.service - OpenSSH per-connection server daemon (68.220.241.50:36198). Jan 14 23:43:23.804030 sshd[1839]: Accepted publickey for core from 68.220.241.50 port 36198 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:43:23.806589 sshd-session[1839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:43:23.813345 systemd-logind[1597]: New session 5 of user core. Jan 14 23:43:23.819785 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 14 23:43:24.010272 sudo[1843]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 23:43:24.010605 sudo[1843]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 23:43:24.024490 sudo[1843]: pam_unix(sudo:session): session closed for user root Jan 14 23:43:24.120340 sshd[1842]: Connection closed by 68.220.241.50 port 36198 Jan 14 23:43:24.121572 sshd-session[1839]: pam_unix(sshd:session): session closed for user core Jan 14 23:43:24.130422 systemd[1]: sshd@4-46.224.65.210:22-68.220.241.50:36198.service: Deactivated successfully. Jan 14 23:43:24.135263 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 23:43:24.136552 systemd-logind[1597]: Session 5 logged out. Waiting for processes to exit. Jan 14 23:43:24.138313 systemd-logind[1597]: Removed session 5. Jan 14 23:43:24.234451 systemd[1]: Started sshd@5-46.224.65.210:22-68.220.241.50:36204.service - OpenSSH per-connection server daemon (68.220.241.50:36204). Jan 14 23:43:24.780114 sshd[1849]: Accepted publickey for core from 68.220.241.50 port 36204 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:43:24.782118 sshd-session[1849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:43:24.787348 systemd-logind[1597]: New session 6 of user core. Jan 14 23:43:24.798787 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 23:43:24.982166 sudo[1854]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 23:43:24.982493 sudo[1854]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 23:43:24.989366 sudo[1854]: pam_unix(sudo:session): session closed for user root Jan 14 23:43:24.999543 sudo[1853]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 23:43:24.999960 sudo[1853]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 23:43:25.013312 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 14 23:43:25.061000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 23:43:25.062715 augenrules[1876]: No rules Jan 14 23:43:25.062964 kernel: kauditd_printk_skb: 193 callbacks suppressed Jan 14 23:43:25.063002 kernel: audit: type=1305 audit(1768434205.061:236): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 23:43:25.061000 audit[1876]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc8ed4380 a2=420 a3=0 items=0 ppid=1857 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:25.070166 kernel: audit: type=1300 audit(1768434205.061:236): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc8ed4380 a2=420 a3=0 items=0 ppid=1857 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:25.070268 kernel: audit: type=1327 audit(1768434205.061:236): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 23:43:25.061000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 23:43:25.071121 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 23:43:25.071688 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 23:43:25.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.077060 sudo[1853]: pam_unix(sudo:session): session closed for user root Jan 14 23:43:25.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.080926 kernel: audit: type=1130 audit(1768434205.073:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.080983 kernel: audit: type=1131 audit(1768434205.073:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.076000 audit[1853]: USER_END pid=1853 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.084767 kernel: audit: type=1106 audit(1768434205.076:239): pid=1853 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.076000 audit[1853]: CRED_DISP pid=1853 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 23:43:25.086800 kernel: audit: type=1104 audit(1768434205.076:240): pid=1853 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.177114 sshd[1852]: Connection closed by 68.220.241.50 port 36204 Jan 14 23:43:25.178028 sshd-session[1849]: pam_unix(sshd:session): session closed for user core Jan 14 23:43:25.180000 audit[1849]: USER_END pid=1849 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:43:25.180000 audit[1849]: CRED_DISP pid=1849 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:43:25.184954 systemd-logind[1597]: Session 6 logged out. Waiting for processes to exit. Jan 14 23:43:25.185807 kernel: audit: type=1106 audit(1768434205.180:241): pid=1849 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:43:25.185868 kernel: audit: type=1104 audit(1768434205.180:242): pid=1849 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:43:25.187624 systemd[1]: sshd@5-46.224.65.210:22-68.220.241.50:36204.service: Deactivated successfully. Jan 14 23:43:25.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-46.224.65.210:22-68.220.241.50:36204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.190606 kernel: audit: type=1131 audit(1768434205.187:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-46.224.65.210:22-68.220.241.50:36204 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.191704 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 23:43:25.193572 systemd-logind[1597]: Removed session 6. Jan 14 23:43:25.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-46.224.65.210:22-68.220.241.50:36216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.289607 systemd[1]: Started sshd@6-46.224.65.210:22-68.220.241.50:36216.service - OpenSSH per-connection server daemon (68.220.241.50:36216). 
Jan 14 23:43:25.830000 audit[1885]: USER_ACCT pid=1885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:43:25.831766 sshd[1885]: Accepted publickey for core from 68.220.241.50 port 36216 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:43:25.832000 audit[1885]: CRED_ACQ pid=1885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:43:25.832000 audit[1885]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdb098f60 a2=3 a3=0 items=0 ppid=1 pid=1885 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:25.832000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:43:25.834949 sshd-session[1885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:43:25.842469 systemd-logind[1597]: New session 7 of user core. Jan 14 23:43:25.848799 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 23:43:25.853000 audit[1885]: USER_START pid=1885 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:43:25.855000 audit[1888]: CRED_ACQ pid=1888 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:43:26.036000 audit[1889]: USER_ACCT pid=1889 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:43:26.036000 audit[1889]: CRED_REFR pid=1889 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:43:26.038213 sudo[1889]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 23:43:26.038519 sudo[1889]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 23:43:26.040000 audit[1889]: USER_START pid=1889 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:43:26.363726 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 14 23:43:26.386039 (dockerd)[1907]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 23:43:26.638510 dockerd[1907]: time="2026-01-14T23:43:26.637250007Z" level=info msg="Starting up" Jan 14 23:43:26.642352 dockerd[1907]: time="2026-01-14T23:43:26.641726018Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 23:43:26.655936 dockerd[1907]: time="2026-01-14T23:43:26.655862089Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 23:43:26.673935 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3516730080-merged.mount: Deactivated successfully. Jan 14 23:43:26.691426 systemd[1]: var-lib-docker-metacopy\x2dcheck3449730232-merged.mount: Deactivated successfully. Jan 14 23:43:26.700435 dockerd[1907]: time="2026-01-14T23:43:26.700349031Z" level=info msg="Loading containers: start." Jan 14 23:43:26.710658 kernel: Initializing XFRM netlink socket Jan 14 23:43:26.770000 audit[1957]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.770000 audit[1957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe6fd6f60 a2=0 a3=0 items=0 ppid=1907 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.770000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 23:43:26.772000 audit[1959]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.772000 audit[1959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffffcf09900 a2=0 a3=0 items=0 ppid=1907 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.772000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 23:43:26.774000 audit[1961]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.774000 audit[1961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffee257f20 a2=0 a3=0 items=0 ppid=1907 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.774000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 23:43:26.778000 audit[1963]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.778000 audit[1963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe53fd90 a2=0 a3=0 items=0 ppid=1907 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 23:43:26.778000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 23:43:26.781000 audit[1965]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.781000 audit[1965]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe50a7140 a2=0 a3=0 items=0 ppid=1907 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.781000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 23:43:26.784000 audit[1967]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.784000 audit[1967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffefac6d0 a2=0 a3=0 items=0 ppid=1907 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.784000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 23:43:26.787000 audit[1969]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.787000 audit[1969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc14ad810 a2=0 a3=0 items=0 ppid=1907 pid=1969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.787000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 23:43:26.789000 audit[1971]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1971 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.789000 audit[1971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffe9b38f70 a2=0 a3=0 items=0 ppid=1907 pid=1971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.789000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 23:43:26.822000 audit[1974]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.822000 audit[1974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffda37d220 a2=0 a3=0 items=0 ppid=1907 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.822000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 23:43:26.825000 audit[1976]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.825000 audit[1976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffffad6d2b0 a2=0 a3=0 items=0 ppid=1907 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.825000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 23:43:26.828000 audit[1978]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.828000 audit[1978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd53d4610 a2=0 a3=0 items=0 ppid=1907 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.828000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 23:43:26.830000 audit[1980]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1980 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.830000 audit[1980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc56c1e50 a2=0 a3=0 items=0 ppid=1907 pid=1980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.830000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 23:43:26.832000 audit[1982]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1982 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.832000 audit[1982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffd70d780 a2=0 a3=0 items=0 ppid=1907 pid=1982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.832000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 23:43:26.870000 audit[2012]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:26.870000 audit[2012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe38b1980 a2=0 a3=0 items=0 ppid=1907 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.870000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 23:43:26.872000 audit[2014]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:26.872000 audit[2014]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd2170cc0 a2=0 a3=0 items=0 ppid=1907 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.872000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 23:43:26.874000 audit[2016]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:26.874000 audit[2016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe514bc20 a2=0 a3=0 items=0 ppid=1907 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.874000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 23:43:26.876000 audit[2018]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:26.876000 audit[2018]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff83275b0 a2=0 a3=0 items=0 ppid=1907 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.876000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 23:43:26.878000 audit[2020]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:26.878000 audit[2020]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd1680e70 a2=0 a3=0 items=0 ppid=1907 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.878000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 23:43:26.880000 audit[2022]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:26.880000 audit[2022]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdb172460 a2=0 a3=0 items=0 ppid=1907 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.880000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 23:43:26.883000 audit[2024]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2024 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:26.883000 audit[2024]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdc14b8c0 a2=0 a3=0 items=0 ppid=1907 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.883000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 23:43:26.885000 audit[2026]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:26.885000 audit[2026]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffc44037c0 a2=0 a3=0 items=0 ppid=1907 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.885000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 23:43:26.888000 audit[2028]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:26.888000 audit[2028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffc8b01e70 a2=0 a3=0 items=0 ppid=1907 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.888000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 23:43:26.890000 audit[2030]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:26.890000 audit[2030]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffffbc517d0 a2=0 a3=0 items=0 ppid=1907 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.890000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 23:43:26.893000 audit[2032]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2032 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:26.893000 audit[2032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffe28ecc30 a2=0 a3=0 items=0 ppid=1907 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.893000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 23:43:26.895000 audit[2034]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2034 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:26.895000 audit[2034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffff964d6e0 a2=0 a3=0 items=0 ppid=1907 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.895000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 23:43:26.897000 audit[2036]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2036 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:26.897000 audit[2036]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffff5ac02f0 a2=0 a3=0 items=0 ppid=1907 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.897000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 23:43:26.902000 audit[2041]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.902000 audit[2041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc541a2f0 a2=0 a3=0 items=0 ppid=1907 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.902000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 23:43:26.905000 audit[2043]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.905000 audit[2043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffca894d30 a2=0 a3=0 items=0 ppid=1907 pid=2043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.905000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 23:43:26.907000 audit[2045]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.907000 audit[2045]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc0856ad0 a2=0 a3=0 items=0 ppid=1907 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.907000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 23:43:26.909000 audit[2047]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:26.909000 audit[2047]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff79874e0 a2=0 a3=0 items=0 ppid=1907 pid=2047 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.909000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 23:43:26.912000 audit[2049]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2049 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:26.912000 audit[2049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe9e0fe40 a2=0 a3=0 items=0 ppid=1907 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.912000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 23:43:26.915000 audit[2051]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2051 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:26.915000 audit[2051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffce428e40 a2=0 a3=0 items=0 ppid=1907 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.915000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 23:43:26.940000 audit[2055]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.940000 audit[2055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffd9aa3190 a2=0 a3=0 items=0 ppid=1907 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.940000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 23:43:26.942000 audit[2057]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.942000 audit[2057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffff0ac9b50 a2=0 a3=0 items=0 ppid=1907 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.942000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 23:43:26.951000 audit[2065]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.951000 audit[2065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffdb4f2bc0 a2=0 a3=0 items=0 ppid=1907 pid=2065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.951000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 23:43:26.964000 audit[2071]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.964000 audit[2071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff34c2dc0 a2=0 a3=0 items=0 ppid=1907 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.964000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 23:43:26.970000 audit[2073]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.970000 audit[2073]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=fffff33d7560 a2=0 a3=0 items=0 ppid=1907 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.970000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 23:43:26.972000 audit[2075]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.972000 audit[2075]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffcde1e210 a2=0 a3=0 items=0 ppid=1907 pid=2075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.972000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 23:43:26.975000 audit[2077]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2077 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.975000 audit[2077]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffecc38300 a2=0 a3=0 items=0 ppid=1907 pid=2077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.975000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 23:43:26.978000 audit[2079]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:26.978000 audit[2079]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc59c29f0 a2=0 a3=0 items=0 ppid=1907 pid=2079 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.978000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 23:43:26.981054 systemd-networkd[1503]: docker0: Link UP Jan 14 23:43:26.986350 dockerd[1907]: time="2026-01-14T23:43:26.986279208Z" level=info msg="Loading containers: done." Jan 14 23:43:27.011533 dockerd[1907]: time="2026-01-14T23:43:27.011350780Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 23:43:27.011793 dockerd[1907]: time="2026-01-14T23:43:27.011586349Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 23:43:27.011915 dockerd[1907]: time="2026-01-14T23:43:27.011848725Z" level=info msg="Initializing buildkit" Jan 14 23:43:27.036941 dockerd[1907]: time="2026-01-14T23:43:27.036863684Z" level=info msg="Completed buildkit initialization" Jan 14 23:43:27.045618 dockerd[1907]: time="2026-01-14T23:43:27.045535810Z" level=info msg="Daemon has completed initialization" Jan 14 23:43:27.046253 dockerd[1907]: time="2026-01-14T23:43:27.045807294Z" level=info msg="API listen on /run/docker.sock" Jan 14 23:43:27.046728 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 23:43:27.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:28.129060 containerd[1618]: time="2026-01-14T23:43:28.128932295Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 14 23:43:28.966009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount126428699.mount: Deactivated successfully. 
Jan 14 23:43:29.675426 containerd[1618]: time="2026-01-14T23:43:29.675311932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:29.678439 containerd[1618]: time="2026-01-14T23:43:29.678211423Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=22974850" Jan 14 23:43:29.679985 containerd[1618]: time="2026-01-14T23:43:29.679919018Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:29.683268 containerd[1618]: time="2026-01-14T23:43:29.683197010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:29.685626 containerd[1618]: time="2026-01-14T23:43:29.684573576Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.55558818s" Jan 14 23:43:29.685626 containerd[1618]: time="2026-01-14T23:43:29.684651395Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Jan 14 23:43:29.686234 containerd[1618]: time="2026-01-14T23:43:29.686194711Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 14 23:43:31.311224 containerd[1618]: time="2026-01-14T23:43:31.311172137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:31.313510 containerd[1618]: time="2026-01-14T23:43:31.313346607Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19127323" Jan 14 23:43:31.314827 containerd[1618]: time="2026-01-14T23:43:31.314741929Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:31.317749 containerd[1618]: time="2026-01-14T23:43:31.317677647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:31.319642 containerd[1618]: time="2026-01-14T23:43:31.319577598Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.633321866s" Jan 14 23:43:31.319642 containerd[1618]: time="2026-01-14T23:43:31.319635524Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Jan 14 23:43:31.320555 
containerd[1618]: time="2026-01-14T23:43:31.320498032Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 14 23:43:32.355291 containerd[1618]: time="2026-01-14T23:43:32.355192650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:32.357983 containerd[1618]: time="2026-01-14T23:43:32.357270039Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14183580" Jan 14 23:43:32.359186 containerd[1618]: time="2026-01-14T23:43:32.359138281Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:32.362619 containerd[1618]: time="2026-01-14T23:43:32.362567427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:32.364761 containerd[1618]: time="2026-01-14T23:43:32.364671098Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.044123585s" Jan 14 23:43:32.364761 containerd[1618]: time="2026-01-14T23:43:32.364750745Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Jan 14 23:43:32.365466 containerd[1618]: time="2026-01-14T23:43:32.365379142Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 14 23:43:32.803224 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 23:43:32.806324 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:43:33.005557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:43:33.008902 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 14 23:43:33.008968 kernel: audit: type=1130 audit(1768434213.005:294): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:33.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:33.018983 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:43:33.067295 kubelet[2198]: E0114 23:43:33.067128 2198 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:43:33.071156 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:43:33.071290 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 14 23:43:33.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:43:33.074508 systemd[1]: kubelet.service: Consumed 181ms CPU time, 107.1M memory peak. Jan 14 23:43:33.076429 kernel: audit: type=1131 audit(1768434213.072:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:43:33.357944 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount495350894.mount: Deactivated successfully. Jan 14 23:43:33.631606 containerd[1618]: time="2026-01-14T23:43:33.631035019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:33.632747 containerd[1618]: time="2026-01-14T23:43:33.632390223Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=12960889" Jan 14 23:43:33.633631 containerd[1618]: time="2026-01-14T23:43:33.633586755Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:33.636379 containerd[1618]: time="2026-01-14T23:43:33.636340532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:33.636915 containerd[1618]: time="2026-01-14T23:43:33.636875121Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.271422151s" Jan 14 23:43:33.636915 containerd[1618]: time="2026-01-14T23:43:33.636912186Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Jan 14 23:43:33.637500 containerd[1618]: time="2026-01-14T23:43:33.637328479Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 14 23:43:34.229619 update_engine[1599]: I20260114 23:43:34.229485 1599 update_attempter.cc:509] Updating boot flags... Jan 14 23:43:34.342361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3735532992.mount: Deactivated successfully. 
Jan 14 23:43:35.162434 containerd[1618]: time="2026-01-14T23:43:35.161253307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:35.163229 containerd[1618]: time="2026-01-14T23:43:35.163144436Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=19575910" Jan 14 23:43:35.164382 containerd[1618]: time="2026-01-14T23:43:35.164325558Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:35.167883 containerd[1618]: time="2026-01-14T23:43:35.167821258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:35.169470 containerd[1618]: time="2026-01-14T23:43:35.169414252Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.532026545s" Jan 14 23:43:35.169470 containerd[1618]: time="2026-01-14T23:43:35.169465751Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Jan 14 23:43:35.170125 containerd[1618]: time="2026-01-14T23:43:35.170090764Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 14 23:43:35.679796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount740708513.mount: Deactivated successfully. 
Jan 14 23:43:35.686813 containerd[1618]: time="2026-01-14T23:43:35.686727400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:35.688777 containerd[1618]: time="2026-01-14T23:43:35.688698253Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Jan 14 23:43:35.689665 containerd[1618]: time="2026-01-14T23:43:35.689601720Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:35.693021 containerd[1618]: time="2026-01-14T23:43:35.692961886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:35.694434 containerd[1618]: time="2026-01-14T23:43:35.693428610Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 523.301317ms" Jan 14 23:43:35.694434 containerd[1618]: time="2026-01-14T23:43:35.693461545Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Jan 14 23:43:35.694757 containerd[1618]: time="2026-01-14T23:43:35.694648137Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 14 23:43:36.495686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3250605768.mount: Deactivated successfully. Jan 14 23:43:39.215845 containerd[1618]: time="2026-01-14T23:43:39.215779109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:39.217468 containerd[1618]: time="2026-01-14T23:43:39.217110498Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=96314798" Jan 14 23:43:39.218553 containerd[1618]: time="2026-01-14T23:43:39.218512087Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:39.221863 containerd[1618]: time="2026-01-14T23:43:39.221810257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:39.223208 containerd[1618]: time="2026-01-14T23:43:39.223172091Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 3.528453804s" Jan 14 23:43:39.223326 containerd[1618]: time="2026-01-14T23:43:39.223310292Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Jan 14 23:43:43.303188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Jan 14 23:43:43.306612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:43:43.450623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:43:43.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:43.454437 kernel: audit: type=1130 audit(1768434223.449:296): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:43.463921 (kubelet)[2363]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:43:43.511165 kubelet[2363]: E0114 23:43:43.511113 2363 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:43:43.514862 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:43:43.515145 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 23:43:43.516056 systemd[1]: kubelet.service: Consumed 161ms CPU time, 106.8M memory peak. Jan 14 23:43:43.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:43:43.520434 kernel: audit: type=1131 audit(1768434223.514:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:43:44.785045 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:43:44.785484 systemd[1]: kubelet.service: Consumed 161ms CPU time, 106.8M memory peak. Jan 14 23:43:44.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:44.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:44.789745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:43:44.791035 kernel: audit: type=1130 audit(1768434224.785:298): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:44.791140 kernel: audit: type=1131 audit(1768434224.785:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:44.829666 systemd[1]: Reload requested from client PID 2378 ('systemctl') (unit session-7.scope)... Jan 14 23:43:44.829694 systemd[1]: Reloading... Jan 14 23:43:44.982428 zram_generator::config[2443]: No configuration found. 
Jan 14 23:43:45.165838 systemd[1]: Reloading finished in 335 ms. Jan 14 23:43:45.201732 kernel: audit: type=1334 audit(1768434225.196:300): prog-id=63 op=LOAD Jan 14 23:43:45.201837 kernel: audit: type=1334 audit(1768434225.196:301): prog-id=43 op=UNLOAD Jan 14 23:43:45.201860 kernel: audit: type=1334 audit(1768434225.196:302): prog-id=64 op=LOAD Jan 14 23:43:45.201889 kernel: audit: type=1334 audit(1768434225.199:303): prog-id=65 op=LOAD Jan 14 23:43:45.196000 audit: BPF prog-id=63 op=LOAD Jan 14 23:43:45.196000 audit: BPF prog-id=43 op=UNLOAD Jan 14 23:43:45.196000 audit: BPF prog-id=64 op=LOAD Jan 14 23:43:45.199000 audit: BPF prog-id=65 op=LOAD Jan 14 23:43:45.203063 kernel: audit: type=1334 audit(1768434225.199:304): prog-id=44 op=UNLOAD Jan 14 23:43:45.203130 kernel: audit: type=1334 audit(1768434225.199:305): prog-id=45 op=UNLOAD Jan 14 23:43:45.199000 audit: BPF prog-id=44 op=UNLOAD Jan 14 23:43:45.199000 audit: BPF prog-id=45 op=UNLOAD Jan 14 23:43:45.200000 audit: BPF prog-id=66 op=LOAD Jan 14 23:43:45.200000 audit: BPF prog-id=46 op=UNLOAD Jan 14 23:43:45.203000 audit: BPF prog-id=67 op=LOAD Jan 14 23:43:45.203000 audit: BPF prog-id=60 op=UNLOAD Jan 14 23:43:45.203000 audit: BPF prog-id=68 op=LOAD Jan 14 23:43:45.203000 audit: BPF prog-id=69 op=LOAD Jan 14 23:43:45.203000 audit: BPF prog-id=61 op=UNLOAD Jan 14 23:43:45.203000 audit: BPF prog-id=62 op=UNLOAD Jan 14 23:43:45.204000 audit: BPF prog-id=70 op=LOAD Jan 14 23:43:45.204000 audit: BPF prog-id=55 op=UNLOAD Jan 14 23:43:45.204000 audit: BPF prog-id=71 op=LOAD Jan 14 23:43:45.204000 audit: BPF prog-id=72 op=LOAD Jan 14 23:43:45.204000 audit: BPF prog-id=56 op=UNLOAD Jan 14 23:43:45.204000 audit: BPF prog-id=57 op=UNLOAD Jan 14 23:43:45.208000 audit: BPF prog-id=73 op=LOAD Jan 14 23:43:45.208000 audit: BPF prog-id=74 op=LOAD Jan 14 23:43:45.208000 audit: BPF prog-id=50 op=UNLOAD Jan 14 23:43:45.208000 audit: BPF prog-id=51 op=UNLOAD Jan 14 23:43:45.208000 audit: BPF prog-id=75 op=LOAD Jan 14 23:43:45.214000 audit: BPF prog-id=58 op=UNLOAD Jan 14 23:43:45.215000 audit: BPF prog-id=76 op=LOAD Jan 14 23:43:45.215000 audit: BPF prog-id=47 op=UNLOAD Jan 14 23:43:45.216000 audit: BPF prog-id=77 op=LOAD Jan 14 23:43:45.216000 audit: BPF prog-id=78 op=LOAD Jan 14 23:43:45.216000 audit: BPF prog-id=48 op=UNLOAD Jan 14 23:43:45.216000 audit: BPF prog-id=49 op=UNLOAD Jan 14 23:43:45.217000 audit: BPF prog-id=79 op=LOAD Jan 14 23:43:45.217000 audit: BPF prog-id=52 op=UNLOAD Jan 14 23:43:45.217000 audit: BPF prog-id=80 op=LOAD Jan 14 23:43:45.217000 audit: BPF prog-id=81 op=LOAD Jan 14 23:43:45.217000 audit: BPF prog-id=53 op=UNLOAD Jan 14 23:43:45.217000 audit: BPF prog-id=54 op=UNLOAD Jan 14 23:43:45.217000 audit: BPF prog-id=82 op=LOAD Jan 14 23:43:45.217000 audit: BPF prog-id=59 op=UNLOAD Jan 14 23:43:45.233735 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 23:43:45.233824 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 23:43:45.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:43:45.234196 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:43:45.234253 systemd[1]: kubelet.service: Consumed 119ms CPU time, 95M memory peak. Jan 14 23:43:45.236328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 23:43:45.401196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:43:45.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:45.415972 (kubelet)[2473]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 23:43:45.462663 kubelet[2473]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 23:43:45.462663 kubelet[2473]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 23:43:45.463723 kubelet[2473]: I0114 23:43:45.463653 2473 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 23:43:45.752578 kubelet[2473]: I0114 23:43:45.752387 2473 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 14 23:43:45.752578 kubelet[2473]: I0114 23:43:45.752476 2473 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 23:43:45.752578 kubelet[2473]: I0114 23:43:45.752545 2473 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 14 23:43:45.752578 kubelet[2473]: I0114 23:43:45.752560 2473 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 23:43:45.753246 kubelet[2473]: I0114 23:43:45.753139 2473 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 23:43:45.765436 kubelet[2473]: I0114 23:43:45.765036 2473 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 23:43:45.765744 kubelet[2473]: E0114 23:43:45.765687 2473 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://46.224.65.210:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 46.224.65.210:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 23:43:45.771113 kubelet[2473]: I0114 23:43:45.771057 2473 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 23:43:45.774234 kubelet[2473]: I0114 23:43:45.774193 2473 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 14 23:43:45.774845 kubelet[2473]: I0114 23:43:45.774805 2473 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 23:43:45.775125 kubelet[2473]: I0114 23:43:45.774852 2473 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515-1-0-n-ec6f9a8ce8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 23:43:45.775125 kubelet[2473]: I0114 23:43:45.775124 2473 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 23:43:45.775277 kubelet[2473]: I0114 23:43:45.775143 2473 container_manager_linux.go:306] "Creating device plugin manager" Jan 14 23:43:45.775331 kubelet[2473]: I0114 23:43:45.775311 2473 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 14 23:43:45.778819 kubelet[2473]: I0114 23:43:45.778713 2473 state_mem.go:36] "Initialized new in-memory state store" Jan 14 23:43:45.781281 kubelet[2473]: I0114 23:43:45.781066 2473 kubelet.go:475] "Attempting to sync node with API server" Jan 14 23:43:45.781281 kubelet[2473]: I0114 23:43:45.781125 2473 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 23:43:45.783492 kubelet[2473]: E0114 23:43:45.781907 2473 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.224.65.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515-1-0-n-ec6f9a8ce8&limit=500&resourceVersion=0\": dial tcp 46.224.65.210:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 23:43:45.783492 kubelet[2473]: I0114 23:43:45.782046 2473 kubelet.go:387] "Adding apiserver pod source" Jan 14 23:43:45.783492 kubelet[2473]: I0114 23:43:45.782079 2473 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 23:43:45.783842 kubelet[2473]: E0114 23:43:45.783812 2473 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://46.224.65.210:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.224.65.210:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 23:43:45.784824 kubelet[2473]: I0114 23:43:45.784796 2473 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 23:43:45.785727 kubelet[2473]: I0114 23:43:45.785701 2473 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 23:43:45.785823 kubelet[2473]: I0114 23:43:45.785813 2473 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 14 23:43:45.785946 kubelet[2473]: W0114 23:43:45.785933 2473 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 23:43:45.789574 kubelet[2473]: I0114 23:43:45.789544 2473 server.go:1262] "Started kubelet" Jan 14 23:43:45.792251 kubelet[2473]: I0114 23:43:45.792206 2473 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 23:43:45.793311 kubelet[2473]: I0114 23:43:45.793179 2473 server.go:310] "Adding debug handlers to kubelet server" Jan 14 23:43:45.801168 kubelet[2473]: I0114 23:43:45.801119 2473 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 23:43:45.801705 kubelet[2473]: I0114 23:43:45.801349 2473 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 23:43:45.801705 kubelet[2473]: I0114 23:43:45.801454 2473 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 14 23:43:45.801705 kubelet[2473]: I0114 23:43:45.801641 2473 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 23:43:45.805203 kubelet[2473]: E0114 23:43:45.802800 2473 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://46.224.65.210:6443/api/v1/namespaces/default/events\": dial tcp 46.224.65.210:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515-1-0-n-ec6f9a8ce8.188abd8f0cd39971 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515-1-0-n-ec6f9a8ce8,UID:ci-4515-1-0-n-ec6f9a8ce8,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-ec6f9a8ce8,},FirstTimestamp:2026-01-14 23:43:45.789507953 +0000 UTC m=+0.368233453,LastTimestamp:2026-01-14 23:43:45.789507953 +0000 UTC m=+0.368233453,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-ec6f9a8ce8,}" Jan 14 23:43:45.807331 kubelet[2473]: I0114 23:43:45.807293 2473 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 23:43:45.812581 kubelet[2473]: I0114 23:43:45.812232 2473 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 14 23:43:45.812581 kubelet[2473]: E0114 23:43:45.812430 2473 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4515-1-0-n-ec6f9a8ce8\" not found" Jan 14 
23:43:45.812581 kubelet[2473]: I0114 23:43:45.812478 2473 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 14 23:43:45.812815 kubelet[2473]: I0114 23:43:45.812803 2473 reconciler.go:29] "Reconciler: start to sync state" Jan 14 23:43:45.813731 kubelet[2473]: E0114 23:43:45.813696 2473 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.224.65.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.224.65.210:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 23:43:45.812000 audit[2489]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:45.812000 audit[2489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff4e110d0 a2=0 a3=0 items=0 ppid=2473 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:45.812000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 23:43:45.814228 kubelet[2473]: E0114 23:43:45.813898 2473 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.65.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-ec6f9a8ce8?timeout=10s\": dial tcp 46.224.65.210:6443: connect: connection refused" interval="200ms" Jan 14 23:43:45.814599 kubelet[2473]: I0114 23:43:45.814573 2473 factory.go:223] Registration of the systemd container factory successfully Jan 14 23:43:45.814775 kubelet[2473]: I0114 23:43:45.814754 2473 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 23:43:45.814000 audit[2490]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:45.814000 audit[2490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1488510 a2=0 a3=0 items=0 ppid=2473 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:45.814000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 23:43:45.816000 audit[2492]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:45.816000 audit[2492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe8af7a20 a2=0 a3=0 items=0 ppid=2473 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:45.816000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 23:43:45.819725 kubelet[2473]: I0114 23:43:45.817867 2473 factory.go:223] Registration of the containerd container factory successfully Jan 14 23:43:45.819000 audit[2494]: NETFILTER_CFG table=filter:45 family=2 
entries=2 op=nft_register_chain pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:45.819000 audit[2494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffef1bc610 a2=0 a3=0 items=0 ppid=2473 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:45.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 23:43:45.825000 audit[2497]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:45.825000 audit[2497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffdc6a9850 a2=0 a3=0 items=0 ppid=2473 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:45.825000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Jan 14 23:43:45.827580 kubelet[2473]: I0114 23:43:45.827546 2473 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 14 23:43:45.827000 audit[2498]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:45.827000 audit[2498]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffcfaa11a0 a2=0 a3=0 items=0 ppid=2473 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:45.827000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 23:43:45.828780 kubelet[2473]: I0114 23:43:45.828746 2473 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 14 23:43:45.828780 kubelet[2473]: I0114 23:43:45.828780 2473 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 14 23:43:45.828854 kubelet[2473]: I0114 23:43:45.828825 2473 kubelet.go:2427] "Starting kubelet main sync loop" Jan 14 23:43:45.828898 kubelet[2473]: E0114 23:43:45.828875 2473 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 23:43:45.829000 audit[2500]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:45.829000 audit[2500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff2b82000 a2=0 a3=0 items=0 ppid=2473 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:45.829000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 23:43:45.830000 audit[2501]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:45.830000 audit[2501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff2eada60 a2=0 a3=0 items=0 ppid=2473 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:45.830000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 23:43:45.832000 audit[2502]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:45.832000 audit[2502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff110e7f0 a2=0 a3=0 items=0 ppid=2473 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:45.832000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 23:43:45.833000 audit[2503]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:45.833000 audit[2503]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff21c0380 a2=0 a3=0 items=0 ppid=2473 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:45.833000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 23:43:45.835000 audit[2504]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2504 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:45.835000 audit[2504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe0b0c070 a2=0 a3=0 items=0 ppid=2473 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:45.835000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 23:43:45.836000 audit[2505]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:45.836000 audit[2505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff7d01f30 a2=0 a3=0 items=0 ppid=2473 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:45.836000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 23:43:45.839158 kubelet[2473]: E0114 23:43:45.839127 2473 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.224.65.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.224.65.210:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 23:43:45.848101 kubelet[2473]: E0114 23:43:45.848039 2473 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 23:43:45.851667 kubelet[2473]: I0114 23:43:45.851623 2473 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 23:43:45.851667 kubelet[2473]: I0114 23:43:45.851645 2473 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 23:43:45.851667 kubelet[2473]: I0114 23:43:45.851669 2473 state_mem.go:36] "Initialized new in-memory state store" Jan 14 23:43:45.854471 kubelet[2473]: I0114 23:43:45.854161 2473 policy_none.go:49] "None policy: Start" Jan 14 23:43:45.854471 kubelet[2473]: I0114 23:43:45.854454 2473 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 14 23:43:45.854471 kubelet[2473]: I0114 23:43:45.854475 2473 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 14 23:43:45.856007 kubelet[2473]: I0114 23:43:45.855934 2473 policy_none.go:47] "Start" Jan 14 23:43:45.862801 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 23:43:45.876726 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 23:43:45.883070 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 14 23:43:45.894515 kubelet[2473]: E0114 23:43:45.894302 2473 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 23:43:45.897162 kubelet[2473]: I0114 23:43:45.896499 2473 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 23:43:45.897162 kubelet[2473]: I0114 23:43:45.896530 2473 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 23:43:45.897569 kubelet[2473]: I0114 23:43:45.897535 2473 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 23:43:45.899744 kubelet[2473]: E0114 23:43:45.899713 2473 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 23:43:45.899830 kubelet[2473]: E0114 23:43:45.899764 2473 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515-1-0-n-ec6f9a8ce8\" not found" Jan 14 23:43:45.953002 systemd[1]: Created slice kubepods-burstable-pod68687038592293bf0ef01c88c89cd030.slice - libcontainer container kubepods-burstable-pod68687038592293bf0ef01c88c89cd030.slice. Jan 14 23:43:45.966461 kubelet[2473]: E0114 23:43:45.966379 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-ec6f9a8ce8\" not found" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:45.973372 systemd[1]: Created slice kubepods-burstable-pod9687f236b56b806f01d03b989c661af3.slice - libcontainer container kubepods-burstable-pod9687f236b56b806f01d03b989c661af3.slice. Jan 14 23:43:45.976169 kubelet[2473]: E0114 23:43:45.976125 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-ec6f9a8ce8\" not found" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:45.978323 systemd[1]: Created slice kubepods-burstable-pod02934806079c707e2d96b6eb44a1010b.slice - libcontainer container kubepods-burstable-pod02934806079c707e2d96b6eb44a1010b.slice. Jan 14 23:43:45.981758 kubelet[2473]: E0114 23:43:45.981720 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-ec6f9a8ce8\" not found" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:46.001337 kubelet[2473]: I0114 23:43:46.001246 2473 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:46.002228 kubelet[2473]: E0114 23:43:46.002165 2473 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.224.65.210:6443/api/v1/nodes\": dial tcp 46.224.65.210:6443: connect: connection refused" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:46.014242 kubelet[2473]: I0114 23:43:46.013798 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9687f236b56b806f01d03b989c661af3-kubeconfig\") pod \"kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8\" (UID: \"9687f236b56b806f01d03b989c661af3\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:46.014242 kubelet[2473]: I0114 23:43:46.013922 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9687f236b56b806f01d03b989c661af3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8\" (UID: \"9687f236b56b806f01d03b989c661af3\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:46.014242 kubelet[2473]: I0114 23:43:46.014051 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02934806079c707e2d96b6eb44a1010b-kubeconfig\") pod \"kube-scheduler-ci-4515-1-0-n-ec6f9a8ce8\" (UID: \"02934806079c707e2d96b6eb44a1010b\") " pod="kube-system/kube-scheduler-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:46.014242 kubelet[2473]: I0114 23:43:46.014116 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/68687038592293bf0ef01c88c89cd030-ca-certs\") pod \"kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8\" (UID: \"68687038592293bf0ef01c88c89cd030\") " pod="kube-system/kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:46.016459 kubelet[2473]: E0114 23:43:46.015955 2473 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.65.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-ec6f9a8ce8?timeout=10s\": dial tcp 46.224.65.210:6443: connect: connection refused" interval="400ms" Jan 14 23:43:46.016459 kubelet[2473]: I0114 23:43:46.015633 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68687038592293bf0ef01c88c89cd030-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8\" (UID: \"68687038592293bf0ef01c88c89cd030\") " pod="kube-system/kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:46.016459 kubelet[2473]: I0114 23:43:46.016120 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68687038592293bf0ef01c88c89cd030-k8s-certs\") pod \"kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8\" (UID: \"68687038592293bf0ef01c88c89cd030\") " pod="kube-system/kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:46.016459 kubelet[2473]: I0114 23:43:46.016152 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9687f236b56b806f01d03b989c661af3-ca-certs\") pod \"kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8\" (UID: \"9687f236b56b806f01d03b989c661af3\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:46.016459 kubelet[2473]: I0114 23:43:46.016172 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9687f236b56b806f01d03b989c661af3-flexvolume-dir\") pod \"kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8\" (UID: \"9687f236b56b806f01d03b989c661af3\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:46.016650 kubelet[2473]: I0114 23:43:46.016191 2473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9687f236b56b806f01d03b989c661af3-k8s-certs\") pod \"kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8\" (UID: \"9687f236b56b806f01d03b989c661af3\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:46.205562 kubelet[2473]: I0114 23:43:46.205508 2473 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:46.206030 kubelet[2473]: E0114 23:43:46.205971 2473 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.224.65.210:6443/api/v1/nodes\": dial tcp 46.224.65.210:6443: connect: connection refused" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:46.271323 containerd[1618]: time="2026-01-14T23:43:46.270918915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8,Uid:68687038592293bf0ef01c88c89cd030,Namespace:kube-system,Attempt:0,}" Jan 14 23:43:46.281231 containerd[1618]: time="2026-01-14T23:43:46.281175688Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8,Uid:9687f236b56b806f01d03b989c661af3,Namespace:kube-system,Attempt:0,}" Jan 14 23:43:46.285500 containerd[1618]: time="2026-01-14T23:43:46.285422373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515-1-0-n-ec6f9a8ce8,Uid:02934806079c707e2d96b6eb44a1010b,Namespace:kube-system,Attempt:0,}" Jan 14 23:43:46.416687 kubelet[2473]: E0114 23:43:46.416598 2473 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.65.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-ec6f9a8ce8?timeout=10s\": dial tcp 46.224.65.210:6443: connect: connection refused" interval="800ms" Jan 14 23:43:46.608841 kubelet[2473]: I0114 23:43:46.608775 2473 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:46.609433 kubelet[2473]: E0114 23:43:46.609328 2473 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://46.224.65.210:6443/api/v1/nodes\": dial tcp 46.224.65.210:6443: connect: connection refused" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:46.704434 kubelet[2473]: E0114 23:43:46.704330 2473 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://46.224.65.210:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515-1-0-n-ec6f9a8ce8&limit=500&resourceVersion=0\": dial tcp 46.224.65.210:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 23:43:46.814988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1508787604.mount: Deactivated successfully. Jan 14 23:43:46.822485 containerd[1618]: time="2026-01-14T23:43:46.822389137Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 23:43:46.824235 containerd[1618]: time="2026-01-14T23:43:46.824068100Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 23:43:46.828928 containerd[1618]: time="2026-01-14T23:43:46.828807722Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 23:43:46.830671 containerd[1618]: time="2026-01-14T23:43:46.830557493Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 23:43:46.830795 containerd[1618]: time="2026-01-14T23:43:46.830726257Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 23:43:46.833021 containerd[1618]: time="2026-01-14T23:43:46.832969165Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 23:43:46.834580 containerd[1618]: time="2026-01-14T23:43:46.833702115Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 23:43:46.834823 containerd[1618]: time="2026-01-14T23:43:46.834778309Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 23:43:46.835819 containerd[1618]: time="2026-01-14T23:43:46.835777459Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 560.086416ms" Jan 14 23:43:46.838004 containerd[1618]: time="2026-01-14T23:43:46.837957555Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 553.168457ms" Jan 14 23:43:46.841212 containerd[1618]: time="2026-01-14T23:43:46.841115171Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 552.210769ms" Jan 14 23:43:46.867846 kubelet[2473]: E0114 23:43:46.867146 2473 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://46.224.65.210:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 46.224.65.210:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 23:43:46.879012 containerd[1618]: time="2026-01-14T23:43:46.878959262Z" level=info msg="connecting to shim 37ca0cdf1d5a527c58770cb295c5b573549988d1e4f189edb1b14ad726bcd1f1" address="unix:///run/containerd/s/41a403dc18ca5833b3ac5701d90e071a67f85a28269ba3e3796cc119d8ff30c7" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:43:46.890098 containerd[1618]: time="2026-01-14T23:43:46.889695739Z" level=info msg="connecting to shim e3074a8842a3de1fc6568606a835c8fc8ad5b8fe6738c744fb83652109ffe3ec" address="unix:///run/containerd/s/c47fad003f49b983c143f8098f28cab5a51fb8c16883dc74a8c8adc24c6215f2" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:43:46.900688 containerd[1618]: time="2026-01-14T23:43:46.900630287Z" level=info msg="connecting to shim a738122a7f341e546e43965b9d9c77a0e4e63e24c2ec15fe8e108ab45bfc4933" address="unix:///run/containerd/s/51438c94343faa25432423961033a541073ef33d9e92994c67d6ececddba351a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:43:46.928795 systemd[1]: Started cri-containerd-37ca0cdf1d5a527c58770cb295c5b573549988d1e4f189edb1b14ad726bcd1f1.scope - libcontainer container 37ca0cdf1d5a527c58770cb295c5b573549988d1e4f189edb1b14ad726bcd1f1. Jan 14 23:43:46.931778 systemd[1]: Started cri-containerd-e3074a8842a3de1fc6568606a835c8fc8ad5b8fe6738c744fb83652109ffe3ec.scope - libcontainer container e3074a8842a3de1fc6568606a835c8fc8ad5b8fe6738c744fb83652109ffe3ec. Jan 14 23:43:46.952843 systemd[1]: Started cri-containerd-a738122a7f341e546e43965b9d9c77a0e4e63e24c2ec15fe8e108ab45bfc4933.scope - libcontainer container a738122a7f341e546e43965b9d9c77a0e4e63e24c2ec15fe8e108ab45bfc4933. 
Jan 14 23:43:46.962000 audit: BPF prog-id=83 op=LOAD Jan 14 23:43:46.964000 audit: BPF prog-id=84 op=LOAD Jan 14 23:43:46.964000 audit: BPF prog-id=85 op=LOAD Jan 14 23:43:46.964000 audit[2562]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2521 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.964000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337636130636466316435613532376335383737306362323935633562 Jan 14 23:43:46.965000 audit: BPF prog-id=85 op=UNLOAD Jan 14 23:43:46.965000 audit[2562]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337636130636466316435613532376335383737306362323935633562 Jan 14 23:43:46.965000 audit: BPF prog-id=86 op=LOAD Jan 14 23:43:46.965000 audit[2562]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2521 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.966000 audit: BPF prog-id=87 op=LOAD Jan 14 23:43:46.966000 audit[2559]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2537 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533303734613838343261336465316663363536383630366138333563 Jan 14 23:43:46.966000 audit: BPF prog-id=87 op=UNLOAD Jan 14 23:43:46.966000 audit[2559]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.965000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337636130636466316435613532376335383737306362323935633562 Jan 14 23:43:46.966000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533303734613838343261336465316663363536383630366138333563 Jan 14 23:43:46.967000 audit: BPF prog-id=88 
op=LOAD Jan 14 23:43:46.967000 audit: BPF prog-id=89 op=LOAD Jan 14 23:43:46.967000 audit[2562]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2521 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337636130636466316435613532376335383737306362323935633562 Jan 14 23:43:46.967000 audit: BPF prog-id=88 op=UNLOAD Jan 14 23:43:46.967000 audit[2559]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2537 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.967000 audit[2562]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533303734613838343261336465316663363536383630366138333563 Jan 14 23:43:46.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337636130636466316435613532376335383737306362323935633562 Jan 14 23:43:46.968000 audit: BPF prog-id=86 op=UNLOAD Jan 14 23:43:46.968000 audit[2562]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.967000 audit: BPF prog-id=90 op=LOAD Jan 14 23:43:46.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337636130636466316435613532376335383737306362323935633562 Jan 14 23:43:46.967000 audit[2559]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2537 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.967000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533303734613838343261336465316663363536383630366138333563 Jan 14 23:43:46.969000 audit: BPF prog-id=90 op=UNLOAD Jan 14 23:43:46.969000 audit[2559]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2559 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533303734613838343261336465316663363536383630366138333563 Jan 14 23:43:46.969000 audit: BPF prog-id=89 op=UNLOAD Jan 14 23:43:46.969000 audit[2559]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533303734613838343261336465316663363536383630366138333563 Jan 14 23:43:46.969000 audit: BPF prog-id=91 op=LOAD Jan 14 23:43:46.969000 audit[2559]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2537 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533303734613838343261336465316663363536383630366138333563 Jan 14 23:43:46.969000 audit: BPF prog-id=92 op=LOAD Jan 14 23:43:46.969000 audit[2562]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2521 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337636130636466316435613532376335383737306362323935633562 Jan 14 23:43:46.971000 audit: BPF prog-id=93 op=LOAD Jan 14 23:43:46.973000 audit: BPF prog-id=94 op=LOAD Jan 14 23:43:46.973000 audit[2595]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e180 a2=98 a3=0 items=0 ppid=2557 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333831323261376633343165353436653433393635623964396337 Jan 14 23:43:46.973000 audit: BPF prog-id=94 op=UNLOAD Jan 14 23:43:46.973000 audit[2595]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333831323261376633343165353436653433393635623964396337 Jan 14 23:43:46.973000 audit: BPF prog-id=95 op=LOAD Jan 14 23:43:46.973000 audit[2595]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e3e8 a2=98 a3=0 items=0 ppid=2557 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333831323261376633343165353436653433393635623964396337 Jan 14 23:43:46.973000 audit: BPF prog-id=96 op=LOAD Jan 14 23:43:46.973000 audit[2595]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400017e168 a2=98 a3=0 items=0 ppid=2557 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333831323261376633343165353436653433393635623964396337 Jan 14 23:43:46.974000 audit: BPF prog-id=96 op=UNLOAD Jan 14 23:43:46.974000 audit[2595]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333831323261376633343165353436653433393635623964396337 Jan 14 23:43:46.974000 audit: BPF prog-id=95 op=UNLOAD Jan 14 23:43:46.974000 audit[2595]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.974000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333831323261376633343165353436653433393635623964396337 Jan 14 23:43:46.974000 audit: BPF prog-id=97 op=LOAD Jan 14 23:43:46.974000 audit[2595]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400017e648 a2=98 a3=0 items=0 ppid=2557 pid=2595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.974000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6137333831323261376633343165353436653433393635623964396337 Jan 14 23:43:46.984338 kubelet[2473]: E0114 23:43:46.984279 2473 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://46.224.65.210:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 46.224.65.210:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 23:43:47.011149 containerd[1618]: time="2026-01-14T23:43:47.011102903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8,Uid:68687038592293bf0ef01c88c89cd030,Namespace:kube-system,Attempt:0,} returns sandbox id \"37ca0cdf1d5a527c58770cb295c5b573549988d1e4f189edb1b14ad726bcd1f1\"" Jan 14 23:43:47.023582 containerd[1618]: time="2026-01-14T23:43:47.021702320Z" level=info msg="CreateContainer within sandbox \"37ca0cdf1d5a527c58770cb295c5b573549988d1e4f189edb1b14ad726bcd1f1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 23:43:47.028812 containerd[1618]: time="2026-01-14T23:43:47.028762454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8,Uid:9687f236b56b806f01d03b989c661af3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3074a8842a3de1fc6568606a835c8fc8ad5b8fe6738c744fb83652109ffe3ec\"" Jan 14 23:43:47.035241 containerd[1618]: time="2026-01-14T23:43:47.035108349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515-1-0-n-ec6f9a8ce8,Uid:02934806079c707e2d96b6eb44a1010b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a738122a7f341e546e43965b9d9c77a0e4e63e24c2ec15fe8e108ab45bfc4933\"" Jan 14 23:43:47.038211 containerd[1618]: time="2026-01-14T23:43:47.038109005Z" level=info msg="CreateContainer within sandbox \"e3074a8842a3de1fc6568606a835c8fc8ad5b8fe6738c744fb83652109ffe3ec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 23:43:47.041489 containerd[1618]: time="2026-01-14T23:43:47.041423017Z" level=info msg="CreateContainer within sandbox \"a738122a7f341e546e43965b9d9c77a0e4e63e24c2ec15fe8e108ab45bfc4933\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 23:43:47.042065 containerd[1618]: time="2026-01-14T23:43:47.042037335Z" level=info msg="Container e1fb0da76f754522577f47202e7f0d91f79e07e81f174f3f47d992e50e53a23b: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:43:47.050333 containerd[1618]: time="2026-01-14T23:43:47.050287079Z" level=info msg="Container d3144b1085d574571e343d28a40689066b9b34ec393d02cc404a80e715aa0633: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:43:47.053183 containerd[1618]: time="2026-01-14T23:43:47.053139514Z" level=info msg="CreateContainer within sandbox \"37ca0cdf1d5a527c58770cb295c5b573549988d1e4f189edb1b14ad726bcd1f1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e1fb0da76f754522577f47202e7f0d91f79e07e81f174f3f47d992e50e53a23b\"" Jan 14 23:43:47.054932 containerd[1618]: time="2026-01-14T23:43:47.054890183Z" level=info msg="StartContainer for \"e1fb0da76f754522577f47202e7f0d91f79e07e81f174f3f47d992e50e53a23b\"" Jan 14 23:43:47.056208 containerd[1618]: time="2026-01-14T23:43:47.056172717Z" level=info msg="connecting to shim 
e1fb0da76f754522577f47202e7f0d91f79e07e81f174f3f47d992e50e53a23b" address="unix:///run/containerd/s/41a403dc18ca5833b3ac5701d90e071a67f85a28269ba3e3796cc119d8ff30c7" protocol=ttrpc version=3 Jan 14 23:43:47.060264 containerd[1618]: time="2026-01-14T23:43:47.060201607Z" level=info msg="CreateContainer within sandbox \"e3074a8842a3de1fc6568606a835c8fc8ad5b8fe6738c744fb83652109ffe3ec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d3144b1085d574571e343d28a40689066b9b34ec393d02cc404a80e715aa0633\"" Jan 14 23:43:47.061357 containerd[1618]: time="2026-01-14T23:43:47.061305451Z" level=info msg="Container 7566c3824af1a4db46258ba4854251486a8fd9daf1f3e5d02a2a825d4811b15b: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:43:47.062959 containerd[1618]: time="2026-01-14T23:43:47.062894544Z" level=info msg="StartContainer for \"d3144b1085d574571e343d28a40689066b9b34ec393d02cc404a80e715aa0633\"" Jan 14 23:43:47.064842 containerd[1618]: time="2026-01-14T23:43:47.064800432Z" level=info msg="connecting to shim d3144b1085d574571e343d28a40689066b9b34ec393d02cc404a80e715aa0633" address="unix:///run/containerd/s/c47fad003f49b983c143f8098f28cab5a51fb8c16883dc74a8c8adc24c6215f2" protocol=ttrpc version=3 Jan 14 23:43:47.074013 containerd[1618]: time="2026-01-14T23:43:47.073932348Z" level=info msg="CreateContainer within sandbox \"a738122a7f341e546e43965b9d9c77a0e4e63e24c2ec15fe8e108ab45bfc4933\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7566c3824af1a4db46258ba4854251486a8fd9daf1f3e5d02a2a825d4811b15b\"" Jan 14 23:43:47.079147 containerd[1618]: time="2026-01-14T23:43:47.078541689Z" level=info msg="StartContainer for \"7566c3824af1a4db46258ba4854251486a8fd9daf1f3e5d02a2a825d4811b15b\"" Jan 14 23:43:47.081991 systemd[1]: Started cri-containerd-e1fb0da76f754522577f47202e7f0d91f79e07e81f174f3f47d992e50e53a23b.scope - libcontainer container e1fb0da76f754522577f47202e7f0d91f79e07e81f174f3f47d992e50e53a23b. Jan 14 23:43:47.090096 containerd[1618]: time="2026-01-14T23:43:47.090049267Z" level=info msg="connecting to shim 7566c3824af1a4db46258ba4854251486a8fd9daf1f3e5d02a2a825d4811b15b" address="unix:///run/containerd/s/51438c94343faa25432423961033a541073ef33d9e92994c67d6ececddba351a" protocol=ttrpc version=3 Jan 14 23:43:47.092708 systemd[1]: Started cri-containerd-d3144b1085d574571e343d28a40689066b9b34ec393d02cc404a80e715aa0633.scope - libcontainer container d3144b1085d574571e343d28a40689066b9b34ec393d02cc404a80e715aa0633. 
Jan 14 23:43:47.108000 audit: BPF prog-id=98 op=LOAD Jan 14 23:43:47.109000 audit: BPF prog-id=99 op=LOAD Jan 14 23:43:47.109000 audit[2657]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2521 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531666230646137366637353435323235373766343732303265376630 Jan 14 23:43:47.110000 audit: BPF prog-id=99 op=UNLOAD Jan 14 23:43:47.110000 audit[2657]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531666230646137366637353435323235373766343732303265376630 Jan 14 23:43:47.110000 audit: BPF prog-id=100 op=LOAD Jan 14 23:43:47.110000 audit[2657]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2521 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531666230646137366637353435323235373766343732303265376630 Jan 14 23:43:47.111000 audit: BPF prog-id=101 op=LOAD Jan 14 23:43:47.111000 audit[2657]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2521 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531666230646137366637353435323235373766343732303265376630 Jan 14 23:43:47.112000 audit: BPF prog-id=101 op=UNLOAD Jan 14 23:43:47.112000 audit[2657]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531666230646137366637353435323235373766343732303265376630 Jan 14 23:43:47.112000 audit: BPF prog-id=100 op=UNLOAD Jan 14 23:43:47.112000 audit[2657]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2521 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531666230646137366637353435323235373766343732303265376630 Jan 14 23:43:47.112000 audit: BPF prog-id=102 op=LOAD Jan 14 23:43:47.112000 audit[2657]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2521 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.112000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531666230646137366637353435323235373766343732303265376630 Jan 14 23:43:47.122886 systemd[1]: Started cri-containerd-7566c3824af1a4db46258ba4854251486a8fd9daf1f3e5d02a2a825d4811b15b.scope - libcontainer container 7566c3824af1a4db46258ba4854251486a8fd9daf1f3e5d02a2a825d4811b15b. Jan 14 23:43:47.126000 audit: BPF prog-id=103 op=LOAD Jan 14 23:43:47.130000 audit: BPF prog-id=104 op=LOAD Jan 14 23:43:47.130000 audit[2663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2537 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313434623130383564353734353731653334336432386134303638 Jan 14 23:43:47.130000 audit: BPF prog-id=104 op=UNLOAD Jan 14 23:43:47.130000 audit[2663]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313434623130383564353734353731653334336432386134303638 Jan 14 23:43:47.131000 audit: BPF prog-id=105 op=LOAD Jan 14 23:43:47.131000 audit[2663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2537 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.131000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313434623130383564353734353731653334336432386134303638 Jan 14 23:43:47.131000 audit: BPF prog-id=106 op=LOAD Jan 14 23:43:47.131000 audit[2663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=2537 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313434623130383564353734353731653334336432386134303638 Jan 14 23:43:47.131000 audit: BPF prog-id=106 op=UNLOAD Jan 14 23:43:47.131000 audit[2663]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313434623130383564353734353731653334336432386134303638 Jan 14 23:43:47.131000 audit: BPF prog-id=105 op=UNLOAD Jan 14 23:43:47.131000 audit[2663]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313434623130383564353734353731653334336432386134303638 Jan 14 23:43:47.131000 audit: BPF prog-id=107 op=LOAD Jan 14 23:43:47.131000 audit[2663]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=2537 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.131000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433313434623130383564353734353731653334336432386134303638 Jan 14 23:43:47.161130 kubelet[2473]: E0114 23:43:47.160984 2473 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://46.224.65.210:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 46.224.65.210:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 23:43:47.175665 containerd[1618]: time="2026-01-14T23:43:47.175617379Z" level=info msg="StartContainer for 
\"e1fb0da76f754522577f47202e7f0d91f79e07e81f174f3f47d992e50e53a23b\" returns successfully" Jan 14 23:43:47.178000 audit: BPF prog-id=108 op=LOAD Jan 14 23:43:47.179000 audit: BPF prog-id=109 op=LOAD Jan 14 23:43:47.179000 audit[2686]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000228180 a2=98 a3=0 items=0 ppid=2557 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735363663333832346166316134646234363235386261343835343235 Jan 14 23:43:47.179000 audit: BPF prog-id=109 op=UNLOAD Jan 14 23:43:47.179000 audit[2686]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735363663333832346166316134646234363235386261343835343235 Jan 14 23:43:47.179000 audit: BPF prog-id=110 op=LOAD Jan 14 23:43:47.179000 audit[2686]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002283e8 a2=98 a3=0 items=0 ppid=2557 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735363663333832346166316134646234363235386261343835343235 Jan 14 23:43:47.179000 audit: BPF prog-id=111 op=LOAD Jan 14 23:43:47.179000 audit[2686]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000228168 a2=98 a3=0 items=0 ppid=2557 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735363663333832346166316134646234363235386261343835343235 Jan 14 23:43:47.179000 audit: BPF prog-id=111 op=UNLOAD Jan 14 23:43:47.179000 audit[2686]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735363663333832346166316134646234363235386261343835343235 Jan 14 
23:43:47.179000 audit: BPF prog-id=110 op=UNLOAD Jan 14 23:43:47.179000 audit[2686]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735363663333832346166316134646234363235386261343835343235 Jan 14 23:43:47.179000 audit: BPF prog-id=112 op=LOAD Jan 14 23:43:47.179000 audit[2686]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000228648 a2=98 a3=0 items=0 ppid=2557 pid=2686 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735363663333832346166316134646234363235386261343835343235 Jan 14 23:43:47.197088 containerd[1618]: time="2026-01-14T23:43:47.197042843Z" level=info msg="StartContainer for \"d3144b1085d574571e343d28a40689066b9b34ec393d02cc404a80e715aa0633\" returns successfully" Jan 14 23:43:47.219045 kubelet[2473]: E0114 23:43:47.218971 2473 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://46.224.65.210:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-ec6f9a8ce8?timeout=10s\": dial tcp 46.224.65.210:6443: connect: connection refused" interval="1.6s" Jan 14 23:43:47.246160 containerd[1618]: time="2026-01-14T23:43:47.246121955Z" level=info msg="StartContainer for \"7566c3824af1a4db46258ba4854251486a8fd9daf1f3e5d02a2a825d4811b15b\" returns successfully" Jan 14 23:43:47.413438 kubelet[2473]: I0114 23:43:47.412423 2473 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:47.856494 kubelet[2473]: E0114 23:43:47.855632 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-ec6f9a8ce8\" not found" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:47.863817 kubelet[2473]: E0114 23:43:47.863723 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-ec6f9a8ce8\" not found" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:47.867939 kubelet[2473]: E0114 23:43:47.867731 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-ec6f9a8ce8\" not found" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:48.872095 kubelet[2473]: E0114 23:43:48.871688 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-ec6f9a8ce8\" not found" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:48.873184 kubelet[2473]: E0114 23:43:48.872837 2473 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-ec6f9a8ce8\" not found" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:49.786699 kubelet[2473]: I0114 23:43:49.786649 2473 
apiserver.go:52] "Watching apiserver" Jan 14 23:43:49.794731 kubelet[2473]: E0114 23:43:49.794687 2473 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515-1-0-n-ec6f9a8ce8\" not found" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:49.814497 kubelet[2473]: I0114 23:43:49.813435 2473 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 14 23:43:49.974113 kubelet[2473]: I0114 23:43:49.974067 2473 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:49.974113 kubelet[2473]: E0114 23:43:49.974121 2473 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-ec6f9a8ce8\": node \"ci-4515-1-0-n-ec6f9a8ce8\" not found" Jan 14 23:43:50.012388 kubelet[2473]: I0114 23:43:50.012263 2473 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:50.043060 kubelet[2473]: E0114 23:43:50.042667 2473 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:50.043060 kubelet[2473]: I0114 23:43:50.042703 2473 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:50.050631 kubelet[2473]: E0114 23:43:50.050586 2473 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:50.050631 kubelet[2473]: I0114 23:43:50.050626 2473 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:50.058922 kubelet[2473]: E0114 23:43:50.058873 2473 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515-1-0-n-ec6f9a8ce8\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:51.082510 kubelet[2473]: I0114 23:43:51.082457 2473 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:52.376662 systemd[1]: Reload requested from client PID 2761 ('systemctl') (unit session-7.scope)... Jan 14 23:43:52.376690 systemd[1]: Reloading... Jan 14 23:43:52.509486 zram_generator::config[2820]: No configuration found. Jan 14 23:43:52.578902 kubelet[2473]: I0114 23:43:52.578865 2473 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:52.716351 systemd[1]: Reloading finished in 339 ms. Jan 14 23:43:52.766833 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:43:52.780333 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 23:43:52.781327 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:43:52.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:52.781886 systemd[1]: kubelet.service: Consumed 885ms CPU time, 121.5M memory peak. 
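
The PROCTITLE fields in the runc audit records above are the process command line, hex-encoded with NUL bytes separating the arguments. A minimal decoding sketch in Python (the hex value is copied verbatim from one of the audit records above; the trailing container ID is truncated exactly as the kernel truncated the proctitle field):

# Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
# The value below is copied from one of the runc audit records above.
proctitle_hex = (
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
    "002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E7275"
    "6E74696D652E76322E7461736B2F6B38732E696F2F373536366333383234616631613464623436"
    "3235386261343835343235"
)
argv = bytes.fromhex(proctitle_hex).split(b"\x00")
print(" ".join(a.decode() for a in argv))
# -> runc --root /run/containerd/runc/k8s.io
#         --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/7566c3824af1a4db46258ba485425
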
Jan 14 23:43:52.782469 kernel: kauditd_printk_skb: 204 callbacks suppressed Jan 14 23:43:52.782574 kernel: audit: type=1131 audit(1768434232.781:402): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:52.787697 kernel: audit: type=1334 audit(1768434232.786:403): prog-id=113 op=LOAD Jan 14 23:43:52.786000 audit: BPF prog-id=113 op=LOAD Jan 14 23:43:52.785750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:43:52.786000 audit: BPF prog-id=76 op=UNLOAD Jan 14 23:43:52.788503 kernel: audit: type=1334 audit(1768434232.786:404): prog-id=76 op=UNLOAD Jan 14 23:43:52.788561 kernel: audit: type=1334 audit(1768434232.786:405): prog-id=114 op=LOAD Jan 14 23:43:52.786000 audit: BPF prog-id=114 op=LOAD Jan 14 23:43:52.786000 audit: BPF prog-id=115 op=LOAD Jan 14 23:43:52.789489 kernel: audit: type=1334 audit(1768434232.786:406): prog-id=115 op=LOAD Jan 14 23:43:52.786000 audit: BPF prog-id=77 op=UNLOAD Jan 14 23:43:52.786000 audit: BPF prog-id=78 op=UNLOAD Jan 14 23:43:52.791622 kernel: audit: type=1334 audit(1768434232.786:407): prog-id=77 op=UNLOAD Jan 14 23:43:52.791687 kernel: audit: type=1334 audit(1768434232.786:408): prog-id=78 op=UNLOAD Jan 14 23:43:52.788000 audit: BPF prog-id=116 op=LOAD Jan 14 23:43:52.788000 audit: BPF prog-id=70 op=UNLOAD Jan 14 23:43:52.793478 kernel: audit: type=1334 audit(1768434232.788:409): prog-id=116 op=LOAD Jan 14 23:43:52.793521 kernel: audit: type=1334 audit(1768434232.788:410): prog-id=70 op=UNLOAD Jan 14 23:43:52.789000 audit: BPF prog-id=117 op=LOAD Jan 14 23:43:52.789000 audit: BPF prog-id=118 op=LOAD Jan 14 23:43:52.789000 audit: BPF prog-id=71 op=UNLOAD Jan 14 23:43:52.789000 audit: BPF prog-id=72 op=UNLOAD Jan 14 23:43:52.790000 audit: BPF prog-id=119 op=LOAD Jan 14 23:43:52.790000 audit: BPF prog-id=66 op=UNLOAD Jan 14 23:43:52.793000 audit: BPF prog-id=120 op=LOAD Jan 14 23:43:52.793000 audit: BPF prog-id=67 op=UNLOAD Jan 14 23:43:52.793000 audit: BPF prog-id=121 op=LOAD Jan 14 23:43:52.793000 audit: BPF prog-id=122 op=LOAD Jan 14 23:43:52.793000 audit: BPF prog-id=68 op=UNLOAD Jan 14 23:43:52.793000 audit: BPF prog-id=69 op=UNLOAD Jan 14 23:43:52.794000 audit: BPF prog-id=123 op=LOAD Jan 14 23:43:52.794000 audit: BPF prog-id=79 op=UNLOAD Jan 14 23:43:52.795451 kernel: audit: type=1334 audit(1768434232.789:411): prog-id=117 op=LOAD Jan 14 23:43:52.795000 audit: BPF prog-id=124 op=LOAD Jan 14 23:43:52.795000 audit: BPF prog-id=125 op=LOAD Jan 14 23:43:52.795000 audit: BPF prog-id=80 op=UNLOAD Jan 14 23:43:52.795000 audit: BPF prog-id=81 op=UNLOAD Jan 14 23:43:52.796000 audit: BPF prog-id=126 op=LOAD Jan 14 23:43:52.796000 audit: BPF prog-id=82 op=UNLOAD Jan 14 23:43:52.802000 audit: BPF prog-id=127 op=LOAD Jan 14 23:43:52.802000 audit: BPF prog-id=63 op=UNLOAD Jan 14 23:43:52.802000 audit: BPF prog-id=128 op=LOAD Jan 14 23:43:52.802000 audit: BPF prog-id=129 op=LOAD Jan 14 23:43:52.803000 audit: BPF prog-id=64 op=UNLOAD Jan 14 23:43:52.803000 audit: BPF prog-id=65 op=UNLOAD Jan 14 23:43:52.804000 audit: BPF prog-id=130 op=LOAD Jan 14 23:43:52.804000 audit: BPF prog-id=75 op=UNLOAD Jan 14 23:43:52.806000 audit: BPF prog-id=131 op=LOAD Jan 14 23:43:52.806000 audit: BPF prog-id=132 op=LOAD Jan 14 23:43:52.806000 audit: BPF prog-id=73 op=UNLOAD Jan 14 23:43:52.806000 audit: BPF prog-id=74 op=UNLOAD Jan 14 23:43:52.981600 systemd[1]: Started kubelet.service - kubelet: The 
Kubernetes Node Agent. Jan 14 23:43:52.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:52.996313 (kubelet)[2853]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 23:43:53.053428 kubelet[2853]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 23:43:53.053428 kubelet[2853]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 23:43:53.053428 kubelet[2853]: I0114 23:43:53.052466 2853 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 23:43:53.063437 kubelet[2853]: I0114 23:43:53.062162 2853 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 14 23:43:53.064444 kubelet[2853]: I0114 23:43:53.063652 2853 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 23:43:53.064444 kubelet[2853]: I0114 23:43:53.063717 2853 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 14 23:43:53.064444 kubelet[2853]: I0114 23:43:53.063725 2853 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 23:43:53.064444 kubelet[2853]: I0114 23:43:53.064023 2853 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 23:43:53.065501 kubelet[2853]: I0114 23:43:53.065465 2853 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 23:43:53.068469 kubelet[2853]: I0114 23:43:53.068429 2853 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 23:43:53.073898 kubelet[2853]: I0114 23:43:53.073847 2853 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 23:43:53.077429 kubelet[2853]: I0114 23:43:53.077320 2853 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 14 23:43:53.077702 kubelet[2853]: I0114 23:43:53.077596 2853 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 23:43:53.077998 kubelet[2853]: I0114 23:43:53.077680 2853 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515-1-0-n-ec6f9a8ce8","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 23:43:53.077998 kubelet[2853]: I0114 23:43:53.077990 2853 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 23:43:53.077998 kubelet[2853]: I0114 23:43:53.078001 2853 container_manager_linux.go:306] "Creating device plugin manager" Jan 14 23:43:53.078188 kubelet[2853]: I0114 23:43:53.078119 2853 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 14 23:43:53.082944 kubelet[2853]: I0114 23:43:53.082690 2853 state_mem.go:36] "Initialized new in-memory state store" Jan 14 23:43:53.083103 kubelet[2853]: I0114 23:43:53.082993 2853 kubelet.go:475] "Attempting to sync node with API server" Jan 14 23:43:53.083103 kubelet[2853]: I0114 23:43:53.083009 2853 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 23:43:53.083103 kubelet[2853]: I0114 23:43:53.083038 2853 kubelet.go:387] "Adding apiserver pod source" Jan 14 23:43:53.083103 kubelet[2853]: I0114 23:43:53.083061 2853 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 23:43:53.091421 kubelet[2853]: I0114 23:43:53.089566 2853 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 23:43:53.091421 kubelet[2853]: I0114 23:43:53.090185 2853 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 23:43:53.091421 kubelet[2853]: I0114 23:43:53.090212 2853 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 14 
23:43:53.096518 kubelet[2853]: I0114 23:43:53.095286 2853 server.go:1262] "Started kubelet" Jan 14 23:43:53.096518 kubelet[2853]: I0114 23:43:53.095989 2853 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 23:43:53.096518 kubelet[2853]: I0114 23:43:53.096096 2853 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 23:43:53.100001 kubelet[2853]: I0114 23:43:53.098885 2853 server.go:310] "Adding debug handlers to kubelet server" Jan 14 23:43:53.106643 kubelet[2853]: I0114 23:43:53.105909 2853 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 23:43:53.107410 kubelet[2853]: I0114 23:43:53.106825 2853 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 14 23:43:53.107713 kubelet[2853]: I0114 23:43:53.107692 2853 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 23:43:53.107921 kubelet[2853]: I0114 23:43:53.107373 2853 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 14 23:43:53.108023 kubelet[2853]: E0114 23:43:53.107759 2853 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4515-1-0-n-ec6f9a8ce8\" not found" Jan 14 23:43:53.108807 kubelet[2853]: I0114 23:43:53.108377 2853 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 23:43:53.111582 kubelet[2853]: I0114 23:43:53.107362 2853 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 14 23:43:53.112283 kubelet[2853]: I0114 23:43:53.112267 2853 reconciler.go:29] "Reconciler: start to sync state" Jan 14 23:43:53.112381 kubelet[2853]: I0114 23:43:53.112274 2853 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 14 23:43:53.115080 kubelet[2853]: I0114 23:43:53.115039 2853 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 14 23:43:53.115303 kubelet[2853]: I0114 23:43:53.115236 2853 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 14 23:43:53.115303 kubelet[2853]: I0114 23:43:53.115270 2853 kubelet.go:2427] "Starting kubelet main sync loop" Jan 14 23:43:53.115544 kubelet[2853]: E0114 23:43:53.115482 2853 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 23:43:53.134111 kubelet[2853]: I0114 23:43:53.133873 2853 factory.go:223] Registration of the systemd container factory successfully Jan 14 23:43:53.134111 kubelet[2853]: I0114 23:43:53.134009 2853 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 23:43:53.146312 kubelet[2853]: I0114 23:43:53.146276 2853 factory.go:223] Registration of the containerd container factory successfully Jan 14 23:43:53.212548 kubelet[2853]: I0114 23:43:53.212498 2853 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 23:43:53.212548 kubelet[2853]: I0114 23:43:53.212523 2853 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 23:43:53.212548 kubelet[2853]: I0114 23:43:53.212546 2853 state_mem.go:36] "Initialized new in-memory state store" Jan 14 23:43:53.212770 kubelet[2853]: I0114 23:43:53.212726 2853 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 23:43:53.212770 kubelet[2853]: I0114 23:43:53.212737 2853 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 23:43:53.212770 kubelet[2853]: I0114 23:43:53.212754 2853 policy_none.go:49] "None policy: Start" Jan 14 23:43:53.212770 kubelet[2853]: I0114 23:43:53.212763 2853 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 14 23:43:53.212770 kubelet[2853]: I0114 23:43:53.212772 2853 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 14 23:43:53.212914 kubelet[2853]: I0114 23:43:53.212879 2853 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 14 23:43:53.212914 kubelet[2853]: I0114 23:43:53.212888 2853 policy_none.go:47] "Start" Jan 14 23:43:53.216514 kubelet[2853]: E0114 23:43:53.216478 2853 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 14 23:43:53.219150 kubelet[2853]: E0114 23:43:53.218562 2853 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 23:43:53.219150 kubelet[2853]: I0114 23:43:53.218882 2853 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 23:43:53.219150 kubelet[2853]: I0114 23:43:53.218912 2853 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 23:43:53.220608 kubelet[2853]: I0114 23:43:53.220322 2853 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 23:43:53.224961 kubelet[2853]: E0114 23:43:53.224933 2853 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 23:43:53.332791 kubelet[2853]: I0114 23:43:53.332518 2853 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:53.343103 kubelet[2853]: I0114 23:43:53.343041 2853 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:53.343229 kubelet[2853]: I0114 23:43:53.343139 2853 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:53.417874 kubelet[2853]: I0114 23:43:53.417804 2853 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:53.418851 kubelet[2853]: I0114 23:43:53.418816 2853 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:53.419699 kubelet[2853]: I0114 23:43:53.419154 2853 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:53.427065 kubelet[2853]: E0114 23:43:53.427016 2853 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515-1-0-n-ec6f9a8ce8\" already exists" pod="kube-system/kube-scheduler-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:53.432867 kubelet[2853]: E0114 23:43:53.432737 2853 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8\" already exists" pod="kube-system/kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:53.514371 kubelet[2853]: I0114 23:43:53.514055 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/68687038592293bf0ef01c88c89cd030-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8\" (UID: \"68687038592293bf0ef01c88c89cd030\") " pod="kube-system/kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:53.514371 kubelet[2853]: I0114 23:43:53.514164 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9687f236b56b806f01d03b989c661af3-ca-certs\") pod \"kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8\" (UID: \"9687f236b56b806f01d03b989c661af3\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:53.514371 kubelet[2853]: I0114 23:43:53.514197 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9687f236b56b806f01d03b989c661af3-flexvolume-dir\") pod \"kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8\" (UID: \"9687f236b56b806f01d03b989c661af3\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:53.514371 kubelet[2853]: I0114 23:43:53.514213 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9687f236b56b806f01d03b989c661af3-k8s-certs\") pod \"kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8\" (UID: \"9687f236b56b806f01d03b989c661af3\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:53.514371 kubelet[2853]: I0114 23:43:53.514231 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/9687f236b56b806f01d03b989c661af3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8\" (UID: \"9687f236b56b806f01d03b989c661af3\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:53.514628 kubelet[2853]: I0114 23:43:53.514255 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/02934806079c707e2d96b6eb44a1010b-kubeconfig\") pod \"kube-scheduler-ci-4515-1-0-n-ec6f9a8ce8\" (UID: \"02934806079c707e2d96b6eb44a1010b\") " pod="kube-system/kube-scheduler-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:53.514628 kubelet[2853]: I0114 23:43:53.514278 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/68687038592293bf0ef01c88c89cd030-ca-certs\") pod \"kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8\" (UID: \"68687038592293bf0ef01c88c89cd030\") " pod="kube-system/kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:53.514628 kubelet[2853]: I0114 23:43:53.514294 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9687f236b56b806f01d03b989c661af3-kubeconfig\") pod \"kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8\" (UID: \"9687f236b56b806f01d03b989c661af3\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:53.514628 kubelet[2853]: I0114 23:43:53.514322 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/68687038592293bf0ef01c88c89cd030-k8s-certs\") pod \"kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8\" (UID: \"68687038592293bf0ef01c88c89cd030\") " pod="kube-system/kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:54.084600 kubelet[2853]: I0114 23:43:54.084550 2853 apiserver.go:52] "Watching apiserver" Jan 14 23:43:54.108517 kubelet[2853]: I0114 23:43:54.108455 2853 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 14 23:43:54.203733 kubelet[2853]: I0114 23:43:54.203696 2853 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:54.214777 kubelet[2853]: E0114 23:43:54.214577 2853 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8\" already exists" pod="kube-system/kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:43:54.247162 kubelet[2853]: I0114 23:43:54.247088 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515-1-0-n-ec6f9a8ce8" podStartSLOduration=2.246827199 podStartE2EDuration="2.246827199s" podCreationTimestamp="2026-01-14 23:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:43:54.233938254 +0000 UTC m=+1.231466361" watchObservedRunningTime="2026-01-14 23:43:54.246827199 +0000 UTC m=+1.244355306" Jan 14 23:43:54.248316 kubelet[2853]: I0114 23:43:54.247219 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515-1-0-n-ec6f9a8ce8" podStartSLOduration=3.247214636 podStartE2EDuration="3.247214636s" podCreationTimestamp="2026-01-14 23:43:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:43:54.246844122 +0000 UTC m=+1.244372229" watchObservedRunningTime="2026-01-14 23:43:54.247214636 +0000 UTC m=+1.244742743" Jan 14 23:43:54.263987 kubelet[2853]: I0114 23:43:54.263912 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8" podStartSLOduration=1.263743459 podStartE2EDuration="1.263743459s" podCreationTimestamp="2026-01-14 23:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:43:54.26364704 +0000 UTC m=+1.261175147" watchObservedRunningTime="2026-01-14 23:43:54.263743459 +0000 UTC m=+1.261271566" Jan 14 23:43:58.277243 kubelet[2853]: I0114 23:43:58.277091 2853 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 23:43:58.279779 containerd[1618]: time="2026-01-14T23:43:58.279492425Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 23:43:58.280613 kubelet[2853]: I0114 23:43:58.280579 2853 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 23:43:59.479778 systemd[1]: Created slice kubepods-besteffort-pod7722cab6_ddf7_413e_a2af_26ab87cfb930.slice - libcontainer container kubepods-besteffort-pod7722cab6_ddf7_413e_a2af_26ab87cfb930.slice. Jan 14 23:43:59.562156 kubelet[2853]: I0114 23:43:59.561221 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7722cab6-ddf7-413e-a2af-26ab87cfb930-xtables-lock\") pod \"kube-proxy-7t6zr\" (UID: \"7722cab6-ddf7-413e-a2af-26ab87cfb930\") " pod="kube-system/kube-proxy-7t6zr" Jan 14 23:43:59.563360 kubelet[2853]: I0114 23:43:59.562972 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7722cab6-ddf7-413e-a2af-26ab87cfb930-lib-modules\") pod \"kube-proxy-7t6zr\" (UID: \"7722cab6-ddf7-413e-a2af-26ab87cfb930\") " pod="kube-system/kube-proxy-7t6zr" Jan 14 23:43:59.563360 kubelet[2853]: I0114 23:43:59.563183 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n8ps\" (UniqueName: \"kubernetes.io/projected/7722cab6-ddf7-413e-a2af-26ab87cfb930-kube-api-access-4n8ps\") pod \"kube-proxy-7t6zr\" (UID: \"7722cab6-ddf7-413e-a2af-26ab87cfb930\") " pod="kube-system/kube-proxy-7t6zr" Jan 14 23:43:59.563360 kubelet[2853]: I0114 23:43:59.563304 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7722cab6-ddf7-413e-a2af-26ab87cfb930-kube-proxy\") pod \"kube-proxy-7t6zr\" (UID: \"7722cab6-ddf7-413e-a2af-26ab87cfb930\") " pod="kube-system/kube-proxy-7t6zr" Jan 14 23:43:59.616895 systemd[1]: Created slice kubepods-besteffort-poddabdf5b9_a466_4907_a053_0354ef9c1c8a.slice - libcontainer container kubepods-besteffort-poddabdf5b9_a466_4907_a053_0354ef9c1c8a.slice. 
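
The podStartSLOduration / podStartE2EDuration values in the pod_startup_latency_tracker entries above are simply observedRunningTime minus podCreationTimestamp when no image pull happened (firstStartedPulling and lastFinishedPulling are the zero time). A quick check of that arithmetic in Python, using the kube-controller-manager values copied from the log:

from datetime import datetime, timezone

# Values from the "Observed pod startup duration" entry for
# kube-controller-manager-ci-4515-1-0-n-ec6f9a8ce8 above.
created  = datetime(2026, 1, 14, 23, 43, 53, 0, tzinfo=timezone.utc)
# observedRunningTime 23:43:54.263743459 (datetime keeps microseconds only)
observed = datetime(2026, 1, 14, 23, 43, 54, 263743, tzinfo=timezone.utc)

print((observed - created).total_seconds())  # 1.263743 -- the log reports 1.263743459s
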
Jan 14 23:43:59.664274 kubelet[2853]: I0114 23:43:59.664180 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dabdf5b9-a466-4907-a053-0354ef9c1c8a-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-9hmtp\" (UID: \"dabdf5b9-a466-4907-a053-0354ef9c1c8a\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-9hmtp" Jan 14 23:43:59.664274 kubelet[2853]: I0114 23:43:59.664271 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwsfm\" (UniqueName: \"kubernetes.io/projected/dabdf5b9-a466-4907-a053-0354ef9c1c8a-kube-api-access-gwsfm\") pod \"tigera-operator-65cdcdfd6d-9hmtp\" (UID: \"dabdf5b9-a466-4907-a053-0354ef9c1c8a\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-9hmtp" Jan 14 23:43:59.796054 containerd[1618]: time="2026-01-14T23:43:59.795169266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7t6zr,Uid:7722cab6-ddf7-413e-a2af-26ab87cfb930,Namespace:kube-system,Attempt:0,}" Jan 14 23:43:59.822379 containerd[1618]: time="2026-01-14T23:43:59.822069725Z" level=info msg="connecting to shim b06d1571bbb2d188471f6218f799c2b3bc12294970558a4b6bed776faf833bf3" address="unix:///run/containerd/s/81c84f2da39d709f044397e4bdbddfb1c1de8d1d7fe99712cd12ea4b1e18e13d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:43:59.858862 systemd[1]: Started cri-containerd-b06d1571bbb2d188471f6218f799c2b3bc12294970558a4b6bed776faf833bf3.scope - libcontainer container b06d1571bbb2d188471f6218f799c2b3bc12294970558a4b6bed776faf833bf3. Jan 14 23:43:59.874000 audit: BPF prog-id=133 op=LOAD Jan 14 23:43:59.876611 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 23:43:59.876710 kernel: audit: type=1334 audit(1768434239.874:444): prog-id=133 op=LOAD Jan 14 23:43:59.882048 kernel: audit: type=1334 audit(1768434239.876:445): prog-id=134 op=LOAD Jan 14 23:43:59.882158 kernel: audit: type=1300 audit(1768434239.876:445): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=2911 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:59.876000 audit: BPF prog-id=134 op=LOAD Jan 14 23:43:59.876000 audit[2921]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=2911 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:59.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230366431353731626262326431383834373166363231386637393963 Jan 14 23:43:59.884787 kernel: audit: type=1327 audit(1768434239.876:445): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230366431353731626262326431383834373166363231386637393963 Jan 14 23:43:59.877000 audit: BPF prog-id=134 op=UNLOAD Jan 14 23:43:59.885638 kernel: audit: type=1334 audit(1768434239.877:446): prog-id=134 op=UNLOAD Jan 14 23:43:59.877000 audit[2921]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2911 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:59.888248 kernel: audit: type=1300 audit(1768434239.877:446): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2911 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:59.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230366431353731626262326431383834373166363231386637393963 Jan 14 23:43:59.891012 kernel: audit: type=1327 audit(1768434239.877:446): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230366431353731626262326431383834373166363231386637393963 Jan 14 23:43:59.891845 kernel: audit: type=1334 audit(1768434239.877:447): prog-id=135 op=LOAD Jan 14 23:43:59.877000 audit: BPF prog-id=135 op=LOAD Jan 14 23:43:59.877000 audit[2921]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=2911 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:59.894435 kernel: audit: type=1300 audit(1768434239.877:447): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=2911 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:59.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230366431353731626262326431383834373166363231386637393963 Jan 14 23:43:59.897061 kernel: audit: type=1327 audit(1768434239.877:447): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230366431353731626262326431383834373166363231386637393963 Jan 14 23:43:59.877000 audit: BPF prog-id=136 op=LOAD Jan 14 23:43:59.877000 audit[2921]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=2911 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:59.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230366431353731626262326431383834373166363231386637393963 Jan 14 23:43:59.880000 audit: BPF prog-id=136 op=UNLOAD Jan 14 23:43:59.880000 audit[2921]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 
a3=0 items=0 ppid=2911 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:59.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230366431353731626262326431383834373166363231386637393963 Jan 14 23:43:59.880000 audit: BPF prog-id=135 op=UNLOAD Jan 14 23:43:59.880000 audit[2921]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2911 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:59.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230366431353731626262326431383834373166363231386637393963 Jan 14 23:43:59.880000 audit: BPF prog-id=137 op=LOAD Jan 14 23:43:59.880000 audit[2921]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=2911 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:59.880000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230366431353731626262326431383834373166363231386637393963 Jan 14 23:43:59.913123 containerd[1618]: time="2026-01-14T23:43:59.913003432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7t6zr,Uid:7722cab6-ddf7-413e-a2af-26ab87cfb930,Namespace:kube-system,Attempt:0,} returns sandbox id \"b06d1571bbb2d188471f6218f799c2b3bc12294970558a4b6bed776faf833bf3\"" Jan 14 23:43:59.922830 containerd[1618]: time="2026-01-14T23:43:59.922678117Z" level=info msg="CreateContainer within sandbox \"b06d1571bbb2d188471f6218f799c2b3bc12294970558a4b6bed776faf833bf3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 23:43:59.925134 containerd[1618]: time="2026-01-14T23:43:59.925088438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-9hmtp,Uid:dabdf5b9-a466-4907-a053-0354ef9c1c8a,Namespace:tigera-operator,Attempt:0,}" Jan 14 23:43:59.936105 containerd[1618]: time="2026-01-14T23:43:59.936004268Z" level=info msg="Container 83649c4600b9e3d7120ce9a54d9718c2255737041fed8f25bff846ea50207108: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:43:59.955063 containerd[1618]: time="2026-01-14T23:43:59.954154900Z" level=info msg="CreateContainer within sandbox \"b06d1571bbb2d188471f6218f799c2b3bc12294970558a4b6bed776faf833bf3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"83649c4600b9e3d7120ce9a54d9718c2255737041fed8f25bff846ea50207108\"" Jan 14 23:43:59.956726 containerd[1618]: time="2026-01-14T23:43:59.955753299Z" level=info msg="StartContainer for \"83649c4600b9e3d7120ce9a54d9718c2255737041fed8f25bff846ea50207108\"" Jan 14 23:43:59.958384 containerd[1618]: time="2026-01-14T23:43:59.958342766Z" level=info msg="connecting to shim 
83649c4600b9e3d7120ce9a54d9718c2255737041fed8f25bff846ea50207108" address="unix:///run/containerd/s/81c84f2da39d709f044397e4bdbddfb1c1de8d1d7fe99712cd12ea4b1e18e13d" protocol=ttrpc version=3 Jan 14 23:43:59.967471 containerd[1618]: time="2026-01-14T23:43:59.967343671Z" level=info msg="connecting to shim 09b61bdb354e72be7ef867b6b29c4aba94ba63388f60b94be7d48d4d02ac0996" address="unix:///run/containerd/s/5b43a7691ca9170cb4cbef392dfb23cb3c990333593b22a7da13cf42eee60baa" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:43:59.991879 systemd[1]: Started cri-containerd-83649c4600b9e3d7120ce9a54d9718c2255737041fed8f25bff846ea50207108.scope - libcontainer container 83649c4600b9e3d7120ce9a54d9718c2255737041fed8f25bff846ea50207108. Jan 14 23:44:00.005652 systemd[1]: Started cri-containerd-09b61bdb354e72be7ef867b6b29c4aba94ba63388f60b94be7d48d4d02ac0996.scope - libcontainer container 09b61bdb354e72be7ef867b6b29c4aba94ba63388f60b94be7d48d4d02ac0996. Jan 14 23:44:00.023000 audit: BPF prog-id=138 op=LOAD Jan 14 23:44:00.024000 audit: BPF prog-id=139 op=LOAD Jan 14 23:44:00.024000 audit[2976]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2955 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039623631626462333534653732626537656638363762366232396334 Jan 14 23:44:00.024000 audit: BPF prog-id=139 op=UNLOAD Jan 14 23:44:00.024000 audit[2976]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2955 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039623631626462333534653732626537656638363762366232396334 Jan 14 23:44:00.024000 audit: BPF prog-id=140 op=LOAD Jan 14 23:44:00.024000 audit[2976]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2955 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039623631626462333534653732626537656638363762366232396334 Jan 14 23:44:00.024000 audit: BPF prog-id=141 op=LOAD Jan 14 23:44:00.024000 audit[2976]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2955 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.024000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039623631626462333534653732626537656638363762366232396334 Jan 14 23:44:00.024000 audit: BPF prog-id=141 op=UNLOAD Jan 14 23:44:00.024000 audit[2976]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2955 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039623631626462333534653732626537656638363762366232396334 Jan 14 23:44:00.024000 audit: BPF prog-id=140 op=UNLOAD Jan 14 23:44:00.024000 audit[2976]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2955 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039623631626462333534653732626537656638363762366232396334 Jan 14 23:44:00.024000 audit: BPF prog-id=142 op=LOAD Jan 14 23:44:00.024000 audit[2976]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2955 pid=2976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039623631626462333534653732626537656638363762366232396334 Jan 14 23:44:00.036000 audit: BPF prog-id=143 op=LOAD Jan 14 23:44:00.036000 audit[2952]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=2911 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833363439633436303062396533643731323063653961353464393731 Jan 14 23:44:00.036000 audit: BPF prog-id=144 op=LOAD Jan 14 23:44:00.036000 audit[2952]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=2911 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.036000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833363439633436303062396533643731323063653961353464393731 Jan 14 23:44:00.036000 audit: BPF prog-id=144 op=UNLOAD Jan 14 23:44:00.036000 audit[2952]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2911 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833363439633436303062396533643731323063653961353464393731 Jan 14 23:44:00.036000 audit: BPF prog-id=143 op=UNLOAD Jan 14 23:44:00.036000 audit[2952]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2911 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833363439633436303062396533643731323063653961353464393731 Jan 14 23:44:00.037000 audit: BPF prog-id=145 op=LOAD Jan 14 23:44:00.037000 audit[2952]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=2911 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833363439633436303062396533643731323063653961353464393731 Jan 14 23:44:00.067927 containerd[1618]: time="2026-01-14T23:44:00.067839442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-9hmtp,Uid:dabdf5b9-a466-4907-a053-0354ef9c1c8a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"09b61bdb354e72be7ef867b6b29c4aba94ba63388f60b94be7d48d4d02ac0996\"" Jan 14 23:44:00.073307 containerd[1618]: time="2026-01-14T23:44:00.073253008Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 23:44:00.084194 containerd[1618]: time="2026-01-14T23:44:00.084084981Z" level=info msg="StartContainer for \"83649c4600b9e3d7120ce9a54d9718c2255737041fed8f25bff846ea50207108\" returns successfully" Jan 14 23:44:00.247590 kubelet[2853]: I0114 23:44:00.246946 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7t6zr" podStartSLOduration=1.246924267 podStartE2EDuration="1.246924267s" podCreationTimestamp="2026-01-14 23:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:44:00.246496967 +0000 UTC m=+7.244025074" watchObservedRunningTime="2026-01-14 23:44:00.246924267 +0000 UTC m=+7.244452374" Jan 14 
23:44:00.337000 audit[3054]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.337000 audit[3054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcec1dfa0 a2=0 a3=1 items=0 ppid=2990 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.337000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 23:44:00.339000 audit[3056]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3056 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.339000 audit[3056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb152d10 a2=0 a3=1 items=0 ppid=2990 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.339000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 23:44:00.340000 audit[3057]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.340000 audit[3057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc7bdfb50 a2=0 a3=1 items=0 ppid=2990 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.340000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 23:44:00.342000 audit[3059]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.342000 audit[3059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffec4260e0 a2=0 a3=1 items=0 ppid=2990 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.342000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 23:44:00.347000 audit[3061]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3061 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.347000 audit[3061]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef99a4a0 a2=0 a3=1 items=0 ppid=2990 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.347000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 23:44:00.350000 audit[3062]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.350000 audit[3062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd2846640 a2=0 a3=1 items=0 ppid=2990 pid=3062 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.350000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 23:44:00.450000 audit[3063]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.450000 audit[3063]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff7c2cc80 a2=0 a3=1 items=0 ppid=2990 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.450000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 23:44:00.454000 audit[3065]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.454000 audit[3065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe1d9d7d0 a2=0 a3=1 items=0 ppid=2990 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.454000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Jan 14 23:44:00.459000 audit[3068]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.459000 audit[3068]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffefde78e0 a2=0 a3=1 items=0 ppid=2990 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.459000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 14 23:44:00.460000 audit[3069]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.460000 audit[3069]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcbf11640 a2=0 a3=1 items=0 ppid=2990 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.460000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 23:44:00.463000 audit[3071]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.463000 audit[3071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff951e900 a2=0 
a3=1 items=0 ppid=2990 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.463000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 23:44:00.465000 audit[3072]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.465000 audit[3072]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc36b230 a2=0 a3=1 items=0 ppid=2990 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.465000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 23:44:00.468000 audit[3074]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.468000 audit[3074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe140dec0 a2=0 a3=1 items=0 ppid=2990 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.468000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:44:00.476000 audit[3077]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3077 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.476000 audit[3077]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff0f7e000 a2=0 a3=1 items=0 ppid=2990 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.476000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:44:00.479000 audit[3078]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.479000 audit[3078]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc12a6f90 a2=0 a3=1 items=0 ppid=2990 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.479000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 23:44:00.484000 audit[3080]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3080 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 14 23:44:00.484000 audit[3080]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc15a7ea0 a2=0 a3=1 items=0 ppid=2990 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.484000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 23:44:00.486000 audit[3081]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3081 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.486000 audit[3081]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffda69aef0 a2=0 a3=1 items=0 ppid=2990 pid=3081 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.486000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 23:44:00.491000 audit[3083]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3083 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.491000 audit[3083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc66300f0 a2=0 a3=1 items=0 ppid=2990 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.491000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Jan 14 23:44:00.497000 audit[3086]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.497000 audit[3086]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc1cf0450 a2=0 a3=1 items=0 ppid=2990 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.497000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 14 23:44:00.503000 audit[3089]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.503000 audit[3089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffed86e840 a2=0 a3=1 items=0 ppid=2990 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.503000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 14 23:44:00.505000 audit[3090]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3090 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.505000 audit[3090]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffb7b93d0 a2=0 a3=1 items=0 ppid=2990 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.505000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 23:44:00.509000 audit[3092]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.509000 audit[3092]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffd1d532f0 a2=0 a3=1 items=0 ppid=2990 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.509000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:44:00.516000 audit[3095]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3095 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.516000 audit[3095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffda9ae630 a2=0 a3=1 items=0 ppid=2990 pid=3095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.516000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:44:00.520000 audit[3096]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3096 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.520000 audit[3096]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffedf6f3b0 a2=0 a3=1 items=0 ppid=2990 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.520000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 23:44:00.525000 audit[3098]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3098 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.525000 audit[3098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffe24055e0 a2=0 a3=1 items=0 ppid=2990 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
23:44:00.525000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 23:44:00.554000 audit[3104]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:00.554000 audit[3104]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe64f0780 a2=0 a3=1 items=0 ppid=2990 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.554000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:00.564000 audit[3104]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:00.564000 audit[3104]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffe64f0780 a2=0 a3=1 items=0 ppid=2990 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.564000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:00.566000 audit[3109]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.566000 audit[3109]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe5bc6fd0 a2=0 a3=1 items=0 ppid=2990 pid=3109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.566000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 23:44:00.571000 audit[3111]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.571000 audit[3111]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd809c090 a2=0 a3=1 items=0 ppid=2990 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.571000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Jan 14 23:44:00.577000 audit[3114]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.577000 audit[3114]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffeae67a00 a2=0 a3=1 items=0 ppid=2990 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 23:44:00.577000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Jan 14 23:44:00.579000 audit[3115]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.579000 audit[3115]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe3ce2890 a2=0 a3=1 items=0 ppid=2990 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.579000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 23:44:00.583000 audit[3117]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.583000 audit[3117]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcfce2900 a2=0 a3=1 items=0 ppid=2990 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.583000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 23:44:00.585000 audit[3118]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.585000 audit[3118]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda0b9130 a2=0 a3=1 items=0 ppid=2990 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.585000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 23:44:00.590000 audit[3120]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.590000 audit[3120]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdea1f980 a2=0 a3=1 items=0 ppid=2990 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.590000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:44:00.599000 audit[3123]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.599000 audit[3123]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe272dcf0 a2=0 a3=1 items=0 ppid=2990 pid=3123 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.599000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:44:00.601000 audit[3124]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.601000 audit[3124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6829e40 a2=0 a3=1 items=0 ppid=2990 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.601000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 23:44:00.605000 audit[3126]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3126 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.605000 audit[3126]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdb651950 a2=0 a3=1 items=0 ppid=2990 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.605000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 23:44:00.606000 audit[3127]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.606000 audit[3127]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcd752b80 a2=0 a3=1 items=0 ppid=2990 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.606000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 23:44:00.610000 audit[3129]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3129 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.610000 audit[3129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe2bd0170 a2=0 a3=1 items=0 ppid=2990 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.610000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Jan 14 23:44:00.615000 audit[3132]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.615000 
audit[3132]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe87a52c0 a2=0 a3=1 items=0 ppid=2990 pid=3132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.615000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Jan 14 23:44:00.621000 audit[3135]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3135 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.621000 audit[3135]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff895e590 a2=0 a3=1 items=0 ppid=2990 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.621000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Jan 14 23:44:00.622000 audit[3136]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3136 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.622000 audit[3136]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffde937b30 a2=0 a3=1 items=0 ppid=2990 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.622000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 23:44:00.627000 audit[3138]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3138 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.627000 audit[3138]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffffa294820 a2=0 a3=1 items=0 ppid=2990 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.627000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:44:00.634000 audit[3141]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3141 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.634000 audit[3141]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffee40cdf0 a2=0 a3=1 items=0 ppid=2990 pid=3141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.634000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:44:00.636000 audit[3142]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3142 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.636000 audit[3142]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7bf84e0 a2=0 a3=1 items=0 ppid=2990 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.636000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 23:44:00.640000 audit[3144]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.640000 audit[3144]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffe68a22a0 a2=0 a3=1 items=0 ppid=2990 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.640000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 23:44:00.641000 audit[3145]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.641000 audit[3145]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3222fc0 a2=0 a3=1 items=0 ppid=2990 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.641000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 23:44:00.646000 audit[3147]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3147 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.646000 audit[3147]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffa4afb30 a2=0 a3=1 items=0 ppid=2990 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.646000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 23:44:00.655000 audit[3150]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3150 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.655000 audit[3150]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe3d94470 a2=0 a3=1 items=0 ppid=2990 pid=3150 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.655000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 23:44:00.659000 audit[3152]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 23:44:00.659000 audit[3152]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffff6db150 a2=0 a3=1 items=0 ppid=2990 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.659000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:00.660000 audit[3152]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3152 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 23:44:00.660000 audit[3152]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffff6db150 a2=0 a3=1 items=0 ppid=2990 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.660000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:02.858952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3765976631.mount: Deactivated successfully. Jan 14 23:44:03.320482 containerd[1618]: time="2026-01-14T23:44:03.320370682Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:03.323328 containerd[1618]: time="2026-01-14T23:44:03.323192822Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 14 23:44:03.323727 containerd[1618]: time="2026-01-14T23:44:03.323684442Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:03.327669 containerd[1618]: time="2026-01-14T23:44:03.327617156Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:03.328984 containerd[1618]: time="2026-01-14T23:44:03.328551149Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.254858279s" Jan 14 23:44:03.328984 containerd[1618]: time="2026-01-14T23:44:03.328594354Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 14 23:44:03.333632 containerd[1618]: time="2026-01-14T23:44:03.333588557Z" level=info msg="CreateContainer within sandbox \"09b61bdb354e72be7ef867b6b29c4aba94ba63388f60b94be7d48d4d02ac0996\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 23:44:03.347512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2094917920.mount: Deactivated 
successfully. Jan 14 23:44:03.350897 containerd[1618]: time="2026-01-14T23:44:03.350846000Z" level=info msg="Container a3f9cf35734cff7af169edc33bccb1f6366f40358e329d3b5126f474c8898789: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:44:03.360365 containerd[1618]: time="2026-01-14T23:44:03.360302262Z" level=info msg="CreateContainer within sandbox \"09b61bdb354e72be7ef867b6b29c4aba94ba63388f60b94be7d48d4d02ac0996\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a3f9cf35734cff7af169edc33bccb1f6366f40358e329d3b5126f474c8898789\"" Jan 14 23:44:03.363124 containerd[1618]: time="2026-01-14T23:44:03.362416397Z" level=info msg="StartContainer for \"a3f9cf35734cff7af169edc33bccb1f6366f40358e329d3b5126f474c8898789\"" Jan 14 23:44:03.363801 containerd[1618]: time="2026-01-14T23:44:03.363770280Z" level=info msg="connecting to shim a3f9cf35734cff7af169edc33bccb1f6366f40358e329d3b5126f474c8898789" address="unix:///run/containerd/s/5b43a7691ca9170cb4cbef392dfb23cb3c990333593b22a7da13cf42eee60baa" protocol=ttrpc version=3 Jan 14 23:44:03.386797 systemd[1]: Started cri-containerd-a3f9cf35734cff7af169edc33bccb1f6366f40358e329d3b5126f474c8898789.scope - libcontainer container a3f9cf35734cff7af169edc33bccb1f6366f40358e329d3b5126f474c8898789. Jan 14 23:44:03.400000 audit: BPF prog-id=146 op=LOAD Jan 14 23:44:03.401000 audit: BPF prog-id=147 op=LOAD Jan 14 23:44:03.401000 audit[3162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2955 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:03.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133663963663335373334636666376166313639656463333362636362 Jan 14 23:44:03.401000 audit: BPF prog-id=147 op=UNLOAD Jan 14 23:44:03.401000 audit[3162]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2955 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:03.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133663963663335373334636666376166313639656463333362636362 Jan 14 23:44:03.401000 audit: BPF prog-id=148 op=LOAD Jan 14 23:44:03.401000 audit[3162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2955 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:03.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133663963663335373334636666376166313639656463333362636362 Jan 14 23:44:03.401000 audit: BPF prog-id=149 op=LOAD Jan 14 23:44:03.401000 audit[3162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 
items=0 ppid=2955 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:03.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133663963663335373334636666376166313639656463333362636362 Jan 14 23:44:03.401000 audit: BPF prog-id=149 op=UNLOAD Jan 14 23:44:03.401000 audit[3162]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2955 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:03.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133663963663335373334636666376166313639656463333362636362 Jan 14 23:44:03.401000 audit: BPF prog-id=148 op=UNLOAD Jan 14 23:44:03.401000 audit[3162]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2955 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:03.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133663963663335373334636666376166313639656463333362636362 Jan 14 23:44:03.401000 audit: BPF prog-id=150 op=LOAD Jan 14 23:44:03.401000 audit[3162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2955 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:03.401000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133663963663335373334636666376166313639656463333362636362 Jan 14 23:44:03.422519 containerd[1618]: time="2026-01-14T23:44:03.422444243Z" level=info msg="StartContainer for \"a3f9cf35734cff7af169edc33bccb1f6366f40358e329d3b5126f474c8898789\" returns successfully" Jan 14 23:44:03.831602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2642635202.mount: Deactivated successfully. 
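[Editor's aside, not part of the journal above: the audit PROCTITLE fields in these records hex-encode the process argv, with NUL bytes separating the individual arguments. A minimal Python sketch that recovers the readable command line; the sample value is copied verbatim from the iptables-restore records in this block:]

    # Decode an audit PROCTITLE field: hex-encoded argv, arguments separated by NUL bytes.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        # argv elements are joined with '\0'; render them space-separated for display
        return " ".join(part.decode("utf-8", errors="replace")
                        for part in raw.split(b"\x00") if part)

    # Value taken from the iptables-restore audit records in this journal.
    sample = "69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273"
    print(decode_proctitle(sample))  # prints: iptables-restore -w 5 --noflush --counters

[Decoding the longer proctitle values the same way yields the individual iptables/ip6tables invocations recorded above, for example the chain creations for KUBE-SERVICES, KUBE-NODEPORTS and KUBE-POSTROUTING, issued by the child processes of ppid=2990 (presumably kube-proxy).]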
Jan 14 23:44:04.261542 kubelet[2853]: I0114 23:44:04.261369 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-9hmtp" podStartSLOduration=2.004256537 podStartE2EDuration="5.261345073s" podCreationTimestamp="2026-01-14 23:43:59 +0000 UTC" firstStartedPulling="2026-01-14 23:44:00.072341919 +0000 UTC m=+7.069869986" lastFinishedPulling="2026-01-14 23:44:03.329430415 +0000 UTC m=+10.326958522" observedRunningTime="2026-01-14 23:44:04.259604753 +0000 UTC m=+11.257132980" watchObservedRunningTime="2026-01-14 23:44:04.261345073 +0000 UTC m=+11.258873180" Jan 14 23:44:09.977000 audit[1889]: USER_END pid=1889 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:44:09.978695 sudo[1889]: pam_unix(sudo:session): session closed for user root Jan 14 23:44:09.981198 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 14 23:44:09.981271 kernel: audit: type=1106 audit(1768434249.977:524): pid=1889 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:44:09.977000 audit[1889]: CRED_DISP pid=1889 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:44:09.985131 kernel: audit: type=1104 audit(1768434249.977:525): pid=1889 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:44:10.077744 sshd[1888]: Connection closed by 68.220.241.50 port 36216 Jan 14 23:44:10.079363 sshd-session[1885]: pam_unix(sshd:session): session closed for user core Jan 14 23:44:10.081000 audit[1885]: USER_END pid=1885 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:44:10.082000 audit[1885]: CRED_DISP pid=1885 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:44:10.088537 kernel: audit: type=1106 audit(1768434250.081:526): pid=1885 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:44:10.088610 kernel: audit: type=1104 audit(1768434250.082:527): pid=1885 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:44:10.088178 systemd-logind[1597]: Session 7 logged out. Waiting for processes to exit. 
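[Editor's aside, not part of the journal: the kernel-echoed audit records above carry raw epoch timestamps such as audit(1768434249.977:524). Converting the seconds portion to UTC, as in this small Python sketch, shows it lines up with the journal's own wall-clock prefix:]

    from datetime import datetime, timezone

    # Seconds portion of the audit(1768434249.977:524) record above.
    print(datetime.fromtimestamp(1768434249, tz=timezone.utc))
    # prints: 2026-01-14 23:44:09+00:00; the .977 fraction matches the
    # "Jan 14 23:44:09.977" sub-second prefix of the corresponding journal line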
Jan 14 23:44:10.090472 systemd[1]: sshd@6-46.224.65.210:22-68.220.241.50:36216.service: Deactivated successfully. Jan 14 23:44:10.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-46.224.65.210:22-68.220.241.50:36216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:10.096041 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 23:44:10.096439 kernel: audit: type=1131 audit(1768434250.091:528): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-46.224.65.210:22-68.220.241.50:36216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:10.100150 systemd[1]: session-7.scope: Consumed 7.360s CPU time, 219.5M memory peak. Jan 14 23:44:10.104663 systemd-logind[1597]: Removed session 7. Jan 14 23:44:11.611000 audit[3246]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3246 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:11.611000 audit[3246]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffee6edb0 a2=0 a3=1 items=0 ppid=2990 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:11.617197 kernel: audit: type=1325 audit(1768434251.611:529): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3246 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:11.617300 kernel: audit: type=1300 audit(1768434251.611:529): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffee6edb0 a2=0 a3=1 items=0 ppid=2990 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:11.611000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:11.623496 kernel: audit: type=1327 audit(1768434251.611:529): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:11.623000 audit[3246]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3246 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:11.626416 kernel: audit: type=1325 audit(1768434251.623:530): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3246 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:11.623000 audit[3246]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffee6edb0 a2=0 a3=1 items=0 ppid=2990 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:11.623000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:11.632479 kernel: audit: type=1300 audit(1768434251.623:530): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffee6edb0 a2=0 a3=1 items=0 ppid=2990 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 23:44:12.649000 audit[3248]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3248 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:12.649000 audit[3248]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd6d83450 a2=0 a3=1 items=0 ppid=2990 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:12.649000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:12.655000 audit[3248]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3248 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:12.655000 audit[3248]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd6d83450 a2=0 a3=1 items=0 ppid=2990 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:12.655000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:15.673000 audit[3251]: NETFILTER_CFG table=filter:109 family=2 entries=16 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:15.675596 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 23:44:15.675754 kernel: audit: type=1325 audit(1768434255.673:533): table=filter:109 family=2 entries=16 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:15.673000 audit[3251]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffde289310 a2=0 a3=1 items=0 ppid=2990 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:15.679145 kernel: audit: type=1300 audit(1768434255.673:533): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffde289310 a2=0 a3=1 items=0 ppid=2990 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:15.673000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:15.682584 kernel: audit: type=1327 audit(1768434255.673:533): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:15.681000 audit[3251]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:15.683999 kernel: audit: type=1325 audit(1768434255.681:534): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:15.681000 audit[3251]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffde289310 a2=0 a3=1 items=0 ppid=2990 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
23:44:15.681000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:15.699426 kernel: audit: type=1300 audit(1768434255.681:534): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffde289310 a2=0 a3=1 items=0 ppid=2990 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:15.699556 kernel: audit: type=1327 audit(1768434255.681:534): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:16.808000 audit[3253]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:16.808000 audit[3253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc60c1d80 a2=0 a3=1 items=0 ppid=2990 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:16.812585 kernel: audit: type=1325 audit(1768434256.808:535): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:16.812664 kernel: audit: type=1300 audit(1768434256.808:535): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc60c1d80 a2=0 a3=1 items=0 ppid=2990 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:16.808000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:16.814615 kernel: audit: type=1327 audit(1768434256.808:535): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:16.815000 audit[3253]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:16.815000 audit[3253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc60c1d80 a2=0 a3=1 items=0 ppid=2990 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:16.815000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:16.818450 kernel: audit: type=1325 audit(1768434256.815:536): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:19.439000 audit[3255]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3255 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:19.439000 audit[3255]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffff9f86d0 a2=0 a3=1 items=0 ppid=2990 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:19.439000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:19.443000 audit[3255]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3255 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:19.443000 audit[3255]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffff9f86d0 a2=0 a3=1 items=0 ppid=2990 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:19.443000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:19.479857 systemd[1]: Created slice kubepods-besteffort-pod6513792b_a7c8_4e59_9a6c_7805d0acef34.slice - libcontainer container kubepods-besteffort-pod6513792b_a7c8_4e59_9a6c_7805d0acef34.slice. Jan 14 23:44:19.486759 kubelet[2853]: E0114 23:44:19.485127 2853 reflector.go:205] "Failed to watch" err="failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ci-4515-1-0-n-ec6f9a8ce8\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4515-1-0-n-ec6f9a8ce8' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"typha-certs\"" type="*v1.Secret" Jan 14 23:44:19.487375 kubelet[2853]: E0114 23:44:19.485459 2853 status_manager.go:1018] "Failed to get status for pod" err="pods \"calico-typha-86448cd66-mtf2t\" is forbidden: User \"system:node:ci-4515-1-0-n-ec6f9a8ce8\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4515-1-0-n-ec6f9a8ce8' and this object" podUID="6513792b-a7c8-4e59-9a6c-7805d0acef34" pod="calico-system/calico-typha-86448cd66-mtf2t" Jan 14 23:44:19.595835 kubelet[2853]: I0114 23:44:19.595789 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lldlf\" (UniqueName: \"kubernetes.io/projected/6513792b-a7c8-4e59-9a6c-7805d0acef34-kube-api-access-lldlf\") pod \"calico-typha-86448cd66-mtf2t\" (UID: \"6513792b-a7c8-4e59-9a6c-7805d0acef34\") " pod="calico-system/calico-typha-86448cd66-mtf2t" Jan 14 23:44:19.596187 kubelet[2853]: I0114 23:44:19.596156 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6513792b-a7c8-4e59-9a6c-7805d0acef34-tigera-ca-bundle\") pod \"calico-typha-86448cd66-mtf2t\" (UID: \"6513792b-a7c8-4e59-9a6c-7805d0acef34\") " pod="calico-system/calico-typha-86448cd66-mtf2t" Jan 14 23:44:19.596349 kubelet[2853]: I0114 23:44:19.596327 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6513792b-a7c8-4e59-9a6c-7805d0acef34-typha-certs\") pod \"calico-typha-86448cd66-mtf2t\" (UID: \"6513792b-a7c8-4e59-9a6c-7805d0acef34\") " pod="calico-system/calico-typha-86448cd66-mtf2t" Jan 14 23:44:19.651467 systemd[1]: Created slice kubepods-besteffort-pod636768bc_0f1a_4fcf_bb35_51ee62af7700.slice - libcontainer container kubepods-besteffort-pod636768bc_0f1a_4fcf_bb35_51ee62af7700.slice. 
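[Editor's aside, not part of the journal: the two "Created slice" entries above embed the pod UIDs that also appear in the surrounding kubelet volume records; dashes in the UID are replaced with underscores and the result is prefixed with the besteffort QoS parent. A minimal Python sketch of the naming as seen in this log (an illustration of the pattern, not the authoritative kubelet implementation):]

    # Reconstruct the systemd slice name used here for a besteffort pod:
    # dashes in the pod UID become underscores, prefixed with the QoS-class parent.
    def besteffort_pod_slice(pod_uid: str) -> str:
        return "kubepods-besteffort-pod{}.slice".format(pod_uid.replace("-", "_"))

    # UIDs taken from the calico-typha and calico-node entries in this journal.
    print(besteffort_pod_slice("6513792b-a7c8-4e59-9a6c-7805d0acef34"))
    # -> kubepods-besteffort-pod6513792b_a7c8_4e59_9a6c_7805d0acef34.slice
    print(besteffort_pod_slice("636768bc-0f1a-4fcf-bb35-51ee62af7700"))
    # -> kubepods-besteffort-pod636768bc_0f1a_4fcf_bb35_51ee62af7700.slice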
Jan 14 23:44:19.697641 kubelet[2853]: I0114 23:44:19.697479 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/636768bc-0f1a-4fcf-bb35-51ee62af7700-cni-log-dir\") pod \"calico-node-9ndds\" (UID: \"636768bc-0f1a-4fcf-bb35-51ee62af7700\") " pod="calico-system/calico-node-9ndds" Jan 14 23:44:19.697641 kubelet[2853]: I0114 23:44:19.697538 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/636768bc-0f1a-4fcf-bb35-51ee62af7700-policysync\") pod \"calico-node-9ndds\" (UID: \"636768bc-0f1a-4fcf-bb35-51ee62af7700\") " pod="calico-system/calico-node-9ndds" Jan 14 23:44:19.697641 kubelet[2853]: I0114 23:44:19.697576 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/636768bc-0f1a-4fcf-bb35-51ee62af7700-flexvol-driver-host\") pod \"calico-node-9ndds\" (UID: \"636768bc-0f1a-4fcf-bb35-51ee62af7700\") " pod="calico-system/calico-node-9ndds" Jan 14 23:44:19.697641 kubelet[2853]: I0114 23:44:19.697601 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/636768bc-0f1a-4fcf-bb35-51ee62af7700-var-lib-calico\") pod \"calico-node-9ndds\" (UID: \"636768bc-0f1a-4fcf-bb35-51ee62af7700\") " pod="calico-system/calico-node-9ndds" Jan 14 23:44:19.697641 kubelet[2853]: I0114 23:44:19.697628 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/636768bc-0f1a-4fcf-bb35-51ee62af7700-var-run-calico\") pod \"calico-node-9ndds\" (UID: \"636768bc-0f1a-4fcf-bb35-51ee62af7700\") " pod="calico-system/calico-node-9ndds" Jan 14 23:44:19.697857 kubelet[2853]: I0114 23:44:19.697652 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/636768bc-0f1a-4fcf-bb35-51ee62af7700-cni-bin-dir\") pod \"calico-node-9ndds\" (UID: \"636768bc-0f1a-4fcf-bb35-51ee62af7700\") " pod="calico-system/calico-node-9ndds" Jan 14 23:44:19.697857 kubelet[2853]: I0114 23:44:19.697674 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/636768bc-0f1a-4fcf-bb35-51ee62af7700-tigera-ca-bundle\") pod \"calico-node-9ndds\" (UID: \"636768bc-0f1a-4fcf-bb35-51ee62af7700\") " pod="calico-system/calico-node-9ndds" Jan 14 23:44:19.697857 kubelet[2853]: I0114 23:44:19.697695 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/636768bc-0f1a-4fcf-bb35-51ee62af7700-xtables-lock\") pod \"calico-node-9ndds\" (UID: \"636768bc-0f1a-4fcf-bb35-51ee62af7700\") " pod="calico-system/calico-node-9ndds" Jan 14 23:44:19.697857 kubelet[2853]: I0114 23:44:19.697721 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9fb5\" (UniqueName: \"kubernetes.io/projected/636768bc-0f1a-4fcf-bb35-51ee62af7700-kube-api-access-d9fb5\") pod \"calico-node-9ndds\" (UID: \"636768bc-0f1a-4fcf-bb35-51ee62af7700\") " pod="calico-system/calico-node-9ndds" Jan 14 23:44:19.697857 kubelet[2853]: I0114 23:44:19.697742 2853 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/636768bc-0f1a-4fcf-bb35-51ee62af7700-cni-net-dir\") pod \"calico-node-9ndds\" (UID: \"636768bc-0f1a-4fcf-bb35-51ee62af7700\") " pod="calico-system/calico-node-9ndds" Jan 14 23:44:19.697957 kubelet[2853]: I0114 23:44:19.697762 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/636768bc-0f1a-4fcf-bb35-51ee62af7700-lib-modules\") pod \"calico-node-9ndds\" (UID: \"636768bc-0f1a-4fcf-bb35-51ee62af7700\") " pod="calico-system/calico-node-9ndds" Jan 14 23:44:19.697957 kubelet[2853]: I0114 23:44:19.697799 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/636768bc-0f1a-4fcf-bb35-51ee62af7700-node-certs\") pod \"calico-node-9ndds\" (UID: \"636768bc-0f1a-4fcf-bb35-51ee62af7700\") " pod="calico-system/calico-node-9ndds" Jan 14 23:44:19.812429 kubelet[2853]: E0114 23:44:19.812159 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.812429 kubelet[2853]: W0114 23:44:19.812191 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.812429 kubelet[2853]: E0114 23:44:19.812218 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.815809 kubelet[2853]: E0114 23:44:19.812673 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.815809 kubelet[2853]: W0114 23:44:19.812689 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.815809 kubelet[2853]: E0114 23:44:19.812707 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.818128 kubelet[2853]: E0114 23:44:19.816213 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.818419 kubelet[2853]: W0114 23:44:19.818364 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.820430 kubelet[2853]: E0114 23:44:19.818926 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:19.821584 kubelet[2853]: E0114 23:44:19.821497 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.821810 kubelet[2853]: W0114 23:44:19.821789 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.821984 kubelet[2853]: E0114 23:44:19.821886 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.822591 kubelet[2853]: E0114 23:44:19.822448 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.822591 kubelet[2853]: W0114 23:44:19.822551 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.822591 kubelet[2853]: E0114 23:44:19.822574 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.823490 kubelet[2853]: E0114 23:44:19.823423 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.823490 kubelet[2853]: W0114 23:44:19.823441 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.823490 kubelet[2853]: E0114 23:44:19.823457 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.861386 kubelet[2853]: E0114 23:44:19.861049 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:44:19.872844 kubelet[2853]: E0114 23:44:19.872778 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.872844 kubelet[2853]: W0114 23:44:19.872804 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.874104 kubelet[2853]: E0114 23:44:19.872825 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:19.874457 kubelet[2853]: E0114 23:44:19.874433 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.874541 kubelet[2853]: W0114 23:44:19.874454 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.874541 kubelet[2853]: E0114 23:44:19.874506 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.874696 kubelet[2853]: E0114 23:44:19.874681 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.874696 kubelet[2853]: W0114 23:44:19.874693 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.874843 kubelet[2853]: E0114 23:44:19.874703 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.874938 kubelet[2853]: E0114 23:44:19.874922 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.874938 kubelet[2853]: W0114 23:44:19.874935 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.875026 kubelet[2853]: E0114 23:44:19.874945 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.875135 kubelet[2853]: E0114 23:44:19.875104 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.875135 kubelet[2853]: W0114 23:44:19.875117 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.875135 kubelet[2853]: E0114 23:44:19.875127 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.875342 kubelet[2853]: E0114 23:44:19.875328 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.875342 kubelet[2853]: W0114 23:44:19.875341 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.875448 kubelet[2853]: E0114 23:44:19.875350 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:19.875549 kubelet[2853]: E0114 23:44:19.875530 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.875549 kubelet[2853]: W0114 23:44:19.875544 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.875549 kubelet[2853]: E0114 23:44:19.875554 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.875712 kubelet[2853]: E0114 23:44:19.875691 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.875712 kubelet[2853]: W0114 23:44:19.875699 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.875712 kubelet[2853]: E0114 23:44:19.875706 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.875840 kubelet[2853]: E0114 23:44:19.875833 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.875840 kubelet[2853]: W0114 23:44:19.875840 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.875840 kubelet[2853]: E0114 23:44:19.875848 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.876049 kubelet[2853]: E0114 23:44:19.876034 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.876049 kubelet[2853]: W0114 23:44:19.876046 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.876225 kubelet[2853]: E0114 23:44:19.876058 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.876325 kubelet[2853]: E0114 23:44:19.876308 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.876325 kubelet[2853]: W0114 23:44:19.876323 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.876417 kubelet[2853]: E0114 23:44:19.876333 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:19.876514 kubelet[2853]: E0114 23:44:19.876501 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.876514 kubelet[2853]: W0114 23:44:19.876511 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.876601 kubelet[2853]: E0114 23:44:19.876520 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.876672 kubelet[2853]: E0114 23:44:19.876661 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.876672 kubelet[2853]: W0114 23:44:19.876670 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.876742 kubelet[2853]: E0114 23:44:19.876678 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.876795 kubelet[2853]: E0114 23:44:19.876784 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.876795 kubelet[2853]: W0114 23:44:19.876793 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.876876 kubelet[2853]: E0114 23:44:19.876800 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.876944 kubelet[2853]: E0114 23:44:19.876931 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.876944 kubelet[2853]: W0114 23:44:19.876941 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.877437 kubelet[2853]: E0114 23:44:19.876948 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.877437 kubelet[2853]: E0114 23:44:19.877052 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.877437 kubelet[2853]: W0114 23:44:19.877058 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.877437 kubelet[2853]: E0114 23:44:19.877068 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:19.877437 kubelet[2853]: E0114 23:44:19.877225 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.877437 kubelet[2853]: W0114 23:44:19.877232 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.877437 kubelet[2853]: E0114 23:44:19.877239 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.877437 kubelet[2853]: E0114 23:44:19.877387 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.877437 kubelet[2853]: W0114 23:44:19.877405 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.877817 kubelet[2853]: E0114 23:44:19.877448 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.877817 kubelet[2853]: E0114 23:44:19.877574 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.877817 kubelet[2853]: W0114 23:44:19.877581 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.877817 kubelet[2853]: E0114 23:44:19.877588 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.877956 kubelet[2853]: E0114 23:44:19.877945 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.877956 kubelet[2853]: W0114 23:44:19.877955 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.878016 kubelet[2853]: E0114 23:44:19.877964 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.899627 kubelet[2853]: E0114 23:44:19.899587 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.899960 kubelet[2853]: W0114 23:44:19.899811 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.899960 kubelet[2853]: E0114 23:44:19.899847 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:19.899960 kubelet[2853]: I0114 23:44:19.899911 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/35e31491-f658-475f-aa1a-411d37af2884-kubelet-dir\") pod \"csi-node-driver-7s4g4\" (UID: \"35e31491-f658-475f-aa1a-411d37af2884\") " pod="calico-system/csi-node-driver-7s4g4" Jan 14 23:44:19.900609 kubelet[2853]: E0114 23:44:19.900585 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.900779 kubelet[2853]: W0114 23:44:19.900723 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.900779 kubelet[2853]: E0114 23:44:19.900752 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.900921 kubelet[2853]: I0114 23:44:19.900903 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/35e31491-f658-475f-aa1a-411d37af2884-registration-dir\") pod \"csi-node-driver-7s4g4\" (UID: \"35e31491-f658-475f-aa1a-411d37af2884\") " pod="calico-system/csi-node-driver-7s4g4" Jan 14 23:44:19.901378 kubelet[2853]: E0114 23:44:19.901349 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.901550 kubelet[2853]: W0114 23:44:19.901509 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.901550 kubelet[2853]: E0114 23:44:19.901535 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.901969 kubelet[2853]: E0114 23:44:19.901953 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.902084 kubelet[2853]: W0114 23:44:19.902049 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.902084 kubelet[2853]: E0114 23:44:19.902069 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.902546 kubelet[2853]: E0114 23:44:19.902529 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.902718 kubelet[2853]: W0114 23:44:19.902645 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.902718 kubelet[2853]: E0114 23:44:19.902670 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:19.903087 kubelet[2853]: E0114 23:44:19.903033 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.903087 kubelet[2853]: W0114 23:44:19.903056 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.903087 kubelet[2853]: E0114 23:44:19.903070 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.903658 kubelet[2853]: E0114 23:44:19.903586 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.903658 kubelet[2853]: W0114 23:44:19.903619 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.903658 kubelet[2853]: E0114 23:44:19.903634 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.904030 kubelet[2853]: I0114 23:44:19.903871 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/35e31491-f658-475f-aa1a-411d37af2884-socket-dir\") pod \"csi-node-driver-7s4g4\" (UID: \"35e31491-f658-475f-aa1a-411d37af2884\") " pod="calico-system/csi-node-driver-7s4g4" Jan 14 23:44:19.904273 kubelet[2853]: E0114 23:44:19.904208 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.904273 kubelet[2853]: W0114 23:44:19.904227 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.904273 kubelet[2853]: E0114 23:44:19.904241 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.904506 kubelet[2853]: I0114 23:44:19.904461 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/35e31491-f658-475f-aa1a-411d37af2884-varrun\") pod \"csi-node-driver-7s4g4\" (UID: \"35e31491-f658-475f-aa1a-411d37af2884\") " pod="calico-system/csi-node-driver-7s4g4" Jan 14 23:44:19.904659 kubelet[2853]: E0114 23:44:19.904626 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.904775 kubelet[2853]: W0114 23:44:19.904658 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.904775 kubelet[2853]: E0114 23:44:19.904685 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:19.904932 kubelet[2853]: E0114 23:44:19.904916 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.904932 kubelet[2853]: W0114 23:44:19.904933 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.905072 kubelet[2853]: E0114 23:44:19.904946 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.905225 kubelet[2853]: E0114 23:44:19.905209 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.905275 kubelet[2853]: W0114 23:44:19.905227 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.905275 kubelet[2853]: E0114 23:44:19.905243 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.905275 kubelet[2853]: I0114 23:44:19.905277 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlswp\" (UniqueName: \"kubernetes.io/projected/35e31491-f658-475f-aa1a-411d37af2884-kube-api-access-vlswp\") pod \"csi-node-driver-7s4g4\" (UID: \"35e31491-f658-475f-aa1a-411d37af2884\") " pod="calico-system/csi-node-driver-7s4g4" Jan 14 23:44:19.905737 kubelet[2853]: E0114 23:44:19.905720 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.905838 kubelet[2853]: W0114 23:44:19.905822 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.905977 kubelet[2853]: E0114 23:44:19.905911 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.906314 kubelet[2853]: E0114 23:44:19.906286 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.906492 kubelet[2853]: W0114 23:44:19.906450 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.906492 kubelet[2853]: E0114 23:44:19.906476 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:19.907053 kubelet[2853]: E0114 23:44:19.906955 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.907053 kubelet[2853]: W0114 23:44:19.906987 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.907053 kubelet[2853]: E0114 23:44:19.907003 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.907466 kubelet[2853]: E0114 23:44:19.907420 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:19.907466 kubelet[2853]: W0114 23:44:19.907438 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:19.907605 kubelet[2853]: E0114 23:44:19.907453 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:19.958608 containerd[1618]: time="2026-01-14T23:44:19.958336832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9ndds,Uid:636768bc-0f1a-4fcf-bb35-51ee62af7700,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:20.001528 containerd[1618]: time="2026-01-14T23:44:20.000990143Z" level=info msg="connecting to shim 9421edb3f1366ab81e15924447e50931c85983902c49be7efdf74bcd120ae987" address="unix:///run/containerd/s/0f15a453895f99770481fffd81536da61dbb26e3f8bf83eb8325eeec11deb728" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:20.006319 kubelet[2853]: E0114 23:44:20.006263 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.006319 kubelet[2853]: W0114 23:44:20.006288 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.006717 kubelet[2853]: E0114 23:44:20.006433 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.007520 kubelet[2853]: E0114 23:44:20.007478 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.007520 kubelet[2853]: W0114 23:44:20.007509 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.007673 kubelet[2853]: E0114 23:44:20.007538 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:20.008588 kubelet[2853]: E0114 23:44:20.008538 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.008779 kubelet[2853]: W0114 23:44:20.008565 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.008779 kubelet[2853]: E0114 23:44:20.008642 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.009853 kubelet[2853]: E0114 23:44:20.009540 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.009853 kubelet[2853]: W0114 23:44:20.009572 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.009853 kubelet[2853]: E0114 23:44:20.009594 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.009853 kubelet[2853]: E0114 23:44:20.009847 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.009853 kubelet[2853]: W0114 23:44:20.009857 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.009853 kubelet[2853]: E0114 23:44:20.009866 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.011194 kubelet[2853]: E0114 23:44:20.010901 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.011194 kubelet[2853]: W0114 23:44:20.010925 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.011194 kubelet[2853]: E0114 23:44:20.010946 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.011194 kubelet[2853]: E0114 23:44:20.011166 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.011194 kubelet[2853]: W0114 23:44:20.011175 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.011194 kubelet[2853]: E0114 23:44:20.011185 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:20.012022 kubelet[2853]: E0114 23:44:20.011346 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.012022 kubelet[2853]: W0114 23:44:20.011354 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.012022 kubelet[2853]: E0114 23:44:20.011362 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.012022 kubelet[2853]: E0114 23:44:20.011500 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.012022 kubelet[2853]: W0114 23:44:20.011511 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.012022 kubelet[2853]: E0114 23:44:20.011520 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.012022 kubelet[2853]: E0114 23:44:20.011933 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.012022 kubelet[2853]: W0114 23:44:20.011945 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.012022 kubelet[2853]: E0114 23:44:20.011956 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.013705 kubelet[2853]: E0114 23:44:20.013665 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.013705 kubelet[2853]: W0114 23:44:20.013693 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.013705 kubelet[2853]: E0114 23:44:20.013709 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.013923 kubelet[2853]: E0114 23:44:20.013884 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.013923 kubelet[2853]: W0114 23:44:20.013896 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.013923 kubelet[2853]: E0114 23:44:20.013904 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:20.014025 kubelet[2853]: E0114 23:44:20.014014 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.014025 kubelet[2853]: W0114 23:44:20.014021 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.014089 kubelet[2853]: E0114 23:44:20.014028 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.014335 kubelet[2853]: E0114 23:44:20.014278 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.014335 kubelet[2853]: W0114 23:44:20.014296 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.014335 kubelet[2853]: E0114 23:44:20.014312 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.016433 kubelet[2853]: E0114 23:44:20.015608 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.016433 kubelet[2853]: W0114 23:44:20.015633 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.016433 kubelet[2853]: E0114 23:44:20.015649 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.016433 kubelet[2853]: E0114 23:44:20.015869 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.016433 kubelet[2853]: W0114 23:44:20.015876 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.016433 kubelet[2853]: E0114 23:44:20.015885 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.016433 kubelet[2853]: E0114 23:44:20.016012 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.016433 kubelet[2853]: W0114 23:44:20.016017 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.016433 kubelet[2853]: E0114 23:44:20.016024 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:20.016433 kubelet[2853]: E0114 23:44:20.016143 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.016731 kubelet[2853]: W0114 23:44:20.016151 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.016731 kubelet[2853]: E0114 23:44:20.016158 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.016731 kubelet[2853]: E0114 23:44:20.016262 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.016731 kubelet[2853]: W0114 23:44:20.016269 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.016731 kubelet[2853]: E0114 23:44:20.016277 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.016881 kubelet[2853]: E0114 23:44:20.016850 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.016915 kubelet[2853]: W0114 23:44:20.016880 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.016915 kubelet[2853]: E0114 23:44:20.016894 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.019578 kubelet[2853]: E0114 23:44:20.019546 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.019578 kubelet[2853]: W0114 23:44:20.019570 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.019802 kubelet[2853]: E0114 23:44:20.019590 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.020676 kubelet[2853]: E0114 23:44:20.020596 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.020676 kubelet[2853]: W0114 23:44:20.020617 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.020676 kubelet[2853]: E0114 23:44:20.020636 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:20.022509 kubelet[2853]: E0114 23:44:20.022451 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.022509 kubelet[2853]: W0114 23:44:20.022480 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.022509 kubelet[2853]: E0114 23:44:20.022498 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.022509 kubelet[2853]: E0114 23:44:20.022734 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.022509 kubelet[2853]: W0114 23:44:20.022744 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.022509 kubelet[2853]: E0114 23:44:20.022753 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.023783 kubelet[2853]: E0114 23:44:20.023035 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.023783 kubelet[2853]: W0114 23:44:20.023044 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.023783 kubelet[2853]: E0114 23:44:20.023055 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.042892 kubelet[2853]: E0114 23:44:20.042844 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.042892 kubelet[2853]: W0114 23:44:20.042875 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.042892 kubelet[2853]: E0114 23:44:20.042897 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.060212 systemd[1]: Started cri-containerd-9421edb3f1366ab81e15924447e50931c85983902c49be7efdf74bcd120ae987.scope - libcontainer container 9421edb3f1366ab81e15924447e50931c85983902c49be7efdf74bcd120ae987. 
Jan 14 23:44:20.074000 audit: BPF prog-id=151 op=LOAD Jan 14 23:44:20.075000 audit: BPF prog-id=152 op=LOAD Jan 14 23:44:20.075000 audit[3352]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3320 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323165646233663133363661623831653135393234343437653530 Jan 14 23:44:20.075000 audit: BPF prog-id=152 op=UNLOAD Jan 14 23:44:20.075000 audit[3352]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3320 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323165646233663133363661623831653135393234343437653530 Jan 14 23:44:20.075000 audit: BPF prog-id=153 op=LOAD Jan 14 23:44:20.075000 audit[3352]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3320 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323165646233663133363661623831653135393234343437653530 Jan 14 23:44:20.075000 audit: BPF prog-id=154 op=LOAD Jan 14 23:44:20.075000 audit[3352]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3320 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323165646233663133363661623831653135393234343437653530 Jan 14 23:44:20.075000 audit: BPF prog-id=154 op=UNLOAD Jan 14 23:44:20.075000 audit[3352]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3320 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323165646233663133363661623831653135393234343437653530 Jan 14 23:44:20.075000 audit: BPF prog-id=153 op=UNLOAD Jan 14 23:44:20.075000 audit[3352]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3320 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323165646233663133363661623831653135393234343437653530 Jan 14 23:44:20.075000 audit: BPF prog-id=155 op=LOAD Jan 14 23:44:20.075000 audit[3352]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3320 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.075000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934323165646233663133363661623831653135393234343437653530 Jan 14 23:44:20.102475 containerd[1618]: time="2026-01-14T23:44:20.102181666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9ndds,Uid:636768bc-0f1a-4fcf-bb35-51ee62af7700,Namespace:calico-system,Attempt:0,} returns sandbox id \"9421edb3f1366ab81e15924447e50931c85983902c49be7efdf74bcd120ae987\"" Jan 14 23:44:20.105601 containerd[1618]: time="2026-01-14T23:44:20.105520413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 23:44:20.456000 audit[3385]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:20.456000 audit[3385]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffebc530b0 a2=0 a3=1 items=0 ppid=2990 pid=3385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.456000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:20.458000 audit[3385]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:20.458000 audit[3385]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffebc530b0 a2=0 a3=1 items=0 ppid=2990 pid=3385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.458000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:20.682849 kubelet[2853]: E0114 23:44:20.682779 2853 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:20.682849 kubelet[2853]: W0114 23:44:20.682825 2853 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:20.682849 kubelet[2853]: 
E0114 23:44:20.682849 2853 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:20.688954 containerd[1618]: time="2026-01-14T23:44:20.688866388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86448cd66-mtf2t,Uid:6513792b-a7c8-4e59-9a6c-7805d0acef34,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:20.727532 containerd[1618]: time="2026-01-14T23:44:20.726678897Z" level=info msg="connecting to shim 55c4acaaec454dcef11dd98afe34b542f9abc1bed9322ca24ca7f2966a24dc3a" address="unix:///run/containerd/s/b3697db5a028da4b58d66228b3b3763967e72d8f7fec47b36907a042b689ac54" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:20.767841 systemd[1]: Started cri-containerd-55c4acaaec454dcef11dd98afe34b542f9abc1bed9322ca24ca7f2966a24dc3a.scope - libcontainer container 55c4acaaec454dcef11dd98afe34b542f9abc1bed9322ca24ca7f2966a24dc3a. Jan 14 23:44:20.782000 audit: BPF prog-id=156 op=LOAD Jan 14 23:44:20.783493 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 14 23:44:20.783592 kernel: audit: type=1334 audit(1768434260.782:549): prog-id=156 op=LOAD Jan 14 23:44:20.782000 audit: BPF prog-id=157 op=LOAD Jan 14 23:44:20.789786 kernel: audit: type=1334 audit(1768434260.782:550): prog-id=157 op=LOAD Jan 14 23:44:20.789995 kernel: audit: type=1300 audit(1768434260.782:550): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3397 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.790018 kernel: audit: type=1327 audit(1768434260.782:550): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633461636161656334353464636566313164643938616665333462 Jan 14 23:44:20.790033 kernel: audit: type=1334 audit(1768434260.782:551): prog-id=157 op=UNLOAD Jan 14 23:44:20.782000 audit[3408]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3397 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.782000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633461636161656334353464636566313164643938616665333462 Jan 14 23:44:20.782000 audit: BPF prog-id=157 op=UNLOAD Jan 14 23:44:20.791102 kernel: audit: type=1300 audit(1768434260.782:551): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3397 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.782000 audit[3408]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3397 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.793352 kernel: audit: type=1327 
audit(1768434260.782:551): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633461636161656334353464636566313164643938616665333462 Jan 14 23:44:20.782000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633461636161656334353464636566313164643938616665333462 Jan 14 23:44:20.796455 kernel: audit: type=1334 audit(1768434260.782:552): prog-id=158 op=LOAD Jan 14 23:44:20.796724 kernel: audit: type=1300 audit(1768434260.782:552): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3397 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.782000 audit: BPF prog-id=158 op=LOAD Jan 14 23:44:20.782000 audit[3408]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3397 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.782000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633461636161656334353464636566313164643938616665333462 Jan 14 23:44:20.800815 kernel: audit: type=1327 audit(1768434260.782:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633461636161656334353464636566313164643938616665333462 Jan 14 23:44:20.784000 audit: BPF prog-id=159 op=LOAD Jan 14 23:44:20.784000 audit[3408]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3397 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.784000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633461636161656334353464636566313164643938616665333462 Jan 14 23:44:20.790000 audit: BPF prog-id=159 op=UNLOAD Jan 14 23:44:20.790000 audit[3408]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3397 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633461636161656334353464636566313164643938616665333462 Jan 14 23:44:20.790000 audit: BPF prog-id=158 op=UNLOAD Jan 14 23:44:20.790000 audit[3408]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3397 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633461636161656334353464636566313164643938616665333462 Jan 14 23:44:20.790000 audit: BPF prog-id=160 op=LOAD Jan 14 23:44:20.790000 audit[3408]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3397 pid=3408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:20.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3535633461636161656334353464636566313164643938616665333462 Jan 14 23:44:20.831462 containerd[1618]: time="2026-01-14T23:44:20.831018357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-86448cd66-mtf2t,Uid:6513792b-a7c8-4e59-9a6c-7805d0acef34,Namespace:calico-system,Attempt:0,} returns sandbox id \"55c4acaaec454dcef11dd98afe34b542f9abc1bed9322ca24ca7f2966a24dc3a\"" Jan 14 23:44:21.117824 kubelet[2853]: E0114 23:44:21.116937 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:44:21.649533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1815516864.mount: Deactivated successfully. 
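Editor's note: the proctitle= fields in the audit records above are the audited process's command line, hex-encoded because the arguments are separated by NUL bytes. A minimal decoding sketch (the sample value is taken from the iptables-restore record for pid 3385 above; the helper name is ours):

    # Decode an audit PROCTITLE value: the command line is hex-encoded,
    # with NUL bytes separating the individual arguments.
    proctitle_hex = (
        "69707461626C65732D726573746F7265002D770035"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    )

    def decode_proctitle(hex_value: str) -> list[str]:
        raw = bytes.fromhex(hex_value)
        return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

    print(decode_proctitle(proctitle_hex))
    # ['iptables-restore', '-w', '5', '--noflush', '--counters']

The same decoding applies to the runc records, which start with runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container-id>… and appear to be cut short by the audit subsystem's proctitle length cap.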
Jan 14 23:44:21.784275 containerd[1618]: time="2026-01-14T23:44:21.784229649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:21.785603 containerd[1618]: time="2026-01-14T23:44:21.785537159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:21.786095 containerd[1618]: time="2026-01-14T23:44:21.786040026Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:21.793262 containerd[1618]: time="2026-01-14T23:44:21.792603819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:21.793262 containerd[1618]: time="2026-01-14T23:44:21.792995840Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.687432025s" Jan 14 23:44:21.793262 containerd[1618]: time="2026-01-14T23:44:21.793019641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 14 23:44:21.796838 containerd[1618]: time="2026-01-14T23:44:21.796437585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 23:44:21.803005 containerd[1618]: time="2026-01-14T23:44:21.802958975Z" level=info msg="CreateContainer within sandbox \"9421edb3f1366ab81e15924447e50931c85983902c49be7efdf74bcd120ae987\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 23:44:21.814907 containerd[1618]: time="2026-01-14T23:44:21.814854575Z" level=info msg="Container b7de7754d6db8740a1ae8e31eefc6c1a9a15e694db2888154f5aad1f6a9093ba: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:44:21.828516 containerd[1618]: time="2026-01-14T23:44:21.828439665Z" level=info msg="CreateContainer within sandbox \"9421edb3f1366ab81e15924447e50931c85983902c49be7efdf74bcd120ae987\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b7de7754d6db8740a1ae8e31eefc6c1a9a15e694db2888154f5aad1f6a9093ba\"" Jan 14 23:44:21.829656 containerd[1618]: time="2026-01-14T23:44:21.829624289Z" level=info msg="StartContainer for \"b7de7754d6db8740a1ae8e31eefc6c1a9a15e694db2888154f5aad1f6a9093ba\"" Jan 14 23:44:21.833010 containerd[1618]: time="2026-01-14T23:44:21.832973028Z" level=info msg="connecting to shim b7de7754d6db8740a1ae8e31eefc6c1a9a15e694db2888154f5aad1f6a9093ba" address="unix:///run/containerd/s/0f15a453895f99770481fffd81536da61dbb26e3f8bf83eb8325eeec11deb728" protocol=ttrpc version=3 Jan 14 23:44:21.860712 systemd[1]: Started cri-containerd-b7de7754d6db8740a1ae8e31eefc6c1a9a15e694db2888154f5aad1f6a9093ba.scope - libcontainer container b7de7754d6db8740a1ae8e31eefc6c1a9a15e694db2888154f5aad1f6a9093ba. 
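Editor's note: the containerd records in this log carry logfmt-style key="value" fields (time, level, msg, error). A rough sketch for splitting those fields apart, assuming only the simple \" escaping seen here (FIELD and parse_containerd_line are our names, not containerd APIs):

    import re

    # Split a containerd log payload into its key="value" / key=value fields.
    # Handles only the simple escaping (\") seen in the lines above.
    FIELD = re.compile(r'(\w+)=("(?:[^"\\]|\\.)*"|\S+)')

    def parse_containerd_line(payload: str) -> dict[str, str]:
        fields = {}
        for key, value in FIELD.findall(payload):
            if value.startswith('"') and value.endswith('"'):
                value = value[1:-1].replace('\\"', '"')
            fields[key] = value
        return fields

    sample = 'time="2026-01-14T23:44:21.796437585Z" level=info msg="PullImage \\"ghcr.io/flatcar/calico/typha:v3.30.4\\""'
    print(parse_containerd_line(sample)["msg"])
    # PullImage "ghcr.io/flatcar/calico/typha:v3.30.4"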
Jan 14 23:44:21.916000 audit: BPF prog-id=161 op=LOAD Jan 14 23:44:21.916000 audit[3444]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3320 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:21.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646537373534643664623837343061316165386533316565666336 Jan 14 23:44:21.916000 audit: BPF prog-id=162 op=LOAD Jan 14 23:44:21.916000 audit[3444]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3320 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:21.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646537373534643664623837343061316165386533316565666336 Jan 14 23:44:21.916000 audit: BPF prog-id=162 op=UNLOAD Jan 14 23:44:21.916000 audit[3444]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3320 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:21.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646537373534643664623837343061316165386533316565666336 Jan 14 23:44:21.916000 audit: BPF prog-id=161 op=UNLOAD Jan 14 23:44:21.916000 audit[3444]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3320 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:21.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646537373534643664623837343061316165386533316565666336 Jan 14 23:44:21.916000 audit: BPF prog-id=163 op=LOAD Jan 14 23:44:21.916000 audit[3444]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3320 pid=3444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:21.916000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237646537373534643664623837343061316165386533316565666336 Jan 14 23:44:21.940217 containerd[1618]: time="2026-01-14T23:44:21.940078624Z" level=info msg="StartContainer for 
\"b7de7754d6db8740a1ae8e31eefc6c1a9a15e694db2888154f5aad1f6a9093ba\" returns successfully" Jan 14 23:44:21.959035 systemd[1]: cri-containerd-b7de7754d6db8740a1ae8e31eefc6c1a9a15e694db2888154f5aad1f6a9093ba.scope: Deactivated successfully. Jan 14 23:44:21.961000 audit: BPF prog-id=163 op=UNLOAD Jan 14 23:44:21.964554 containerd[1618]: time="2026-01-14T23:44:21.964500257Z" level=info msg="received container exit event container_id:\"b7de7754d6db8740a1ae8e31eefc6c1a9a15e694db2888154f5aad1f6a9093ba\" id:\"b7de7754d6db8740a1ae8e31eefc6c1a9a15e694db2888154f5aad1f6a9093ba\" pid:3455 exited_at:{seconds:1768434261 nanos:963977789}" Jan 14 23:44:21.988979 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7de7754d6db8740a1ae8e31eefc6c1a9a15e694db2888154f5aad1f6a9093ba-rootfs.mount: Deactivated successfully. Jan 14 23:44:23.117101 kubelet[2853]: E0114 23:44:23.116465 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:44:24.295184 containerd[1618]: time="2026-01-14T23:44:24.295086766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:24.297425 containerd[1618]: time="2026-01-14T23:44:24.297181867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 14 23:44:24.299141 containerd[1618]: time="2026-01-14T23:44:24.298850388Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:24.301687 containerd[1618]: time="2026-01-14T23:44:24.301644603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:24.302470 containerd[1618]: time="2026-01-14T23:44:24.302376678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.505893851s" Jan 14 23:44:24.302624 containerd[1618]: time="2026-01-14T23:44:24.302604729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 14 23:44:24.305669 containerd[1618]: time="2026-01-14T23:44:24.305624235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 23:44:24.323498 containerd[1618]: time="2026-01-14T23:44:24.323419936Z" level=info msg="CreateContainer within sandbox \"55c4acaaec454dcef11dd98afe34b542f9abc1bed9322ca24ca7f2966a24dc3a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 23:44:24.336734 containerd[1618]: time="2026-01-14T23:44:24.336685578Z" level=info msg="Container edc06abd5f9eaea4b11a7dea4d1389205ba46d3bae724c993a15e5f1d9aa701f: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:44:24.347657 containerd[1618]: time="2026-01-14T23:44:24.347533662Z" level=info 
msg="CreateContainer within sandbox \"55c4acaaec454dcef11dd98afe34b542f9abc1bed9322ca24ca7f2966a24dc3a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"edc06abd5f9eaea4b11a7dea4d1389205ba46d3bae724c993a15e5f1d9aa701f\"" Jan 14 23:44:24.351794 containerd[1618]: time="2026-01-14T23:44:24.351728665Z" level=info msg="StartContainer for \"edc06abd5f9eaea4b11a7dea4d1389205ba46d3bae724c993a15e5f1d9aa701f\"" Jan 14 23:44:24.353523 containerd[1618]: time="2026-01-14T23:44:24.353444028Z" level=info msg="connecting to shim edc06abd5f9eaea4b11a7dea4d1389205ba46d3bae724c993a15e5f1d9aa701f" address="unix:///run/containerd/s/b3697db5a028da4b58d66228b3b3763967e72d8f7fec47b36907a042b689ac54" protocol=ttrpc version=3 Jan 14 23:44:24.377695 systemd[1]: Started cri-containerd-edc06abd5f9eaea4b11a7dea4d1389205ba46d3bae724c993a15e5f1d9aa701f.scope - libcontainer container edc06abd5f9eaea4b11a7dea4d1389205ba46d3bae724c993a15e5f1d9aa701f. Jan 14 23:44:24.394000 audit: BPF prog-id=164 op=LOAD Jan 14 23:44:24.395000 audit: BPF prog-id=165 op=LOAD Jan 14 23:44:24.395000 audit[3501]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=3397 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:24.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564633036616264356639656165613462313161376465613464313338 Jan 14 23:44:24.395000 audit: BPF prog-id=165 op=UNLOAD Jan 14 23:44:24.395000 audit[3501]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3397 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:24.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564633036616264356639656165613462313161376465613464313338 Jan 14 23:44:24.395000 audit: BPF prog-id=166 op=LOAD Jan 14 23:44:24.395000 audit[3501]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3397 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:24.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564633036616264356639656165613462313161376465613464313338 Jan 14 23:44:24.395000 audit: BPF prog-id=167 op=LOAD Jan 14 23:44:24.395000 audit[3501]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3397 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:24.395000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564633036616264356639656165613462313161376465613464313338 Jan 14 23:44:24.395000 audit: BPF prog-id=167 op=UNLOAD Jan 14 23:44:24.395000 audit[3501]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3397 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:24.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564633036616264356639656165613462313161376465613464313338 Jan 14 23:44:24.395000 audit: BPF prog-id=166 op=UNLOAD Jan 14 23:44:24.395000 audit[3501]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3397 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:24.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564633036616264356639656165613462313161376465613464313338 Jan 14 23:44:24.395000 audit: BPF prog-id=168 op=LOAD Jan 14 23:44:24.395000 audit[3501]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=3397 pid=3501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:24.395000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564633036616264356639656165613462313161376465613464313338 Jan 14 23:44:24.437191 containerd[1618]: time="2026-01-14T23:44:24.437110075Z" level=info msg="StartContainer for \"edc06abd5f9eaea4b11a7dea4d1389205ba46d3bae724c993a15e5f1d9aa701f\" returns successfully" Jan 14 23:44:25.116156 kubelet[2853]: E0114 23:44:25.116023 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:44:26.324340 kubelet[2853]: I0114 23:44:26.324236 2853 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 23:44:26.545913 kubelet[2853]: I0114 23:44:26.545833 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-86448cd66-mtf2t" podStartSLOduration=4.075025924 podStartE2EDuration="7.545814845s" podCreationTimestamp="2026-01-14 23:44:19 +0000 UTC" firstStartedPulling="2026-01-14 23:44:20.832768614 +0000 UTC m=+27.830296721" lastFinishedPulling="2026-01-14 23:44:24.303557535 +0000 UTC m=+31.301085642" observedRunningTime="2026-01-14 23:44:25.340200785 +0000 UTC 
m=+32.337728932" watchObservedRunningTime="2026-01-14 23:44:26.545814845 +0000 UTC m=+33.543342952" Jan 14 23:44:26.571000 audit[3536]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3536 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:26.573977 kernel: kauditd_printk_skb: 50 callbacks suppressed Jan 14 23:44:26.574068 kernel: audit: type=1325 audit(1768434266.571:571): table=filter:117 family=2 entries=21 op=nft_register_rule pid=3536 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:26.571000 audit[3536]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffbe40820 a2=0 a3=1 items=0 ppid=2990 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:26.578543 kernel: audit: type=1300 audit(1768434266.571:571): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffbe40820 a2=0 a3=1 items=0 ppid=2990 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:26.571000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:26.579901 kernel: audit: type=1327 audit(1768434266.571:571): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:26.579000 audit[3536]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3536 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:26.581963 kernel: audit: type=1325 audit(1768434266.579:572): table=nat:118 family=2 entries=19 op=nft_register_chain pid=3536 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:26.579000 audit[3536]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffffbe40820 a2=0 a3=1 items=0 ppid=2990 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:26.584636 kernel: audit: type=1300 audit(1768434266.579:572): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffffbe40820 a2=0 a3=1 items=0 ppid=2990 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:26.579000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:26.586118 kernel: audit: type=1327 audit(1768434266.579:572): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:27.116295 kubelet[2853]: E0114 23:44:27.116118 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:44:27.594030 containerd[1618]: time="2026-01-14T23:44:27.593902854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:27.595564 containerd[1618]: time="2026-01-14T23:44:27.595272674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 14 23:44:27.597353 containerd[1618]: time="2026-01-14T23:44:27.596868384Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:27.599842 containerd[1618]: time="2026-01-14T23:44:27.599774032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:27.600916 containerd[1618]: time="2026-01-14T23:44:27.600871280Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.295199282s" Jan 14 23:44:27.601057 containerd[1618]: time="2026-01-14T23:44:27.601040047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 14 23:44:27.610513 containerd[1618]: time="2026-01-14T23:44:27.609371653Z" level=info msg="CreateContainer within sandbox \"9421edb3f1366ab81e15924447e50931c85983902c49be7efdf74bcd120ae987\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 23:44:27.622565 containerd[1618]: time="2026-01-14T23:44:27.622495910Z" level=info msg="Container 2a915aabd4853eab6d6589da84d8634eb89bb30582b82aee0943f415e9fe2346: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:44:27.630533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1258820455.mount: Deactivated successfully. Jan 14 23:44:27.636713 containerd[1618]: time="2026-01-14T23:44:27.636598850Z" level=info msg="CreateContainer within sandbox \"9421edb3f1366ab81e15924447e50931c85983902c49be7efdf74bcd120ae987\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2a915aabd4853eab6d6589da84d8634eb89bb30582b82aee0943f415e9fe2346\"" Jan 14 23:44:27.638516 containerd[1618]: time="2026-01-14T23:44:27.638460772Z" level=info msg="StartContainer for \"2a915aabd4853eab6d6589da84d8634eb89bb30582b82aee0943f415e9fe2346\"" Jan 14 23:44:27.641603 containerd[1618]: time="2026-01-14T23:44:27.641513546Z" level=info msg="connecting to shim 2a915aabd4853eab6d6589da84d8634eb89bb30582b82aee0943f415e9fe2346" address="unix:///run/containerd/s/0f15a453895f99770481fffd81536da61dbb26e3f8bf83eb8325eeec11deb728" protocol=ttrpc version=3 Jan 14 23:44:27.675793 systemd[1]: Started cri-containerd-2a915aabd4853eab6d6589da84d8634eb89bb30582b82aee0943f415e9fe2346.scope - libcontainer container 2a915aabd4853eab6d6589da84d8634eb89bb30582b82aee0943f415e9fe2346. 
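Editor's note: the "Observed pod startup duration" record above for calico-typha-86448cd66-mtf2t reports both an end-to-end figure and an SLO figure, and the numbers are consistent with the SLO duration being the end-to-end duration minus the image-pull window. A quick re-derivation from the timestamps in that record (rounded to microseconds):

    from datetime import datetime, timezone

    # Timestamps copied from the pod_startup_latency_tracker record above.
    created    = datetime(2026, 1, 14, 23, 44, 19, 0,      tzinfo=timezone.utc)
    first_pull = datetime(2026, 1, 14, 23, 44, 20, 832769, tzinfo=timezone.utc)
    last_pull  = datetime(2026, 1, 14, 23, 44, 24, 303558, tzinfo=timezone.utc)
    running    = datetime(2026, 1, 14, 23, 44, 26, 545815, tzinfo=timezone.utc)

    e2e  = (running - created).total_seconds()
    pull = (last_pull - first_pull).total_seconds()
    print(f"e2e={e2e:.6f}s  e2e-minus-pull={e2e - pull:.6f}s")
    # e2e=7.545815s  e2e-minus-pull=4.075026s
    # matching podStartE2EDuration="7.545814845s" and podStartSLOduration=4.075025924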
Jan 14 23:44:27.748000 audit: BPF prog-id=169 op=LOAD Jan 14 23:44:27.748000 audit[3545]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=3320 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:27.754658 kernel: audit: type=1334 audit(1768434267.748:573): prog-id=169 op=LOAD Jan 14 23:44:27.754932 kernel: audit: type=1300 audit(1768434267.748:573): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=3320 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:27.754954 kernel: audit: type=1327 audit(1768434267.748:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261393135616162643438353365616236643635383964613834643836 Jan 14 23:44:27.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261393135616162643438353365616236643635383964613834643836 Jan 14 23:44:27.758439 kernel: audit: type=1334 audit(1768434267.749:574): prog-id=170 op=LOAD Jan 14 23:44:27.749000 audit: BPF prog-id=170 op=LOAD Jan 14 23:44:27.749000 audit[3545]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=3320 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:27.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261393135616162643438353365616236643635383964613834643836 Jan 14 23:44:27.749000 audit: BPF prog-id=170 op=UNLOAD Jan 14 23:44:27.749000 audit[3545]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3320 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:27.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261393135616162643438353365616236643635383964613834643836 Jan 14 23:44:27.749000 audit: BPF prog-id=169 op=UNLOAD Jan 14 23:44:27.749000 audit[3545]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3320 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:27.749000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261393135616162643438353365616236643635383964613834643836 Jan 14 23:44:27.749000 audit: BPF prog-id=171 op=LOAD Jan 14 23:44:27.749000 audit[3545]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=3320 pid=3545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:27.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261393135616162643438353365616236643635383964613834643836 Jan 14 23:44:27.789133 containerd[1618]: time="2026-01-14T23:44:27.789062830Z" level=info msg="StartContainer for \"2a915aabd4853eab6d6589da84d8634eb89bb30582b82aee0943f415e9fe2346\" returns successfully" Jan 14 23:44:28.315800 containerd[1618]: time="2026-01-14T23:44:28.315736328Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 23:44:28.319950 systemd[1]: cri-containerd-2a915aabd4853eab6d6589da84d8634eb89bb30582b82aee0943f415e9fe2346.scope: Deactivated successfully. Jan 14 23:44:28.320652 systemd[1]: cri-containerd-2a915aabd4853eab6d6589da84d8634eb89bb30582b82aee0943f415e9fe2346.scope: Consumed 539ms CPU time, 186.9M memory peak, 165.9M written to disk. Jan 14 23:44:28.325433 containerd[1618]: time="2026-01-14T23:44:28.325169851Z" level=info msg="received container exit event container_id:\"2a915aabd4853eab6d6589da84d8634eb89bb30582b82aee0943f415e9fe2346\" id:\"2a915aabd4853eab6d6589da84d8634eb89bb30582b82aee0943f415e9fe2346\" pid:3559 exited_at:{seconds:1768434268 nanos:324794715}" Jan 14 23:44:28.324000 audit: BPF prog-id=171 op=UNLOAD Jan 14 23:44:28.359502 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a915aabd4853eab6d6589da84d8634eb89bb30582b82aee0943f415e9fe2346-rootfs.mount: Deactivated successfully. Jan 14 23:44:28.375132 kubelet[2853]: I0114 23:44:28.375099 2853 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 14 23:44:28.535592 systemd[1]: Created slice kubepods-burstable-poda5515c41_dbb8_4ede_a0b7_2e7883a18a57.slice - libcontainer container kubepods-burstable-poda5515c41_dbb8_4ede_a0b7_2e7883a18a57.slice. Jan 14 23:44:28.552292 systemd[1]: Created slice kubepods-besteffort-pod1b0b15bb_ef3d_4cc7_a85f_a12ae2d1363a.slice - libcontainer container kubepods-besteffort-pod1b0b15bb_ef3d_4cc7_a85f_a12ae2d1363a.slice. Jan 14 23:44:28.573220 systemd[1]: Created slice kubepods-besteffort-pod7685ab64_5fe9_46e5_bd88_429276e37200.slice - libcontainer container kubepods-besteffort-pod7685ab64_5fe9_46e5_bd88_429276e37200.slice. 
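Editor's note: the "no network config found in /etc/cni/net.d: cni plugin not initialized" error above is what containerd reports until a CNI network configuration file exists in /etc/cni/net.d; on this node the install-cni container (2a915aab… above) is what eventually writes Calico's real config, typically 10-calico.conflist. Purely to illustrate the file shape containerd is waiting for, and not Calico's actual configuration, a minimal *.conflist looks roughly like this (all names below are hypothetical):

    import json

    # Hypothetical minimal CNI conflist, only to show the structure containerd
    # loads from /etc/cni/net.d; the real file is generated by the install-cni
    # container once calico-node is running.
    conflist = {
        "name": "example-pod-network",   # hypothetical network name
        "cniVersion": "0.4.0",
        "plugins": [
            {"type": "calico"},          # plugin binary expected under /opt/cni/bin
        ],
    }
    print(json.dumps(conflist, indent=2))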
Jan 14 23:44:28.579781 kubelet[2853]: I0114 23:44:28.579574 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4htmd\" (UniqueName: \"kubernetes.io/projected/0595d5fe-8d8f-4e95-8e85-0c22f59bd781-kube-api-access-4htmd\") pod \"calico-kube-controllers-7bfc4b59c4-h9rl4\" (UID: \"0595d5fe-8d8f-4e95-8e85-0c22f59bd781\") " pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" Jan 14 23:44:28.579781 kubelet[2853]: I0114 23:44:28.579633 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a-calico-apiserver-certs\") pod \"calico-apiserver-84c479b4c5-8sg5s\" (UID: \"1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a\") " pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" Jan 14 23:44:28.579781 kubelet[2853]: I0114 23:44:28.579654 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5515c41-dbb8-4ede-a0b7-2e7883a18a57-config-volume\") pod \"coredns-66bc5c9577-k7q5h\" (UID: \"a5515c41-dbb8-4ede-a0b7-2e7883a18a57\") " pod="kube-system/coredns-66bc5c9577-k7q5h" Jan 14 23:44:28.579781 kubelet[2853]: I0114 23:44:28.579676 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/375f636a-b14c-4107-87ee-0c0815e9a9c0-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-mbwp8\" (UID: \"375f636a-b14c-4107-87ee-0c0815e9a9c0\") " pod="calico-system/goldmane-7c778bb748-mbwp8" Jan 14 23:44:28.579781 kubelet[2853]: I0114 23:44:28.579697 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n6hj\" (UniqueName: \"kubernetes.io/projected/1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a-kube-api-access-2n6hj\") pod \"calico-apiserver-84c479b4c5-8sg5s\" (UID: \"1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a\") " pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" Jan 14 23:44:28.579633 systemd[1]: Created slice kubepods-besteffort-pod375f636a_b14c_4107_87ee_0c0815e9a9c0.slice - libcontainer container kubepods-besteffort-pod375f636a_b14c_4107_87ee_0c0815e9a9c0.slice. 
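Editor's note: the kubepods-…​.slice units being created above follow the kubelet's systemd cgroup naming for burstable and best-effort pods: QoS class plus the pod UID with dashes replaced by underscores. A quick check against two UIDs from this log (pod_slice_name is our helper name; guaranteed-QoS pods use a slightly different form and are not covered here):

    # Rebuild the systemd slice name the kubelet uses for a burstable or
    # best-effort pod: dashes in the pod UID become underscores in the unit name.
    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    print(pod_slice_name("besteffort", "375f636a-b14c-4107-87ee-0c0815e9a9c0"))
    # kubepods-besteffort-pod375f636a_b14c_4107_87ee_0c0815e9a9c0.slice
    print(pod_slice_name("burstable", "a5515c41-dbb8-4ede-a0b7-2e7883a18a57"))
    # kubepods-burstable-poda5515c41_dbb8_4ede_a0b7_2e7883a18a57.slice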
Jan 14 23:44:28.580087 kubelet[2853]: I0114 23:44:28.579714 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t768h\" (UniqueName: \"kubernetes.io/projected/7685ab64-5fe9-46e5-bd88-429276e37200-kube-api-access-t768h\") pod \"whisker-8596d6f85d-p8jxc\" (UID: \"7685ab64-5fe9-46e5-bd88-429276e37200\") " pod="calico-system/whisker-8596d6f85d-p8jxc" Jan 14 23:44:28.580087 kubelet[2853]: I0114 23:44:28.579732 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp5f6\" (UniqueName: \"kubernetes.io/projected/375f636a-b14c-4107-87ee-0c0815e9a9c0-kube-api-access-jp5f6\") pod \"goldmane-7c778bb748-mbwp8\" (UID: \"375f636a-b14c-4107-87ee-0c0815e9a9c0\") " pod="calico-system/goldmane-7c778bb748-mbwp8" Jan 14 23:44:28.580087 kubelet[2853]: I0114 23:44:28.579754 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7685ab64-5fe9-46e5-bd88-429276e37200-whisker-backend-key-pair\") pod \"whisker-8596d6f85d-p8jxc\" (UID: \"7685ab64-5fe9-46e5-bd88-429276e37200\") " pod="calico-system/whisker-8596d6f85d-p8jxc" Jan 14 23:44:28.580087 kubelet[2853]: I0114 23:44:28.579772 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7c662993-1a64-424b-835f-c3688665f281-calico-apiserver-certs\") pod \"calico-apiserver-84c479b4c5-75mhg\" (UID: \"7c662993-1a64-424b-835f-c3688665f281\") " pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" Jan 14 23:44:28.580087 kubelet[2853]: I0114 23:44:28.579789 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wgzt\" (UniqueName: \"kubernetes.io/projected/a5515c41-dbb8-4ede-a0b7-2e7883a18a57-kube-api-access-5wgzt\") pod \"coredns-66bc5c9577-k7q5h\" (UID: \"a5515c41-dbb8-4ede-a0b7-2e7883a18a57\") " pod="kube-system/coredns-66bc5c9577-k7q5h" Jan 14 23:44:28.580214 kubelet[2853]: I0114 23:44:28.579807 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52cv5\" (UniqueName: \"kubernetes.io/projected/7c662993-1a64-424b-835f-c3688665f281-kube-api-access-52cv5\") pod \"calico-apiserver-84c479b4c5-75mhg\" (UID: \"7c662993-1a64-424b-835f-c3688665f281\") " pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" Jan 14 23:44:28.580214 kubelet[2853]: I0114 23:44:28.579836 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b935d12e-83f9-44f9-b2c0-7537aed4125a-config-volume\") pod \"coredns-66bc5c9577-lppt6\" (UID: \"b935d12e-83f9-44f9-b2c0-7537aed4125a\") " pod="kube-system/coredns-66bc5c9577-lppt6" Jan 14 23:44:28.580214 kubelet[2853]: I0114 23:44:28.579852 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhzjk\" (UniqueName: \"kubernetes.io/projected/b935d12e-83f9-44f9-b2c0-7537aed4125a-kube-api-access-dhzjk\") pod \"coredns-66bc5c9577-lppt6\" (UID: \"b935d12e-83f9-44f9-b2c0-7537aed4125a\") " pod="kube-system/coredns-66bc5c9577-lppt6" Jan 14 23:44:28.580214 kubelet[2853]: I0114 23:44:28.579869 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7685ab64-5fe9-46e5-bd88-429276e37200-whisker-ca-bundle\") pod \"whisker-8596d6f85d-p8jxc\" (UID: \"7685ab64-5fe9-46e5-bd88-429276e37200\") " pod="calico-system/whisker-8596d6f85d-p8jxc" Jan 14 23:44:28.580214 kubelet[2853]: I0114 23:44:28.579885 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/375f636a-b14c-4107-87ee-0c0815e9a9c0-config\") pod \"goldmane-7c778bb748-mbwp8\" (UID: \"375f636a-b14c-4107-87ee-0c0815e9a9c0\") " pod="calico-system/goldmane-7c778bb748-mbwp8" Jan 14 23:44:28.580324 kubelet[2853]: I0114 23:44:28.579904 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0595d5fe-8d8f-4e95-8e85-0c22f59bd781-tigera-ca-bundle\") pod \"calico-kube-controllers-7bfc4b59c4-h9rl4\" (UID: \"0595d5fe-8d8f-4e95-8e85-0c22f59bd781\") " pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" Jan 14 23:44:28.580324 kubelet[2853]: I0114 23:44:28.579922 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/375f636a-b14c-4107-87ee-0c0815e9a9c0-goldmane-key-pair\") pod \"goldmane-7c778bb748-mbwp8\" (UID: \"375f636a-b14c-4107-87ee-0c0815e9a9c0\") " pod="calico-system/goldmane-7c778bb748-mbwp8" Jan 14 23:44:28.591122 systemd[1]: Created slice kubepods-besteffort-pod7c662993_1a64_424b_835f_c3688665f281.slice - libcontainer container kubepods-besteffort-pod7c662993_1a64_424b_835f_c3688665f281.slice. Jan 14 23:44:28.599628 systemd[1]: Created slice kubepods-burstable-podb935d12e_83f9_44f9_b2c0_7537aed4125a.slice - libcontainer container kubepods-burstable-podb935d12e_83f9_44f9_b2c0_7537aed4125a.slice. Jan 14 23:44:28.606578 systemd[1]: Created slice kubepods-besteffort-pod0595d5fe_8d8f_4e95_8e85_0c22f59bd781.slice - libcontainer container kubepods-besteffort-pod0595d5fe_8d8f_4e95_8e85_0c22f59bd781.slice. 
Jan 14 23:44:28.854636 containerd[1618]: time="2026-01-14T23:44:28.854379222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k7q5h,Uid:a5515c41-dbb8-4ede-a0b7-2e7883a18a57,Namespace:kube-system,Attempt:0,}" Jan 14 23:44:28.870837 containerd[1618]: time="2026-01-14T23:44:28.870608275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84c479b4c5-8sg5s,Uid:1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a,Namespace:calico-apiserver,Attempt:0,}" Jan 14 23:44:28.882243 containerd[1618]: time="2026-01-14T23:44:28.882199129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8596d6f85d-p8jxc,Uid:7685ab64-5fe9-46e5-bd88-429276e37200,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:28.889049 containerd[1618]: time="2026-01-14T23:44:28.889011020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mbwp8,Uid:375f636a-b14c-4107-87ee-0c0815e9a9c0,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:28.897838 containerd[1618]: time="2026-01-14T23:44:28.897797834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84c479b4c5-75mhg,Uid:7c662993-1a64-424b-835f-c3688665f281,Namespace:calico-apiserver,Attempt:0,}" Jan 14 23:44:28.906045 containerd[1618]: time="2026-01-14T23:44:28.905994064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lppt6,Uid:b935d12e-83f9-44f9-b2c0-7537aed4125a,Namespace:kube-system,Attempt:0,}" Jan 14 23:44:28.914297 containerd[1618]: time="2026-01-14T23:44:28.914257096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfc4b59c4-h9rl4,Uid:0595d5fe-8d8f-4e95-8e85-0c22f59bd781,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:29.056369 containerd[1618]: time="2026-01-14T23:44:29.056303648Z" level=error msg="Failed to destroy network for sandbox \"1018c658eaac2f01bbf3378dd6af3d3462cbeaa7777cc0ac258ac51106285bc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.064488 containerd[1618]: time="2026-01-14T23:44:29.064255298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8596d6f85d-p8jxc,Uid:7685ab64-5fe9-46e5-bd88-429276e37200,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1018c658eaac2f01bbf3378dd6af3d3462cbeaa7777cc0ac258ac51106285bc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.064658 containerd[1618]: time="2026-01-14T23:44:29.064541350Z" level=error msg="Failed to destroy network for sandbox \"5e1487d5a8b885129ba3452c72e4e8f9907114d665b7adbe0afc60b7a55cbfae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.065024 kubelet[2853]: E0114 23:44:29.064789 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1018c658eaac2f01bbf3378dd6af3d3462cbeaa7777cc0ac258ac51106285bc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.065024 
kubelet[2853]: E0114 23:44:29.064964 2853 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1018c658eaac2f01bbf3378dd6af3d3462cbeaa7777cc0ac258ac51106285bc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8596d6f85d-p8jxc" Jan 14 23:44:29.065024 kubelet[2853]: E0114 23:44:29.064991 2853 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1018c658eaac2f01bbf3378dd6af3d3462cbeaa7777cc0ac258ac51106285bc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8596d6f85d-p8jxc" Jan 14 23:44:29.065385 kubelet[2853]: E0114 23:44:29.065249 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8596d6f85d-p8jxc_calico-system(7685ab64-5fe9-46e5-bd88-429276e37200)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8596d6f85d-p8jxc_calico-system(7685ab64-5fe9-46e5-bd88-429276e37200)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1018c658eaac2f01bbf3378dd6af3d3462cbeaa7777cc0ac258ac51106285bc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8596d6f85d-p8jxc" podUID="7685ab64-5fe9-46e5-bd88-429276e37200" Jan 14 23:44:29.070991 containerd[1618]: time="2026-01-14T23:44:29.070932294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k7q5h,Uid:a5515c41-dbb8-4ede-a0b7-2e7883a18a57,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e1487d5a8b885129ba3452c72e4e8f9907114d665b7adbe0afc60b7a55cbfae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.072295 kubelet[2853]: E0114 23:44:29.071860 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e1487d5a8b885129ba3452c72e4e8f9907114d665b7adbe0afc60b7a55cbfae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.072295 kubelet[2853]: E0114 23:44:29.071946 2853 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e1487d5a8b885129ba3452c72e4e8f9907114d665b7adbe0afc60b7a55cbfae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-k7q5h" Jan 14 23:44:29.072295 kubelet[2853]: E0114 23:44:29.071967 2853 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e1487d5a8b885129ba3452c72e4e8f9907114d665b7adbe0afc60b7a55cbfae\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-k7q5h" Jan 14 23:44:29.072524 kubelet[2853]: E0114 23:44:29.072034 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-k7q5h_kube-system(a5515c41-dbb8-4ede-a0b7-2e7883a18a57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-k7q5h_kube-system(a5515c41-dbb8-4ede-a0b7-2e7883a18a57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e1487d5a8b885129ba3452c72e4e8f9907114d665b7adbe0afc60b7a55cbfae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-k7q5h" podUID="a5515c41-dbb8-4ede-a0b7-2e7883a18a57" Jan 14 23:44:29.097776 containerd[1618]: time="2026-01-14T23:44:29.097716964Z" level=error msg="Failed to destroy network for sandbox \"553f6a34a9a4ca6156a62daa31a48965755feeab54f686fb9728e223a0386614\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.098570 containerd[1618]: time="2026-01-14T23:44:29.098454675Z" level=error msg="Failed to destroy network for sandbox \"12c1cdd2a78a7b89facb5f2e54ac33aea6c864c7375b489e0087ec161040ed49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.101324 containerd[1618]: time="2026-01-14T23:44:29.101280032Z" level=error msg="Failed to destroy network for sandbox \"df682c96eb31eb95ad12a528181d9ee284cab974e2d12dcb46f49b88d67fedfc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.101700 containerd[1618]: time="2026-01-14T23:44:29.101293392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lppt6,Uid:b935d12e-83f9-44f9-b2c0-7537aed4125a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"553f6a34a9a4ca6156a62daa31a48965755feeab54f686fb9728e223a0386614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.102149 kubelet[2853]: E0114 23:44:29.102082 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"553f6a34a9a4ca6156a62daa31a48965755feeab54f686fb9728e223a0386614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.102149 kubelet[2853]: E0114 23:44:29.102147 2853 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"553f6a34a9a4ca6156a62daa31a48965755feeab54f686fb9728e223a0386614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lppt6" Jan 14 23:44:29.102544 kubelet[2853]: E0114 23:44:29.102166 2853 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"553f6a34a9a4ca6156a62daa31a48965755feeab54f686fb9728e223a0386614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lppt6" Jan 14 23:44:29.102544 kubelet[2853]: E0114 23:44:29.102219 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-lppt6_kube-system(b935d12e-83f9-44f9-b2c0-7537aed4125a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-lppt6_kube-system(b935d12e-83f9-44f9-b2c0-7537aed4125a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"553f6a34a9a4ca6156a62daa31a48965755feeab54f686fb9728e223a0386614\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-lppt6" podUID="b935d12e-83f9-44f9-b2c0-7537aed4125a" Jan 14 23:44:29.105201 containerd[1618]: time="2026-01-14T23:44:29.105034707Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84c479b4c5-8sg5s,Uid:1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"12c1cdd2a78a7b89facb5f2e54ac33aea6c864c7375b489e0087ec161040ed49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.105676 kubelet[2853]: E0114 23:44:29.105615 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12c1cdd2a78a7b89facb5f2e54ac33aea6c864c7375b489e0087ec161040ed49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.106125 kubelet[2853]: E0114 23:44:29.105685 2853 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12c1cdd2a78a7b89facb5f2e54ac33aea6c864c7375b489e0087ec161040ed49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" Jan 14 23:44:29.106125 kubelet[2853]: E0114 23:44:29.105707 2853 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12c1cdd2a78a7b89facb5f2e54ac33aea6c864c7375b489e0087ec161040ed49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" Jan 14 23:44:29.106125 kubelet[2853]: E0114 23:44:29.105763 2853 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84c479b4c5-8sg5s_calico-apiserver(1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84c479b4c5-8sg5s_calico-apiserver(1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12c1cdd2a78a7b89facb5f2e54ac33aea6c864c7375b489e0087ec161040ed49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" podUID="1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a" Jan 14 23:44:29.107634 containerd[1618]: time="2026-01-14T23:44:29.107513570Z" level=error msg="Failed to destroy network for sandbox \"e0964f2be04ed98346468807bc5758685b9467403feddf23a1f7d7ee672c3bdb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.109863 containerd[1618]: time="2026-01-14T23:44:29.109714101Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfc4b59c4-h9rl4,Uid:0595d5fe-8d8f-4e95-8e85-0c22f59bd781,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"df682c96eb31eb95ad12a528181d9ee284cab974e2d12dcb46f49b88d67fedfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.110237 kubelet[2853]: E0114 23:44:29.110191 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df682c96eb31eb95ad12a528181d9ee284cab974e2d12dcb46f49b88d67fedfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.110490 kubelet[2853]: E0114 23:44:29.110443 2853 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df682c96eb31eb95ad12a528181d9ee284cab974e2d12dcb46f49b88d67fedfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" Jan 14 23:44:29.110591 kubelet[2853]: E0114 23:44:29.110475 2853 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df682c96eb31eb95ad12a528181d9ee284cab974e2d12dcb46f49b88d67fedfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" Jan 14 23:44:29.110824 kubelet[2853]: E0114 23:44:29.110776 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bfc4b59c4-h9rl4_calico-system(0595d5fe-8d8f-4e95-8e85-0c22f59bd781)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-7bfc4b59c4-h9rl4_calico-system(0595d5fe-8d8f-4e95-8e85-0c22f59bd781)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df682c96eb31eb95ad12a528181d9ee284cab974e2d12dcb46f49b88d67fedfc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" podUID="0595d5fe-8d8f-4e95-8e85-0c22f59bd781" Jan 14 23:44:29.113403 containerd[1618]: time="2026-01-14T23:44:29.113095121Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mbwp8,Uid:375f636a-b14c-4107-87ee-0c0815e9a9c0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0964f2be04ed98346468807bc5758685b9467403feddf23a1f7d7ee672c3bdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.114422 kubelet[2853]: E0114 23:44:29.114223 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0964f2be04ed98346468807bc5758685b9467403feddf23a1f7d7ee672c3bdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.114422 kubelet[2853]: E0114 23:44:29.114274 2853 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0964f2be04ed98346468807bc5758685b9467403feddf23a1f7d7ee672c3bdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-mbwp8" Jan 14 23:44:29.114422 kubelet[2853]: E0114 23:44:29.114292 2853 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0964f2be04ed98346468807bc5758685b9467403feddf23a1f7d7ee672c3bdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-mbwp8" Jan 14 23:44:29.114541 kubelet[2853]: E0114 23:44:29.114340 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-mbwp8_calico-system(375f636a-b14c-4107-87ee-0c0815e9a9c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-mbwp8_calico-system(375f636a-b14c-4107-87ee-0c0815e9a9c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0964f2be04ed98346468807bc5758685b9467403feddf23a1f7d7ee672c3bdb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-mbwp8" podUID="375f636a-b14c-4107-87ee-0c0815e9a9c0" Jan 14 23:44:29.127148 containerd[1618]: time="2026-01-14T23:44:29.127105062Z" level=error msg="Failed to destroy network for sandbox \"7f0f9f447866f2290a2f693eac75a94f93ad3042bd3c224202b918b09a0ff06e\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.132314 containerd[1618]: time="2026-01-14T23:44:29.132257596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84c479b4c5-75mhg,Uid:7c662993-1a64-424b-835f-c3688665f281,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f0f9f447866f2290a2f693eac75a94f93ad3042bd3c224202b918b09a0ff06e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.132833 systemd[1]: Created slice kubepods-besteffort-pod35e31491_f658_475f_aa1a_411d37af2884.slice - libcontainer container kubepods-besteffort-pod35e31491_f658_475f_aa1a_411d37af2884.slice. Jan 14 23:44:29.133948 kubelet[2853]: E0114 23:44:29.133904 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f0f9f447866f2290a2f693eac75a94f93ad3042bd3c224202b918b09a0ff06e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.134219 kubelet[2853]: E0114 23:44:29.134147 2853 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f0f9f447866f2290a2f693eac75a94f93ad3042bd3c224202b918b09a0ff06e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" Jan 14 23:44:29.134219 kubelet[2853]: E0114 23:44:29.134173 2853 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f0f9f447866f2290a2f693eac75a94f93ad3042bd3c224202b918b09a0ff06e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" Jan 14 23:44:29.134489 kubelet[2853]: E0114 23:44:29.134381 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84c479b4c5-75mhg_calico-apiserver(7c662993-1a64-424b-835f-c3688665f281)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84c479b4c5-75mhg_calico-apiserver(7c662993-1a64-424b-835f-c3688665f281)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f0f9f447866f2290a2f693eac75a94f93ad3042bd3c224202b918b09a0ff06e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" podUID="7c662993-1a64-424b-835f-c3688665f281" Jan 14 23:44:29.140511 containerd[1618]: time="2026-01-14T23:44:29.140323570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7s4g4,Uid:35e31491-f658-475f-aa1a-411d37af2884,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:29.201948 containerd[1618]: 
time="2026-01-14T23:44:29.201883281Z" level=error msg="Failed to destroy network for sandbox \"029c7c333e2da1190eabdaa1211825a2de424fac7bab7c8e5fb0ede7a2c6c8cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.204531 containerd[1618]: time="2026-01-14T23:44:29.204388825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7s4g4,Uid:35e31491-f658-475f-aa1a-411d37af2884,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"029c7c333e2da1190eabdaa1211825a2de424fac7bab7c8e5fb0ede7a2c6c8cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.206495 kubelet[2853]: E0114 23:44:29.204733 2853 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"029c7c333e2da1190eabdaa1211825a2de424fac7bab7c8e5fb0ede7a2c6c8cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:29.206495 kubelet[2853]: E0114 23:44:29.204818 2853 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"029c7c333e2da1190eabdaa1211825a2de424fac7bab7c8e5fb0ede7a2c6c8cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7s4g4" Jan 14 23:44:29.206495 kubelet[2853]: E0114 23:44:29.204881 2853 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"029c7c333e2da1190eabdaa1211825a2de424fac7bab7c8e5fb0ede7a2c6c8cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7s4g4" Jan 14 23:44:29.206759 kubelet[2853]: E0114 23:44:29.205289 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7s4g4_calico-system(35e31491-f658-475f-aa1a-411d37af2884)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7s4g4_calico-system(35e31491-f658-475f-aa1a-411d37af2884)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"029c7c333e2da1190eabdaa1211825a2de424fac7bab7c8e5fb0ede7a2c6c8cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:44:29.348041 containerd[1618]: time="2026-01-14T23:44:29.347112779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 23:44:36.011652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4253984807.mount: Deactivated successfully. 
Jan 14 23:44:36.054998 containerd[1618]: time="2026-01-14T23:44:36.052732195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:36.054998 containerd[1618]: time="2026-01-14T23:44:36.054785426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 14 23:44:36.055811 containerd[1618]: time="2026-01-14T23:44:36.055770141Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:36.058410 containerd[1618]: time="2026-01-14T23:44:36.058331630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:36.059805 containerd[1618]: time="2026-01-14T23:44:36.059765840Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.71142097s" Jan 14 23:44:36.059890 containerd[1618]: time="2026-01-14T23:44:36.059818681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 14 23:44:36.081586 containerd[1618]: time="2026-01-14T23:44:36.081446835Z" level=info msg="CreateContainer within sandbox \"9421edb3f1366ab81e15924447e50931c85983902c49be7efdf74bcd120ae987\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 23:44:36.109602 containerd[1618]: time="2026-01-14T23:44:36.108622741Z" level=info msg="Container 1dc9f7181b6bb7dd519c31264d44401d76be44d1ec969b037f32d06a00b5df73: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:44:36.147611 containerd[1618]: time="2026-01-14T23:44:36.147551056Z" level=info msg="CreateContainer within sandbox \"9421edb3f1366ab81e15924447e50931c85983902c49be7efdf74bcd120ae987\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1dc9f7181b6bb7dd519c31264d44401d76be44d1ec969b037f32d06a00b5df73\"" Jan 14 23:44:36.149576 containerd[1618]: time="2026-01-14T23:44:36.149525845Z" level=info msg="StartContainer for \"1dc9f7181b6bb7dd519c31264d44401d76be44d1ec969b037f32d06a00b5df73\"" Jan 14 23:44:36.151610 containerd[1618]: time="2026-01-14T23:44:36.151577837Z" level=info msg="connecting to shim 1dc9f7181b6bb7dd519c31264d44401d76be44d1ec969b037f32d06a00b5df73" address="unix:///run/containerd/s/0f15a453895f99770481fffd81536da61dbb26e3f8bf83eb8325eeec11deb728" protocol=ttrpc version=3 Jan 14 23:44:36.200696 systemd[1]: Started cri-containerd-1dc9f7181b6bb7dd519c31264d44401d76be44d1ec969b037f32d06a00b5df73.scope - libcontainer container 1dc9f7181b6bb7dd519c31264d44401d76be44d1ec969b037f32d06a00b5df73. 
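containerd reports the calico/node pull as taking 6.71142097s; that figure can be roughly sanity-checked against the two time= fields bracketing the pull (the PullImage line at 23:44:29.347112779Z and the Pulled line at 23:44:36.059765840Z). A small Go sketch of that arithmetic, with both timestamps copied from the log; the small discrepancy is expected since containerd times the transfer internally rather than between log writes:

```go
// Sanity-check the "in 6.71142097s" figure against the containerd log timestamps.
package main

import (
	"fmt"
	"time"
)

func main() {
	start, err := time.Parse(time.RFC3339Nano, "2026-01-14T23:44:29.347112779Z") // PullImage logged
	if err != nil {
		panic(err)
	}
	end, err := time.Parse(time.RFC3339Nano, "2026-01-14T23:44:36.059765840Z") // Pulled image logged
	if err != nil {
		panic(err)
	}
	// Prints roughly 6.71s of wall clock, close to containerd's reported duration.
	fmt.Println("wall clock between the two log lines:", end.Sub(start))
}
```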
Jan 14 23:44:36.256944 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 14 23:44:36.257151 kernel: audit: type=1334 audit(1768434276.255:579): prog-id=172 op=LOAD Jan 14 23:44:36.255000 audit: BPF prog-id=172 op=LOAD Jan 14 23:44:36.255000 audit[3810]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3320 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:36.261091 kernel: audit: type=1300 audit(1768434276.255:579): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3320 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:36.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164633966373138316236626237646435313963333132363464343434 Jan 14 23:44:36.255000 audit: BPF prog-id=173 op=LOAD Jan 14 23:44:36.265508 kernel: audit: type=1327 audit(1768434276.255:579): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164633966373138316236626237646435313963333132363464343434 Jan 14 23:44:36.265603 kernel: audit: type=1334 audit(1768434276.255:580): prog-id=173 op=LOAD Jan 14 23:44:36.268576 kernel: audit: type=1300 audit(1768434276.255:580): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3320 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:36.255000 audit[3810]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3320 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:36.272702 kernel: audit: type=1327 audit(1768434276.255:580): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164633966373138316236626237646435313963333132363464343434 Jan 14 23:44:36.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164633966373138316236626237646435313963333132363464343434 Jan 14 23:44:36.255000 audit: BPF prog-id=173 op=UNLOAD Jan 14 23:44:36.255000 audit[3810]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3320 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:36.279436 kernel: audit: type=1334 audit(1768434276.255:581): prog-id=173 op=UNLOAD Jan 14 23:44:36.279549 kernel: audit: type=1300 
audit(1768434276.255:581): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3320 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:36.279586 kernel: audit: type=1327 audit(1768434276.255:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164633966373138316236626237646435313963333132363464343434 Jan 14 23:44:36.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164633966373138316236626237646435313963333132363464343434 Jan 14 23:44:36.255000 audit: BPF prog-id=172 op=UNLOAD Jan 14 23:44:36.281554 kernel: audit: type=1334 audit(1768434276.255:582): prog-id=172 op=UNLOAD Jan 14 23:44:36.255000 audit[3810]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3320 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:36.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164633966373138316236626237646435313963333132363464343434 Jan 14 23:44:36.255000 audit: BPF prog-id=174 op=LOAD Jan 14 23:44:36.255000 audit[3810]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3320 pid=3810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:36.255000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164633966373138316236626237646435313963333132363464343434 Jan 14 23:44:36.298358 containerd[1618]: time="2026-01-14T23:44:36.298286705Z" level=info msg="StartContainer for \"1dc9f7181b6bb7dd519c31264d44401d76be44d1ec969b037f32d06a00b5df73\" returns successfully" Jan 14 23:44:36.461650 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 23:44:36.461770 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
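The audit records above (and the later ones emitted while the whisker sandbox starts) carry the invoking command line as a PROCTITLE field: the process argv, NUL-separated and hex-encoded, and truncated by the kernel at a fixed length. A short Go sketch for turning one of those values back into readable text, using only the leading part of the runc proctitle copied from the events above:

```go
// Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Leading portion of the proctitle from the runc audit events above
	// (the full value in the log is truncated mid container ID by auditd).
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67"
	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	argv := strings.Split(string(raw), "\x00")
	// Prints: runc --root /run/containerd/runc/k8s.io --log
	fmt.Println(strings.Join(argv, " "))
}
```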
Jan 14 23:44:36.603593 kubelet[2853]: I0114 23:44:36.603519 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9ndds" podStartSLOduration=1.647488225 podStartE2EDuration="17.603451931s" podCreationTimestamp="2026-01-14 23:44:19 +0000 UTC" firstStartedPulling="2026-01-14 23:44:20.104586001 +0000 UTC m=+27.102114108" lastFinishedPulling="2026-01-14 23:44:36.060549707 +0000 UTC m=+43.058077814" observedRunningTime="2026-01-14 23:44:36.395883504 +0000 UTC m=+43.393411611" watchObservedRunningTime="2026-01-14 23:44:36.603451931 +0000 UTC m=+43.600979998" Jan 14 23:44:36.742310 kubelet[2853]: I0114 23:44:36.742128 2853 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t768h\" (UniqueName: \"kubernetes.io/projected/7685ab64-5fe9-46e5-bd88-429276e37200-kube-api-access-t768h\") pod \"7685ab64-5fe9-46e5-bd88-429276e37200\" (UID: \"7685ab64-5fe9-46e5-bd88-429276e37200\") " Jan 14 23:44:36.742748 kubelet[2853]: I0114 23:44:36.742699 2853 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7685ab64-5fe9-46e5-bd88-429276e37200-whisker-ca-bundle\") pod \"7685ab64-5fe9-46e5-bd88-429276e37200\" (UID: \"7685ab64-5fe9-46e5-bd88-429276e37200\") " Jan 14 23:44:36.742801 kubelet[2853]: I0114 23:44:36.742760 2853 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7685ab64-5fe9-46e5-bd88-429276e37200-whisker-backend-key-pair\") pod \"7685ab64-5fe9-46e5-bd88-429276e37200\" (UID: \"7685ab64-5fe9-46e5-bd88-429276e37200\") " Jan 14 23:44:36.746142 kubelet[2853]: I0114 23:44:36.746026 2853 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7685ab64-5fe9-46e5-bd88-429276e37200-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "7685ab64-5fe9-46e5-bd88-429276e37200" (UID: "7685ab64-5fe9-46e5-bd88-429276e37200"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 23:44:36.751784 kubelet[2853]: I0114 23:44:36.751620 2853 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7685ab64-5fe9-46e5-bd88-429276e37200-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "7685ab64-5fe9-46e5-bd88-429276e37200" (UID: "7685ab64-5fe9-46e5-bd88-429276e37200"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 23:44:36.755924 kubelet[2853]: I0114 23:44:36.755756 2853 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7685ab64-5fe9-46e5-bd88-429276e37200-kube-api-access-t768h" (OuterVolumeSpecName: "kube-api-access-t768h") pod "7685ab64-5fe9-46e5-bd88-429276e37200" (UID: "7685ab64-5fe9-46e5-bd88-429276e37200"). InnerVolumeSpecName "kube-api-access-t768h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 23:44:36.843703 kubelet[2853]: I0114 23:44:36.843646 2853 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t768h\" (UniqueName: \"kubernetes.io/projected/7685ab64-5fe9-46e5-bd88-429276e37200-kube-api-access-t768h\") on node \"ci-4515-1-0-n-ec6f9a8ce8\" DevicePath \"\"" Jan 14 23:44:36.843703 kubelet[2853]: I0114 23:44:36.843687 2853 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7685ab64-5fe9-46e5-bd88-429276e37200-whisker-ca-bundle\") on node \"ci-4515-1-0-n-ec6f9a8ce8\" DevicePath \"\"" Jan 14 23:44:36.843703 kubelet[2853]: I0114 23:44:36.843713 2853 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7685ab64-5fe9-46e5-bd88-429276e37200-whisker-backend-key-pair\") on node \"ci-4515-1-0-n-ec6f9a8ce8\" DevicePath \"\"" Jan 14 23:44:37.013372 systemd[1]: var-lib-kubelet-pods-7685ab64\x2d5fe9\x2d46e5\x2dbd88\x2d429276e37200-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt768h.mount: Deactivated successfully. Jan 14 23:44:37.013572 systemd[1]: var-lib-kubelet-pods-7685ab64\x2d5fe9\x2d46e5\x2dbd88\x2d429276e37200-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 23:44:37.128292 systemd[1]: Removed slice kubepods-besteffort-pod7685ab64_5fe9_46e5_bd88_429276e37200.slice - libcontainer container kubepods-besteffort-pod7685ab64_5fe9_46e5_bd88_429276e37200.slice. Jan 14 23:44:37.487967 systemd[1]: Created slice kubepods-besteffort-pod37539e9d_a75a_4f02_a310_797e90f63b91.slice - libcontainer container kubepods-besteffort-pod37539e9d_a75a_4f02_a310_797e90f63b91.slice. Jan 14 23:44:37.550149 kubelet[2853]: I0114 23:44:37.550052 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/37539e9d-a75a-4f02-a310-797e90f63b91-whisker-backend-key-pair\") pod \"whisker-74f68b674d-szjw2\" (UID: \"37539e9d-a75a-4f02-a310-797e90f63b91\") " pod="calico-system/whisker-74f68b674d-szjw2" Jan 14 23:44:37.550149 kubelet[2853]: I0114 23:44:37.550125 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-924hn\" (UniqueName: \"kubernetes.io/projected/37539e9d-a75a-4f02-a310-797e90f63b91-kube-api-access-924hn\") pod \"whisker-74f68b674d-szjw2\" (UID: \"37539e9d-a75a-4f02-a310-797e90f63b91\") " pod="calico-system/whisker-74f68b674d-szjw2" Jan 14 23:44:37.550505 kubelet[2853]: I0114 23:44:37.550443 2853 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37539e9d-a75a-4f02-a310-797e90f63b91-whisker-ca-bundle\") pod \"whisker-74f68b674d-szjw2\" (UID: \"37539e9d-a75a-4f02-a310-797e90f63b91\") " pod="calico-system/whisker-74f68b674d-szjw2" Jan 14 23:44:37.796531 containerd[1618]: time="2026-01-14T23:44:37.796254770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74f68b674d-szjw2,Uid:37539e9d-a75a-4f02-a310-797e90f63b91,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:38.056178 systemd-networkd[1503]: cali0e1b4b6cdea: Link UP Jan 14 23:44:38.056450 systemd-networkd[1503]: cali0e1b4b6cdea: Gained carrier Jan 14 23:44:38.080211 containerd[1618]: 2026-01-14 23:44:37.830 [INFO][3899] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 
23:44:38.080211 containerd[1618]: 2026-01-14 23:44:37.893 [INFO][3899] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--ec6f9a8ce8-k8s-whisker--74f68b674d--szjw2-eth0 whisker-74f68b674d- calico-system 37539e9d-a75a-4f02-a310-797e90f63b91 915 0 2026-01-14 23:44:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:74f68b674d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515-1-0-n-ec6f9a8ce8 whisker-74f68b674d-szjw2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0e1b4b6cdea [] [] }} ContainerID="b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" Namespace="calico-system" Pod="whisker-74f68b674d-szjw2" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-whisker--74f68b674d--szjw2-" Jan 14 23:44:38.080211 containerd[1618]: 2026-01-14 23:44:37.893 [INFO][3899] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" Namespace="calico-system" Pod="whisker-74f68b674d-szjw2" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-whisker--74f68b674d--szjw2-eth0" Jan 14 23:44:38.080211 containerd[1618]: 2026-01-14 23:44:37.969 [INFO][3910] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" HandleID="k8s-pod-network.b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-whisker--74f68b674d--szjw2-eth0" Jan 14 23:44:38.080763 containerd[1618]: 2026-01-14 23:44:37.970 [INFO][3910] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" HandleID="k8s-pod-network.b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-whisker--74f68b674d--szjw2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003339c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-n-ec6f9a8ce8", "pod":"whisker-74f68b674d-szjw2", "timestamp":"2026-01-14 23:44:37.969801607 +0000 UTC"}, Hostname:"ci-4515-1-0-n-ec6f9a8ce8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:44:38.080763 containerd[1618]: 2026-01-14 23:44:37.970 [INFO][3910] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:44:38.080763 containerd[1618]: 2026-01-14 23:44:37.970 [INFO][3910] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:44:38.080763 containerd[1618]: 2026-01-14 23:44:37.970 [INFO][3910] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-ec6f9a8ce8' Jan 14 23:44:38.080763 containerd[1618]: 2026-01-14 23:44:37.987 [INFO][3910] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:38.080763 containerd[1618]: 2026-01-14 23:44:37.996 [INFO][3910] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:38.080763 containerd[1618]: 2026-01-14 23:44:38.004 [INFO][3910] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:38.080763 containerd[1618]: 2026-01-14 23:44:38.007 [INFO][3910] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:38.080763 containerd[1618]: 2026-01-14 23:44:38.012 [INFO][3910] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:38.081447 containerd[1618]: 2026-01-14 23:44:38.013 [INFO][3910] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:38.081447 containerd[1618]: 2026-01-14 23:44:38.016 [INFO][3910] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338 Jan 14 23:44:38.081447 containerd[1618]: 2026-01-14 23:44:38.028 [INFO][3910] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:38.081447 containerd[1618]: 2026-01-14 23:44:38.036 [INFO][3910] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.94.65/26] block=192.168.94.64/26 handle="k8s-pod-network.b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:38.081447 containerd[1618]: 2026-01-14 23:44:38.036 [INFO][3910] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.65/26] handle="k8s-pod-network.b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:38.081447 containerd[1618]: 2026-01-14 23:44:38.036 [INFO][3910] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
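The IPAM exchange above stays entirely inside the node's affine block 192.168.94.64/26: a /26 spans 64 addresses (.64 through .127), and the address claimed here is 192.168.94.65, which the endpoint below records as 192.168.94.65/32. A small Go sketch of that containment check, with the CIDR and IP copied from the log:

```go
// Check that the address Calico IPAM claimed falls inside the node's affine block.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.94.64/26") // block loaded for host ci-4515-1-0-n-ec6f9a8ce8
	ip := netip.MustParseAddr("192.168.94.65")         // address claimed for whisker-74f68b674d-szjw2
	fmt.Printf("%s contains %s: %v\n", block, ip, block.Contains(ip)) // true
	fmt.Println("addresses in a /26:", 1<<(32-block.Bits()))          // 64
}
```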
Jan 14 23:44:38.081447 containerd[1618]: 2026-01-14 23:44:38.036 [INFO][3910] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.94.65/26] IPv6=[] ContainerID="b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" HandleID="k8s-pod-network.b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-whisker--74f68b674d--szjw2-eth0" Jan 14 23:44:38.081800 containerd[1618]: 2026-01-14 23:44:38.040 [INFO][3899] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" Namespace="calico-system" Pod="whisker-74f68b674d-szjw2" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-whisker--74f68b674d--szjw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--ec6f9a8ce8-k8s-whisker--74f68b674d--szjw2-eth0", GenerateName:"whisker-74f68b674d-", Namespace:"calico-system", SelfLink:"", UID:"37539e9d-a75a-4f02-a310-797e90f63b91", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74f68b674d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-ec6f9a8ce8", ContainerID:"", Pod:"whisker-74f68b674d-szjw2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0e1b4b6cdea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:38.081800 containerd[1618]: 2026-01-14 23:44:38.041 [INFO][3899] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.65/32] ContainerID="b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" Namespace="calico-system" Pod="whisker-74f68b674d-szjw2" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-whisker--74f68b674d--szjw2-eth0" Jan 14 23:44:38.082320 containerd[1618]: 2026-01-14 23:44:38.041 [INFO][3899] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e1b4b6cdea ContainerID="b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" Namespace="calico-system" Pod="whisker-74f68b674d-szjw2" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-whisker--74f68b674d--szjw2-eth0" Jan 14 23:44:38.082320 containerd[1618]: 2026-01-14 23:44:38.056 [INFO][3899] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" Namespace="calico-system" Pod="whisker-74f68b674d-szjw2" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-whisker--74f68b674d--szjw2-eth0" Jan 14 23:44:38.082370 containerd[1618]: 2026-01-14 23:44:38.057 [INFO][3899] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" 
Namespace="calico-system" Pod="whisker-74f68b674d-szjw2" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-whisker--74f68b674d--szjw2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--ec6f9a8ce8-k8s-whisker--74f68b674d--szjw2-eth0", GenerateName:"whisker-74f68b674d-", Namespace:"calico-system", SelfLink:"", UID:"37539e9d-a75a-4f02-a310-797e90f63b91", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"74f68b674d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-ec6f9a8ce8", ContainerID:"b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338", Pod:"whisker-74f68b674d-szjw2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.94.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0e1b4b6cdea", MAC:"3a:de:00:de:39:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:38.082477 containerd[1618]: 2026-01-14 23:44:38.075 [INFO][3899] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" Namespace="calico-system" Pod="whisker-74f68b674d-szjw2" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-whisker--74f68b674d--szjw2-eth0" Jan 14 23:44:38.180356 containerd[1618]: time="2026-01-14T23:44:38.180178938Z" level=info msg="connecting to shim b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338" address="unix:///run/containerd/s/d9c6bdabc6ef84c9968c67e8187970c31c1f24bffb3b5b2b927e4df2d375b96e" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:38.231910 systemd[1]: Started cri-containerd-b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338.scope - libcontainer container b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338. 
Jan 14 23:44:38.285000 audit: BPF prog-id=175 op=LOAD Jan 14 23:44:38.285000 audit: BPF prog-id=176 op=LOAD Jan 14 23:44:38.285000 audit[4028]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4016 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239356230323239666537386539343039333034313062346264643933 Jan 14 23:44:38.285000 audit: BPF prog-id=176 op=UNLOAD Jan 14 23:44:38.285000 audit[4028]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4016 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.285000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239356230323239666537386539343039333034313062346264643933 Jan 14 23:44:38.286000 audit: BPF prog-id=177 op=LOAD Jan 14 23:44:38.286000 audit[4028]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4016 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239356230323239666537386539343039333034313062346264643933 Jan 14 23:44:38.286000 audit: BPF prog-id=178 op=LOAD Jan 14 23:44:38.286000 audit[4028]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4016 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239356230323239666537386539343039333034313062346264643933 Jan 14 23:44:38.286000 audit: BPF prog-id=178 op=UNLOAD Jan 14 23:44:38.286000 audit[4028]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4016 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239356230323239666537386539343039333034313062346264643933 Jan 14 23:44:38.286000 audit: BPF prog-id=177 op=UNLOAD Jan 14 23:44:38.286000 audit[4028]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4016 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239356230323239666537386539343039333034313062346264643933 Jan 14 23:44:38.286000 audit: BPF prog-id=179 op=LOAD Jan 14 23:44:38.286000 audit[4028]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4016 pid=4028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239356230323239666537386539343039333034313062346264643933 Jan 14 23:44:38.331478 containerd[1618]: time="2026-01-14T23:44:38.331326349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74f68b674d-szjw2,Uid:37539e9d-a75a-4f02-a310-797e90f63b91,Namespace:calico-system,Attempt:0,} returns sandbox id \"b95b0229fe78e940930410b4bdd93a2e0daebb26671071e45ab8aecb1b354338\"" Jan 14 23:44:38.339289 containerd[1618]: time="2026-01-14T23:44:38.339246174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:44:38.596000 audit: BPF prog-id=180 op=LOAD Jan 14 23:44:38.596000 audit[4113]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff75c3118 a2=98 a3=fffff75c3108 items=0 ppid=3930 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.596000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:44:38.597000 audit: BPF prog-id=180 op=UNLOAD Jan 14 23:44:38.597000 audit[4113]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff75c30e8 a3=0 items=0 ppid=3930 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.597000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:44:38.597000 audit: BPF prog-id=181 op=LOAD Jan 14 23:44:38.597000 audit[4113]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff75c2fc8 a2=74 a3=95 items=0 ppid=3930 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.597000 audit: 
PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:44:38.597000 audit: BPF prog-id=181 op=UNLOAD Jan 14 23:44:38.597000 audit[4113]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=3930 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.597000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:44:38.597000 audit: BPF prog-id=182 op=LOAD Jan 14 23:44:38.597000 audit[4113]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff75c2ff8 a2=40 a3=fffff75c3028 items=0 ppid=3930 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.597000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:44:38.597000 audit: BPF prog-id=182 op=UNLOAD Jan 14 23:44:38.597000 audit[4113]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff75c3028 items=0 ppid=3930 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.597000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:44:38.600000 audit: BPF prog-id=183 op=LOAD Jan 14 23:44:38.600000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe010a088 a2=98 a3=ffffe010a078 items=0 ppid=3930 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.600000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.601000 audit: BPF prog-id=183 op=UNLOAD Jan 14 23:44:38.601000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe010a058 a3=0 items=0 ppid=3930 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.601000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.602000 audit: BPF prog-id=184 op=LOAD Jan 14 23:44:38.602000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe0109d18 a2=74 a3=95 items=0 ppid=3930 pid=4114 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.602000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.602000 audit: BPF prog-id=184 op=UNLOAD Jan 14 23:44:38.602000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=3930 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.602000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.602000 audit: BPF prog-id=185 op=LOAD Jan 14 23:44:38.602000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe0109d78 a2=94 a3=2 items=0 ppid=3930 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.602000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.602000 audit: BPF prog-id=185 op=UNLOAD Jan 14 23:44:38.602000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=3930 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.602000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.665484 containerd[1618]: time="2026-01-14T23:44:38.665440555Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:38.670414 containerd[1618]: time="2026-01-14T23:44:38.668807067Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:44:38.670594 containerd[1618]: time="2026-01-14T23:44:38.668898430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:38.670712 kubelet[2853]: E0114 23:44:38.670662 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:44:38.671092 kubelet[2853]: E0114 23:44:38.670728 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:44:38.674077 kubelet[2853]: E0114 23:44:38.674013 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-74f68b674d-szjw2_calico-system(37539e9d-a75a-4f02-a310-797e90f63b91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 
23:44:38.675956 containerd[1618]: time="2026-01-14T23:44:38.675813301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 23:44:38.731000 audit: BPF prog-id=186 op=LOAD Jan 14 23:44:38.731000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe0109d38 a2=40 a3=ffffe0109d68 items=0 ppid=3930 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.731000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.731000 audit: BPF prog-id=186 op=UNLOAD Jan 14 23:44:38.731000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe0109d68 items=0 ppid=3930 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.731000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.741000 audit: BPF prog-id=187 op=LOAD Jan 14 23:44:38.741000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe0109d48 a2=94 a3=4 items=0 ppid=3930 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.741000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.742000 audit: BPF prog-id=187 op=UNLOAD Jan 14 23:44:38.742000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=3930 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.742000 audit: BPF prog-id=188 op=LOAD Jan 14 23:44:38.742000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe0109b88 a2=94 a3=5 items=0 ppid=3930 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.742000 audit: BPF prog-id=188 op=UNLOAD Jan 14 23:44:38.742000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=3930 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.742000 audit: BPF prog-id=189 op=LOAD Jan 14 23:44:38.742000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe0109db8 a2=94 a3=6 items=0 ppid=3930 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.742000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.743000 audit: 
BPF prog-id=189 op=UNLOAD Jan 14 23:44:38.743000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=3930 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.743000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.743000 audit: BPF prog-id=190 op=LOAD Jan 14 23:44:38.743000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe0109588 a2=94 a3=83 items=0 ppid=3930 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.743000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.744000 audit: BPF prog-id=191 op=LOAD Jan 14 23:44:38.744000 audit[4114]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe0109348 a2=94 a3=2 items=0 ppid=3930 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.744000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.744000 audit: BPF prog-id=191 op=UNLOAD Jan 14 23:44:38.744000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=3930 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.744000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.745000 audit: BPF prog-id=190 op=UNLOAD Jan 14 23:44:38.745000 audit[4114]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3d5cc620 a3=3d5bfb00 items=0 ppid=3930 pid=4114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.745000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.757000 audit: BPF prog-id=192 op=LOAD Jan 14 23:44:38.757000 audit[4119]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff4f17398 a2=98 a3=fffff4f17388 items=0 ppid=3930 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.757000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:44:38.757000 audit: BPF prog-id=192 op=UNLOAD Jan 14 23:44:38.757000 audit[4119]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff4f17368 a3=0 items=0 ppid=3930 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.757000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:44:38.757000 audit: BPF prog-id=193 op=LOAD Jan 14 23:44:38.757000 audit[4119]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff4f17248 a2=74 a3=95 items=0 ppid=3930 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.757000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:44:38.757000 audit: BPF prog-id=193 op=UNLOAD Jan 14 23:44:38.757000 audit[4119]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=3930 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.757000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:44:38.757000 audit: BPF prog-id=194 op=LOAD Jan 14 23:44:38.757000 audit[4119]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff4f17278 a2=40 a3=fffff4f172a8 items=0 ppid=3930 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.757000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:44:38.757000 audit: BPF prog-id=194 op=UNLOAD Jan 14 23:44:38.757000 audit[4119]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff4f172a8 items=0 ppid=3930 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.757000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:44:38.845612 systemd-networkd[1503]: vxlan.calico: Link UP Jan 14 23:44:38.846432 systemd-networkd[1503]: vxlan.calico: Gained carrier Jan 14 23:44:38.871000 audit: BPF prog-id=195 op=LOAD Jan 14 23:44:38.871000 audit[4143]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc7f1e168 a2=98 a3=ffffc7f1e158 items=0 ppid=3930 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.871000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:38.871000 audit: BPF prog-id=195 op=UNLOAD Jan 14 23:44:38.871000 audit[4143]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc7f1e138 a3=0 items=0 ppid=3930 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.871000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:38.872000 audit: BPF prog-id=196 op=LOAD Jan 14 23:44:38.872000 audit[4143]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc7f1de48 a2=74 a3=95 items=0 ppid=3930 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.872000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:38.872000 audit: BPF prog-id=196 op=UNLOAD Jan 14 23:44:38.872000 audit[4143]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=3930 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.872000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:38.872000 audit: BPF prog-id=197 op=LOAD Jan 14 23:44:38.872000 audit[4143]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc7f1dea8 a2=94 a3=2 items=0 ppid=3930 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.872000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:38.872000 audit: BPF prog-id=197 op=UNLOAD Jan 14 23:44:38.872000 audit[4143]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=3930 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.872000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:38.872000 audit: BPF 
prog-id=198 op=LOAD Jan 14 23:44:38.872000 audit[4143]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc7f1dd28 a2=40 a3=ffffc7f1dd58 items=0 ppid=3930 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.872000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:38.872000 audit: BPF prog-id=198 op=UNLOAD Jan 14 23:44:38.872000 audit[4143]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffc7f1dd58 items=0 ppid=3930 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.872000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:38.872000 audit: BPF prog-id=199 op=LOAD Jan 14 23:44:38.872000 audit[4143]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc7f1de78 a2=94 a3=b7 items=0 ppid=3930 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.872000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:38.874000 audit: BPF prog-id=199 op=UNLOAD Jan 14 23:44:38.874000 audit[4143]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=3930 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.874000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:38.877000 audit: BPF prog-id=200 op=LOAD Jan 14 23:44:38.877000 audit[4143]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc7f1d528 a2=94 a3=2 items=0 ppid=3930 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.877000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:38.878000 audit: BPF prog-id=200 op=UNLOAD Jan 14 23:44:38.878000 audit[4143]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=3930 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.878000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:38.878000 audit: BPF prog-id=201 op=LOAD Jan 14 23:44:38.878000 audit[4143]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc7f1d6b8 a2=94 a3=30 items=0 ppid=3930 pid=4143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.878000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:38.882000 audit: BPF prog-id=202 op=LOAD Jan 14 23:44:38.882000 audit[4147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc0c1ea78 a2=98 a3=ffffc0c1ea68 items=0 ppid=3930 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.882000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:38.882000 audit: BPF prog-id=202 op=UNLOAD Jan 14 23:44:38.882000 audit[4147]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc0c1ea48 a3=0 items=0 ppid=3930 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.882000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:38.883000 audit: BPF prog-id=203 op=LOAD Jan 14 23:44:38.883000 audit[4147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc0c1e708 a2=74 a3=95 items=0 ppid=3930 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.883000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:38.883000 audit: BPF prog-id=203 op=UNLOAD Jan 14 23:44:38.883000 audit[4147]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=3930 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.883000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:38.883000 audit: BPF prog-id=204 op=LOAD Jan 14 23:44:38.883000 audit[4147]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=4 a0=5 a1=ffffc0c1e768 a2=94 a3=2 items=0 ppid=3930 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.883000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:38.883000 audit: BPF prog-id=204 op=UNLOAD Jan 14 23:44:38.883000 audit[4147]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=3930 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.883000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:38.992000 audit: BPF prog-id=205 op=LOAD Jan 14 23:44:38.992000 audit[4147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc0c1e728 a2=40 a3=ffffc0c1e758 items=0 ppid=3930 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.992000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:38.993000 audit: BPF prog-id=205 op=UNLOAD Jan 14 23:44:38.993000 audit[4147]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc0c1e758 items=0 ppid=3930 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.993000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.004000 audit: BPF prog-id=206 op=LOAD Jan 14 23:44:39.004000 audit[4147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc0c1e738 a2=94 a3=4 items=0 ppid=3930 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.004000 audit: BPF prog-id=206 op=UNLOAD Jan 14 23:44:39.004000 audit[4147]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=3930 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 
14 23:44:39.004000 audit: BPF prog-id=207 op=LOAD Jan 14 23:44:39.004000 audit[4147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc0c1e578 a2=94 a3=5 items=0 ppid=3930 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.004000 audit: BPF prog-id=207 op=UNLOAD Jan 14 23:44:39.004000 audit[4147]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=3930 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.004000 audit: BPF prog-id=208 op=LOAD Jan 14 23:44:39.004000 audit[4147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc0c1e7a8 a2=94 a3=6 items=0 ppid=3930 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.004000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.005000 audit: BPF prog-id=208 op=UNLOAD Jan 14 23:44:39.005000 audit[4147]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=3930 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.005000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.005000 audit: BPF prog-id=209 op=LOAD Jan 14 23:44:39.005000 audit[4147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc0c1df78 a2=94 a3=83 items=0 ppid=3930 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.005000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.005000 audit: BPF prog-id=210 op=LOAD Jan 14 23:44:39.005000 audit[4147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc0c1dd38 a2=94 a3=2 items=0 ppid=3930 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.005000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.005000 audit: BPF prog-id=210 op=UNLOAD Jan 14 23:44:39.005000 audit[4147]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=3930 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.005000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.006000 audit: BPF prog-id=209 op=UNLOAD Jan 14 23:44:39.006000 audit[4147]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=712620 a3=705b00 items=0 ppid=3930 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.006000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.013000 audit: BPF prog-id=201 op=UNLOAD Jan 14 23:44:39.013000 audit[3930]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000690340 a2=0 a3=0 items=0 ppid=3917 pid=3930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.013000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 23:44:39.014905 containerd[1618]: time="2026-01-14T23:44:39.014843782Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:39.018683 containerd[1618]: time="2026-01-14T23:44:39.018244053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:44:39.018683 containerd[1618]: time="2026-01-14T23:44:39.018381658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:39.018871 kubelet[2853]: E0114 23:44:39.018834 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:44:39.018935 kubelet[2853]: E0114 23:44:39.018880 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:44:39.020175 kubelet[2853]: E0114 23:44:39.018984 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod 
whisker-74f68b674d-szjw2_calico-system(37539e9d-a75a-4f02-a310-797e90f63b91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:39.029972 kubelet[2853]: E0114 23:44:39.029897 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f68b674d-szjw2" podUID="37539e9d-a75a-4f02-a310-797e90f63b91" Jan 14 23:44:39.089000 audit[4174]: NETFILTER_CFG table=mangle:119 family=2 entries=16 op=nft_register_chain pid=4174 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:39.089000 audit[4174]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc776b940 a2=0 a3=ffff8b20cfa8 items=0 ppid=3930 pid=4174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.089000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:39.089000 audit[4175]: NETFILTER_CFG table=nat:120 family=2 entries=15 op=nft_register_chain pid=4175 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:39.089000 audit[4175]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=fffff69a7fd0 a2=0 a3=ffff85f90fa8 items=0 ppid=3930 pid=4175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.089000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:39.099000 audit[4173]: NETFILTER_CFG table=raw:121 family=2 entries=21 op=nft_register_chain pid=4173 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:39.099000 audit[4173]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffc08e5810 a2=0 a3=ffff857e8fa8 items=0 ppid=3930 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.099000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:39.105000 audit[4178]: NETFILTER_CFG table=filter:122 family=2 entries=94 op=nft_register_chain pid=4178 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:39.105000 audit[4178]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=fffff52cb450 
a2=0 a3=ffff9fd54fa8 items=0 ppid=3930 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.105000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:39.135951 kubelet[2853]: I0114 23:44:39.135268 2853 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7685ab64-5fe9-46e5-bd88-429276e37200" path="/var/lib/kubelet/pods/7685ab64-5fe9-46e5-bd88-429276e37200/volumes" Jan 14 23:44:39.387030 kubelet[2853]: E0114 23:44:39.386844 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f68b674d-szjw2" podUID="37539e9d-a75a-4f02-a310-797e90f63b91" Jan 14 23:44:39.426000 audit[4187]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:39.426000 audit[4187]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe9c69410 a2=0 a3=1 items=0 ppid=2990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.426000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:39.434000 audit[4187]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4187 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:39.434000 audit[4187]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe9c69410 a2=0 a3=1 items=0 ppid=2990 pid=4187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.434000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:39.901784 systemd-networkd[1503]: cali0e1b4b6cdea: Gained IPv6LL Jan 14 23:44:40.120873 containerd[1618]: time="2026-01-14T23:44:40.120806525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84c479b4c5-8sg5s,Uid:1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a,Namespace:calico-apiserver,Attempt:0,}" Jan 14 23:44:40.298599 systemd-networkd[1503]: caliefa83c24b8b: Link UP Jan 14 23:44:40.300666 systemd-networkd[1503]: caliefa83c24b8b: Gained carrier Jan 14 23:44:40.322584 containerd[1618]: 2026-01-14 23:44:40.179 [INFO][4192] cni-plugin/plugin.go 340: Calico CNI found 
existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--8sg5s-eth0 calico-apiserver-84c479b4c5- calico-apiserver 1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a 849 0 2026-01-14 23:44:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84c479b4c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515-1-0-n-ec6f9a8ce8 calico-apiserver-84c479b4c5-8sg5s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliefa83c24b8b [] [] }} ContainerID="c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" Namespace="calico-apiserver" Pod="calico-apiserver-84c479b4c5-8sg5s" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--8sg5s-" Jan 14 23:44:40.322584 containerd[1618]: 2026-01-14 23:44:40.179 [INFO][4192] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" Namespace="calico-apiserver" Pod="calico-apiserver-84c479b4c5-8sg5s" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--8sg5s-eth0" Jan 14 23:44:40.322584 containerd[1618]: 2026-01-14 23:44:40.212 [INFO][4202] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" HandleID="k8s-pod-network.c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--8sg5s-eth0" Jan 14 23:44:40.322982 containerd[1618]: 2026-01-14 23:44:40.213 [INFO][4202] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" HandleID="k8s-pod-network.c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--8sg5s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024af90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515-1-0-n-ec6f9a8ce8", "pod":"calico-apiserver-84c479b4c5-8sg5s", "timestamp":"2026-01-14 23:44:40.212823207 +0000 UTC"}, Hostname:"ci-4515-1-0-n-ec6f9a8ce8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:44:40.322982 containerd[1618]: 2026-01-14 23:44:40.213 [INFO][4202] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:44:40.322982 containerd[1618]: 2026-01-14 23:44:40.213 [INFO][4202] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:44:40.322982 containerd[1618]: 2026-01-14 23:44:40.213 [INFO][4202] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-ec6f9a8ce8' Jan 14 23:44:40.322982 containerd[1618]: 2026-01-14 23:44:40.227 [INFO][4202] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:40.322982 containerd[1618]: 2026-01-14 23:44:40.242 [INFO][4202] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:40.322982 containerd[1618]: 2026-01-14 23:44:40.251 [INFO][4202] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:40.322982 containerd[1618]: 2026-01-14 23:44:40.258 [INFO][4202] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:40.322982 containerd[1618]: 2026-01-14 23:44:40.265 [INFO][4202] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:40.323553 containerd[1618]: 2026-01-14 23:44:40.265 [INFO][4202] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:40.323553 containerd[1618]: 2026-01-14 23:44:40.268 [INFO][4202] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516 Jan 14 23:44:40.323553 containerd[1618]: 2026-01-14 23:44:40.277 [INFO][4202] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:40.323553 containerd[1618]: 2026-01-14 23:44:40.287 [INFO][4202] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.94.66/26] block=192.168.94.64/26 handle="k8s-pod-network.c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:40.323553 containerd[1618]: 2026-01-14 23:44:40.287 [INFO][4202] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.66/26] handle="k8s-pod-network.c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:40.323553 containerd[1618]: 2026-01-14 23:44:40.287 [INFO][4202] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 23:44:40.323553 containerd[1618]: 2026-01-14 23:44:40.288 [INFO][4202] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.94.66/26] IPv6=[] ContainerID="c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" HandleID="k8s-pod-network.c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--8sg5s-eth0" Jan 14 23:44:40.323702 containerd[1618]: 2026-01-14 23:44:40.292 [INFO][4192] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" Namespace="calico-apiserver" Pod="calico-apiserver-84c479b4c5-8sg5s" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--8sg5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--8sg5s-eth0", GenerateName:"calico-apiserver-84c479b4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84c479b4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-ec6f9a8ce8", ContainerID:"", Pod:"calico-apiserver-84c479b4c5-8sg5s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliefa83c24b8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:40.323768 containerd[1618]: 2026-01-14 23:44:40.292 [INFO][4192] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.66/32] ContainerID="c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" Namespace="calico-apiserver" Pod="calico-apiserver-84c479b4c5-8sg5s" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--8sg5s-eth0" Jan 14 23:44:40.323768 containerd[1618]: 2026-01-14 23:44:40.292 [INFO][4192] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefa83c24b8b ContainerID="c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" Namespace="calico-apiserver" Pod="calico-apiserver-84c479b4c5-8sg5s" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--8sg5s-eth0" Jan 14 23:44:40.323768 containerd[1618]: 2026-01-14 23:44:40.301 [INFO][4192] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" Namespace="calico-apiserver" Pod="calico-apiserver-84c479b4c5-8sg5s" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--8sg5s-eth0" Jan 14 23:44:40.324324 containerd[1618]: 2026-01-14 
23:44:40.302 [INFO][4192] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" Namespace="calico-apiserver" Pod="calico-apiserver-84c479b4c5-8sg5s" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--8sg5s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--8sg5s-eth0", GenerateName:"calico-apiserver-84c479b4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84c479b4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-ec6f9a8ce8", ContainerID:"c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516", Pod:"calico-apiserver-84c479b4c5-8sg5s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliefa83c24b8b", MAC:"a6:89:09:f6:56:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:40.324385 containerd[1618]: 2026-01-14 23:44:40.319 [INFO][4192] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" Namespace="calico-apiserver" Pod="calico-apiserver-84c479b4c5-8sg5s" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--8sg5s-eth0" Jan 14 23:44:40.356000 audit[4217]: NETFILTER_CFG table=filter:125 family=2 entries=50 op=nft_register_chain pid=4217 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:40.356000 audit[4217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28208 a0=3 a1=ffffedbc6410 a2=0 a3=ffff9e234fa8 items=0 ppid=3930 pid=4217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:40.356000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:40.366510 containerd[1618]: time="2026-01-14T23:44:40.366057458Z" level=info msg="connecting to shim c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516" address="unix:///run/containerd/s/799f5f6236e62f37e78579f5a4a8df1bd5974549b1a608e18d6138ebfbf5eda6" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:40.401899 systemd[1]: Started cri-containerd-c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516.scope - libcontainer container 
c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516. Jan 14 23:44:40.413633 systemd-networkd[1503]: vxlan.calico: Gained IPv6LL Jan 14 23:44:40.421000 audit: BPF prog-id=211 op=LOAD Jan 14 23:44:40.423000 audit: BPF prog-id=212 op=LOAD Jan 14 23:44:40.423000 audit[4238]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4226 pid=4238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:40.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334393663623236616636633566623133316333633034313062656530 Jan 14 23:44:40.423000 audit: BPF prog-id=212 op=UNLOAD Jan 14 23:44:40.423000 audit[4238]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4226 pid=4238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:40.423000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334393663623236616636633566623133316333633034313062656530 Jan 14 23:44:40.424000 audit: BPF prog-id=213 op=LOAD Jan 14 23:44:40.424000 audit[4238]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4226 pid=4238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:40.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334393663623236616636633566623133316333633034313062656530 Jan 14 23:44:40.424000 audit: BPF prog-id=214 op=LOAD Jan 14 23:44:40.424000 audit[4238]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4226 pid=4238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:40.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334393663623236616636633566623133316333633034313062656530 Jan 14 23:44:40.424000 audit: BPF prog-id=214 op=UNLOAD Jan 14 23:44:40.424000 audit[4238]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4226 pid=4238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:40.424000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334393663623236616636633566623133316333633034313062656530 Jan 14 23:44:40.424000 audit: BPF prog-id=213 op=UNLOAD Jan 14 23:44:40.424000 audit[4238]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4226 pid=4238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:40.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334393663623236616636633566623133316333633034313062656530 Jan 14 23:44:40.424000 audit: BPF prog-id=215 op=LOAD Jan 14 23:44:40.424000 audit[4238]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4226 pid=4238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:40.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334393663623236616636633566623133316333633034313062656530 Jan 14 23:44:40.455753 containerd[1618]: time="2026-01-14T23:44:40.455710224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84c479b4c5-8sg5s,Uid:1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c496cb26af6c5fb131c3c0410bee0000e1953b46a5f20badbf73672fb3f32516\"" Jan 14 23:44:40.458322 containerd[1618]: time="2026-01-14T23:44:40.458206224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:44:40.801379 containerd[1618]: time="2026-01-14T23:44:40.801235785Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:40.803508 containerd[1618]: time="2026-01-14T23:44:40.803289731Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:44:40.803508 containerd[1618]: time="2026-01-14T23:44:40.803297611Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:40.804425 kubelet[2853]: E0114 23:44:40.803952 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:44:40.804425 kubelet[2853]: E0114 23:44:40.804009 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:44:40.804425 
kubelet[2853]: E0114 23:44:40.804104 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-84c479b4c5-8sg5s_calico-apiserver(1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:40.804425 kubelet[2853]: E0114 23:44:40.804151 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" podUID="1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a" Jan 14 23:44:41.123244 containerd[1618]: time="2026-01-14T23:44:41.122692302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mbwp8,Uid:375f636a-b14c-4107-87ee-0c0815e9a9c0,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:41.282748 systemd-networkd[1503]: cali400a7814b23: Link UP Jan 14 23:44:41.283725 systemd-networkd[1503]: cali400a7814b23: Gained carrier Jan 14 23:44:41.311497 containerd[1618]: 2026-01-14 23:44:41.188 [INFO][4265] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--ec6f9a8ce8-k8s-goldmane--7c778bb748--mbwp8-eth0 goldmane-7c778bb748- calico-system 375f636a-b14c-4107-87ee-0c0815e9a9c0 850 0 2026-01-14 23:44:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515-1-0-n-ec6f9a8ce8 goldmane-7c778bb748-mbwp8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali400a7814b23 [] [] }} ContainerID="d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" Namespace="calico-system" Pod="goldmane-7c778bb748-mbwp8" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-goldmane--7c778bb748--mbwp8-" Jan 14 23:44:41.311497 containerd[1618]: 2026-01-14 23:44:41.189 [INFO][4265] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" Namespace="calico-system" Pod="goldmane-7c778bb748-mbwp8" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-goldmane--7c778bb748--mbwp8-eth0" Jan 14 23:44:41.311497 containerd[1618]: 2026-01-14 23:44:41.221 [INFO][4277] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" HandleID="k8s-pod-network.d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-goldmane--7c778bb748--mbwp8-eth0" Jan 14 23:44:41.311754 containerd[1618]: 2026-01-14 23:44:41.221 [INFO][4277] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" HandleID="k8s-pod-network.d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-goldmane--7c778bb748--mbwp8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4515-1-0-n-ec6f9a8ce8", "pod":"goldmane-7c778bb748-mbwp8", "timestamp":"2026-01-14 23:44:41.221234979 +0000 UTC"}, Hostname:"ci-4515-1-0-n-ec6f9a8ce8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:44:41.311754 containerd[1618]: 2026-01-14 23:44:41.221 [INFO][4277] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:44:41.311754 containerd[1618]: 2026-01-14 23:44:41.221 [INFO][4277] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 23:44:41.311754 containerd[1618]: 2026-01-14 23:44:41.221 [INFO][4277] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-ec6f9a8ce8' Jan 14 23:44:41.311754 containerd[1618]: 2026-01-14 23:44:41.232 [INFO][4277] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:41.311754 containerd[1618]: 2026-01-14 23:44:41.240 [INFO][4277] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:41.311754 containerd[1618]: 2026-01-14 23:44:41.247 [INFO][4277] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:41.311754 containerd[1618]: 2026-01-14 23:44:41.249 [INFO][4277] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:41.311754 containerd[1618]: 2026-01-14 23:44:41.253 [INFO][4277] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:41.312626 containerd[1618]: 2026-01-14 23:44:41.253 [INFO][4277] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:41.312626 containerd[1618]: 2026-01-14 23:44:41.255 [INFO][4277] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595 Jan 14 23:44:41.312626 containerd[1618]: 2026-01-14 23:44:41.264 [INFO][4277] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:41.312626 containerd[1618]: 2026-01-14 23:44:41.272 [INFO][4277] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.94.67/26] block=192.168.94.64/26 handle="k8s-pod-network.d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:41.312626 containerd[1618]: 2026-01-14 23:44:41.272 [INFO][4277] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.67/26] handle="k8s-pod-network.d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:41.312626 containerd[1618]: 2026-01-14 23:44:41.272 [INFO][4277] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 23:44:41.312626 containerd[1618]: 2026-01-14 23:44:41.272 [INFO][4277] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.94.67/26] IPv6=[] ContainerID="d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" HandleID="k8s-pod-network.d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-goldmane--7c778bb748--mbwp8-eth0" Jan 14 23:44:41.313025 containerd[1618]: 2026-01-14 23:44:41.276 [INFO][4265] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" Namespace="calico-system" Pod="goldmane-7c778bb748-mbwp8" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-goldmane--7c778bb748--mbwp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--ec6f9a8ce8-k8s-goldmane--7c778bb748--mbwp8-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"375f636a-b14c-4107-87ee-0c0815e9a9c0", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-ec6f9a8ce8", ContainerID:"", Pod:"goldmane-7c778bb748-mbwp8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.94.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali400a7814b23", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:41.313092 containerd[1618]: 2026-01-14 23:44:41.276 [INFO][4265] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.67/32] ContainerID="d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" Namespace="calico-system" Pod="goldmane-7c778bb748-mbwp8" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-goldmane--7c778bb748--mbwp8-eth0" Jan 14 23:44:41.313092 containerd[1618]: 2026-01-14 23:44:41.277 [INFO][4265] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali400a7814b23 ContainerID="d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" Namespace="calico-system" Pod="goldmane-7c778bb748-mbwp8" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-goldmane--7c778bb748--mbwp8-eth0" Jan 14 23:44:41.313092 containerd[1618]: 2026-01-14 23:44:41.284 [INFO][4265] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" Namespace="calico-system" Pod="goldmane-7c778bb748-mbwp8" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-goldmane--7c778bb748--mbwp8-eth0" Jan 14 23:44:41.313154 containerd[1618]: 2026-01-14 23:44:41.285 [INFO][4265] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" 
Namespace="calico-system" Pod="goldmane-7c778bb748-mbwp8" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-goldmane--7c778bb748--mbwp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--ec6f9a8ce8-k8s-goldmane--7c778bb748--mbwp8-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"375f636a-b14c-4107-87ee-0c0815e9a9c0", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-ec6f9a8ce8", ContainerID:"d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595", Pod:"goldmane-7c778bb748-mbwp8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.94.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali400a7814b23", MAC:"2e:6f:95:b5:99:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:41.313200 containerd[1618]: 2026-01-14 23:44:41.308 [INFO][4265] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" Namespace="calico-system" Pod="goldmane-7c778bb748-mbwp8" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-goldmane--7c778bb748--mbwp8-eth0" Jan 14 23:44:41.324000 audit[4291]: NETFILTER_CFG table=filter:126 family=2 entries=48 op=nft_register_chain pid=4291 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:41.327100 kernel: kauditd_printk_skb: 256 callbacks suppressed Jan 14 23:44:41.327989 kernel: audit: type=1325 audit(1768434281.324:669): table=filter:126 family=2 entries=48 op=nft_register_chain pid=4291 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:41.324000 audit[4291]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26368 a0=3 a1=ffffd83f01b0 a2=0 a3=ffff93ab1fa8 items=0 ppid=3930 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:41.324000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:41.336802 kernel: audit: type=1300 audit(1768434281.324:669): arch=c00000b7 syscall=211 success=yes exit=26368 a0=3 a1=ffffd83f01b0 a2=0 a3=ffff93ab1fa8 items=0 ppid=3930 pid=4291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:41.336934 kernel: audit: type=1327 audit(1768434281.324:669): 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:41.345494 containerd[1618]: time="2026-01-14T23:44:41.345436226Z" level=info msg="connecting to shim d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595" address="unix:///run/containerd/s/4642c9033d43d1c8b4a83c42f2a7d39bd0df4252841fbe68901aaa10f2c24a89" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:41.377021 systemd[1]: Started cri-containerd-d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595.scope - libcontainer container d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595. Jan 14 23:44:41.395900 kubelet[2853]: E0114 23:44:41.394144 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" podUID="1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a" Jan 14 23:44:41.411000 audit: BPF prog-id=216 op=LOAD Jan 14 23:44:41.414431 kernel: audit: type=1334 audit(1768434281.411:670): prog-id=216 op=LOAD Jan 14 23:44:41.414579 kernel: audit: type=1334 audit(1768434281.413:671): prog-id=217 op=LOAD Jan 14 23:44:41.413000 audit: BPF prog-id=217 op=LOAD Jan 14 23:44:41.413000 audit[4312]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4301 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:41.418022 kernel: audit: type=1300 audit(1768434281.413:671): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4301 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:41.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433323931363263323366313063306264663634346331626130343965 Jan 14 23:44:41.422668 kernel: audit: type=1327 audit(1768434281.413:671): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433323931363263323366313063306264663634346331626130343965 Jan 14 23:44:41.413000 audit: BPF prog-id=217 op=UNLOAD Jan 14 23:44:41.413000 audit[4312]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4301 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:41.428957 kernel: audit: type=1334 audit(1768434281.413:672): prog-id=217 op=UNLOAD Jan 14 23:44:41.429062 kernel: audit: type=1300 audit(1768434281.413:672): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4301 pid=4312 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:41.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433323931363263323366313063306264663634346331626130343965 Jan 14 23:44:41.433589 kernel: audit: type=1327 audit(1768434281.413:672): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433323931363263323366313063306264663634346331626130343965 Jan 14 23:44:41.413000 audit: BPF prog-id=218 op=LOAD Jan 14 23:44:41.413000 audit[4312]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4301 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:41.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433323931363263323366313063306264663634346331626130343965 Jan 14 23:44:41.413000 audit: BPF prog-id=219 op=LOAD Jan 14 23:44:41.413000 audit[4312]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4301 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:41.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433323931363263323366313063306264663634346331626130343965 Jan 14 23:44:41.413000 audit: BPF prog-id=219 op=UNLOAD Jan 14 23:44:41.413000 audit[4312]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4301 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:41.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433323931363263323366313063306264663634346331626130343965 Jan 14 23:44:41.414000 audit: BPF prog-id=218 op=UNLOAD Jan 14 23:44:41.414000 audit[4312]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4301 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:41.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433323931363263323366313063306264663634346331626130343965 Jan 14 23:44:41.414000 
audit: BPF prog-id=220 op=LOAD Jan 14 23:44:41.414000 audit[4312]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4301 pid=4312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:41.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433323931363263323366313063306264663634346331626130343965 Jan 14 23:44:41.449000 audit[4332]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=4332 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:41.449000 audit[4332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff0198860 a2=0 a3=1 items=0 ppid=2990 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:41.449000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:41.455000 audit[4332]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=4332 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:41.455000 audit[4332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff0198860 a2=0 a3=1 items=0 ppid=2990 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:41.455000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:41.471505 containerd[1618]: time="2026-01-14T23:44:41.471455932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mbwp8,Uid:375f636a-b14c-4107-87ee-0c0815e9a9c0,Namespace:calico-system,Attempt:0,} returns sandbox id \"d329162c23f10c0bdf644c1ba049e62a610399f74ea7259bff9d19778652b595\"" Jan 14 23:44:41.474225 containerd[1618]: time="2026-01-14T23:44:41.474174498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 23:44:41.809869 containerd[1618]: time="2026-01-14T23:44:41.809616226Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:41.815622 containerd[1618]: time="2026-01-14T23:44:41.815547454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 23:44:41.815622 containerd[1618]: time="2026-01-14T23:44:41.815563254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:41.815942 kubelet[2853]: E0114 23:44:41.815871 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:44:41.815942 kubelet[2853]: E0114 
23:44:41.815932 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:44:41.816289 kubelet[2853]: E0114 23:44:41.816016 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mbwp8_calico-system(375f636a-b14c-4107-87ee-0c0815e9a9c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:41.816289 kubelet[2853]: E0114 23:44:41.816078 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mbwp8" podUID="375f636a-b14c-4107-87ee-0c0815e9a9c0" Jan 14 23:44:41.821598 systemd-networkd[1503]: caliefa83c24b8b: Gained IPv6LL Jan 14 23:44:42.121210 containerd[1618]: time="2026-01-14T23:44:42.121071533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84c479b4c5-75mhg,Uid:7c662993-1a64-424b-835f-c3688665f281,Namespace:calico-apiserver,Attempt:0,}" Jan 14 23:44:42.289698 systemd-networkd[1503]: cali4ee8e9796e8: Link UP Jan 14 23:44:42.290926 systemd-networkd[1503]: cali4ee8e9796e8: Gained carrier Jan 14 23:44:42.315570 containerd[1618]: 2026-01-14 23:44:42.185 [INFO][4340] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--75mhg-eth0 calico-apiserver-84c479b4c5- calico-apiserver 7c662993-1a64-424b-835f-c3688665f281 846 0 2026-01-14 23:44:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84c479b4c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515-1-0-n-ec6f9a8ce8 calico-apiserver-84c479b4c5-75mhg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4ee8e9796e8 [] [] }} ContainerID="493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" Namespace="calico-apiserver" Pod="calico-apiserver-84c479b4c5-75mhg" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--75mhg-" Jan 14 23:44:42.315570 containerd[1618]: 2026-01-14 23:44:42.186 [INFO][4340] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" Namespace="calico-apiserver" Pod="calico-apiserver-84c479b4c5-75mhg" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--75mhg-eth0" Jan 14 23:44:42.315570 containerd[1618]: 2026-01-14 23:44:42.221 [INFO][4352] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" HandleID="k8s-pod-network.493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" 
Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--75mhg-eth0" Jan 14 23:44:42.316111 containerd[1618]: 2026-01-14 23:44:42.222 [INFO][4352] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" HandleID="k8s-pod-network.493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--75mhg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515-1-0-n-ec6f9a8ce8", "pod":"calico-apiserver-84c479b4c5-75mhg", "timestamp":"2026-01-14 23:44:42.221053723 +0000 UTC"}, Hostname:"ci-4515-1-0-n-ec6f9a8ce8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:44:42.316111 containerd[1618]: 2026-01-14 23:44:42.222 [INFO][4352] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:44:42.316111 containerd[1618]: 2026-01-14 23:44:42.222 [INFO][4352] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 23:44:42.316111 containerd[1618]: 2026-01-14 23:44:42.223 [INFO][4352] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-ec6f9a8ce8' Jan 14 23:44:42.316111 containerd[1618]: 2026-01-14 23:44:42.238 [INFO][4352] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:42.316111 containerd[1618]: 2026-01-14 23:44:42.244 [INFO][4352] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:42.316111 containerd[1618]: 2026-01-14 23:44:42.252 [INFO][4352] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:42.316111 containerd[1618]: 2026-01-14 23:44:42.255 [INFO][4352] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:42.316111 containerd[1618]: 2026-01-14 23:44:42.259 [INFO][4352] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:42.317571 containerd[1618]: 2026-01-14 23:44:42.259 [INFO][4352] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:42.317571 containerd[1618]: 2026-01-14 23:44:42.262 [INFO][4352] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c Jan 14 23:44:42.317571 containerd[1618]: 2026-01-14 23:44:42.269 [INFO][4352] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:42.317571 containerd[1618]: 2026-01-14 23:44:42.281 [INFO][4352] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.94.68/26] block=192.168.94.64/26 handle="k8s-pod-network.493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:42.317571 containerd[1618]: 2026-01-14 23:44:42.281 [INFO][4352] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: 
[192.168.94.68/26] handle="k8s-pod-network.493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:42.317571 containerd[1618]: 2026-01-14 23:44:42.281 [INFO][4352] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 23:44:42.317571 containerd[1618]: 2026-01-14 23:44:42.281 [INFO][4352] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.94.68/26] IPv6=[] ContainerID="493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" HandleID="k8s-pod-network.493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--75mhg-eth0" Jan 14 23:44:42.317720 containerd[1618]: 2026-01-14 23:44:42.284 [INFO][4340] cni-plugin/k8s.go 418: Populated endpoint ContainerID="493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" Namespace="calico-apiserver" Pod="calico-apiserver-84c479b4c5-75mhg" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--75mhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--75mhg-eth0", GenerateName:"calico-apiserver-84c479b4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c662993-1a64-424b-835f-c3688665f281", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84c479b4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-ec6f9a8ce8", ContainerID:"", Pod:"calico-apiserver-84c479b4c5-75mhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4ee8e9796e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:42.317785 containerd[1618]: 2026-01-14 23:44:42.284 [INFO][4340] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.68/32] ContainerID="493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" Namespace="calico-apiserver" Pod="calico-apiserver-84c479b4c5-75mhg" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--75mhg-eth0" Jan 14 23:44:42.317785 containerd[1618]: 2026-01-14 23:44:42.284 [INFO][4340] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ee8e9796e8 ContainerID="493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" Namespace="calico-apiserver" Pod="calico-apiserver-84c479b4c5-75mhg" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--75mhg-eth0" Jan 14 23:44:42.317785 containerd[1618]: 2026-01-14 23:44:42.288 [INFO][4340] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" Namespace="calico-apiserver" Pod="calico-apiserver-84c479b4c5-75mhg" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--75mhg-eth0" Jan 14 23:44:42.317845 containerd[1618]: 2026-01-14 23:44:42.289 [INFO][4340] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" Namespace="calico-apiserver" Pod="calico-apiserver-84c479b4c5-75mhg" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--75mhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--75mhg-eth0", GenerateName:"calico-apiserver-84c479b4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"7c662993-1a64-424b-835f-c3688665f281", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84c479b4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-ec6f9a8ce8", ContainerID:"493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c", Pod:"calico-apiserver-84c479b4c5-75mhg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.94.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4ee8e9796e8", MAC:"86:c6:4f:2c:02:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:42.317892 containerd[1618]: 2026-01-14 23:44:42.310 [INFO][4340] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" Namespace="calico-apiserver" Pod="calico-apiserver-84c479b4c5-75mhg" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--apiserver--84c479b4c5--75mhg-eth0" Jan 14 23:44:42.331000 audit[4366]: NETFILTER_CFG table=filter:129 family=2 entries=51 op=nft_register_chain pid=4366 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:42.331000 audit[4366]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27116 a0=3 a1=fffff6efc770 a2=0 a3=ffffbd2a2fa8 items=0 ppid=3930 pid=4366 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.331000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:42.345747 containerd[1618]: time="2026-01-14T23:44:42.345702279Z" level=info msg="connecting to shim 493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c" 
address="unix:///run/containerd/s/f1712d827fb4f76b130aa515fb48354556a8350c17ebbdf5d9b9a0dde701fd0c" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:42.376811 systemd[1]: Started cri-containerd-493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c.scope - libcontainer container 493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c. Jan 14 23:44:42.402750 kubelet[2853]: E0114 23:44:42.402647 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mbwp8" podUID="375f636a-b14c-4107-87ee-0c0815e9a9c0" Jan 14 23:44:42.404262 kubelet[2853]: E0114 23:44:42.404204 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" podUID="1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a" Jan 14 23:44:42.446000 audit: BPF prog-id=221 op=LOAD Jan 14 23:44:42.447000 audit: BPF prog-id=222 op=LOAD Jan 14 23:44:42.447000 audit[4387]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4376 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.447000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439333237306663336335336536613438336361373437613338326338 Jan 14 23:44:42.447000 audit: BPF prog-id=222 op=UNLOAD Jan 14 23:44:42.447000 audit[4387]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4376 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.447000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439333237306663336335336536613438336361373437613338326338 Jan 14 23:44:42.448000 audit: BPF prog-id=223 op=LOAD Jan 14 23:44:42.448000 audit[4387]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4376 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.448000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439333237306663336335336536613438336361373437613338326338 Jan 14 23:44:42.448000 audit: BPF prog-id=224 op=LOAD Jan 14 23:44:42.448000 audit[4387]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4376 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439333237306663336335336536613438336361373437613338326338 Jan 14 23:44:42.448000 audit: BPF prog-id=224 op=UNLOAD Jan 14 23:44:42.448000 audit[4387]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4376 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439333237306663336335336536613438336361373437613338326338 Jan 14 23:44:42.448000 audit: BPF prog-id=223 op=UNLOAD Jan 14 23:44:42.448000 audit[4387]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4376 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439333237306663336335336536613438336361373437613338326338 Jan 14 23:44:42.448000 audit: BPF prog-id=225 op=LOAD Jan 14 23:44:42.448000 audit[4387]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4376 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.448000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439333237306663336335336536613438336361373437613338326338 Jan 14 23:44:42.494000 audit[4407]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=4407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:42.494000 audit[4407]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe2a7e150 a2=0 a3=1 items=0 ppid=2990 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.494000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:42.509000 audit[4407]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=4407 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:42.509000 audit[4407]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe2a7e150 a2=0 a3=1 items=0 ppid=2990 pid=4407 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.509000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:42.520843 containerd[1618]: time="2026-01-14T23:44:42.520721603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84c479b4c5-75mhg,Uid:7c662993-1a64-424b-835f-c3688665f281,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"493270fc3c53e6a483ca747a382c81583b1113c9c16d55dc4284ddc30b8dc21c\"" Jan 14 23:44:42.535524 containerd[1618]: time="2026-01-14T23:44:42.535462821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:44:42.845625 systemd-networkd[1503]: cali400a7814b23: Gained IPv6LL Jan 14 23:44:42.889104 containerd[1618]: time="2026-01-14T23:44:42.888945855Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:42.891444 containerd[1618]: time="2026-01-14T23:44:42.891205365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:44:42.891444 containerd[1618]: time="2026-01-14T23:44:42.891327289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:42.893575 kubelet[2853]: E0114 23:44:42.891816 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:44:42.893575 kubelet[2853]: E0114 23:44:42.891882 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:44:42.893575 kubelet[2853]: E0114 23:44:42.891959 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-84c479b4c5-75mhg_calico-apiserver(7c662993-1a64-424b-835f-c3688665f281): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:42.893575 kubelet[2853]: E0114 23:44:42.891992 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" podUID="7c662993-1a64-424b-835f-c3688665f281" Jan 14 23:44:43.123520 containerd[1618]: time="2026-01-14T23:44:43.123064316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k7q5h,Uid:a5515c41-dbb8-4ede-a0b7-2e7883a18a57,Namespace:kube-system,Attempt:0,}" Jan 14 23:44:43.127192 containerd[1618]: time="2026-01-14T23:44:43.127038157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lppt6,Uid:b935d12e-83f9-44f9-b2c0-7537aed4125a,Namespace:kube-system,Attempt:0,}" Jan 14 23:44:43.129330 containerd[1618]: time="2026-01-14T23:44:43.129215504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7s4g4,Uid:35e31491-f658-475f-aa1a-411d37af2884,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:43.133731 containerd[1618]: time="2026-01-14T23:44:43.133693241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfc4b59c4-h9rl4,Uid:0595d5fe-8d8f-4e95-8e85-0c22f59bd781,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:43.415139 kubelet[2853]: E0114 23:44:43.414947 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" podUID="7c662993-1a64-424b-835f-c3688665f281" Jan 14 23:44:43.415816 kubelet[2853]: E0114 23:44:43.415732 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mbwp8" podUID="375f636a-b14c-4107-87ee-0c0815e9a9c0" Jan 14 23:44:43.477575 systemd-networkd[1503]: calie2aa41ff40d: Link UP Jan 14 23:44:43.481171 systemd-networkd[1503]: calie2aa41ff40d: Gained carrier Jan 14 23:44:43.487174 systemd-networkd[1503]: cali4ee8e9796e8: Gained IPv6LL Jan 14 23:44:43.533000 audit[4500]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=4500 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:43.533000 audit[4500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcfc0aaa0 a2=0 a3=1 items=0 ppid=2990 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.533000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:43.544673 containerd[1618]: 2026-01-14 23:44:43.275 [INFO][4420] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--k7q5h-eth0 coredns-66bc5c9577- kube-system a5515c41-dbb8-4ede-a0b7-2e7883a18a57 844 0 2026-01-14 23:43:59 +0000 UTC map[k8s-app:kube-dns 
pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515-1-0-n-ec6f9a8ce8 coredns-66bc5c9577-k7q5h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie2aa41ff40d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" Namespace="kube-system" Pod="coredns-66bc5c9577-k7q5h" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--k7q5h-" Jan 14 23:44:43.544673 containerd[1618]: 2026-01-14 23:44:43.278 [INFO][4420] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" Namespace="kube-system" Pod="coredns-66bc5c9577-k7q5h" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--k7q5h-eth0" Jan 14 23:44:43.544673 containerd[1618]: 2026-01-14 23:44:43.330 [INFO][4470] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" HandleID="k8s-pod-network.58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--k7q5h-eth0" Jan 14 23:44:43.545155 containerd[1618]: 2026-01-14 23:44:43.331 [INFO][4470] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" HandleID="k8s-pod-network.58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--k7q5h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515-1-0-n-ec6f9a8ce8", "pod":"coredns-66bc5c9577-k7q5h", "timestamp":"2026-01-14 23:44:43.330596228 +0000 UTC"}, Hostname:"ci-4515-1-0-n-ec6f9a8ce8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:44:43.545155 containerd[1618]: 2026-01-14 23:44:43.332 [INFO][4470] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:44:43.545155 containerd[1618]: 2026-01-14 23:44:43.332 [INFO][4470] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:44:43.545155 containerd[1618]: 2026-01-14 23:44:43.335 [INFO][4470] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-ec6f9a8ce8' Jan 14 23:44:43.545155 containerd[1618]: 2026-01-14 23:44:43.359 [INFO][4470] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.545155 containerd[1618]: 2026-01-14 23:44:43.380 [INFO][4470] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.545155 containerd[1618]: 2026-01-14 23:44:43.395 [INFO][4470] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.545155 containerd[1618]: 2026-01-14 23:44:43.399 [INFO][4470] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.545155 containerd[1618]: 2026-01-14 23:44:43.406 [INFO][4470] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.545367 containerd[1618]: 2026-01-14 23:44:43.406 [INFO][4470] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.545367 containerd[1618]: 2026-01-14 23:44:43.411 [INFO][4470] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565 Jan 14 23:44:43.545367 containerd[1618]: 2026-01-14 23:44:43.430 [INFO][4470] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.545367 containerd[1618]: 2026-01-14 23:44:43.453 [INFO][4470] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.94.69/26] block=192.168.94.64/26 handle="k8s-pod-network.58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.545367 containerd[1618]: 2026-01-14 23:44:43.455 [INFO][4470] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.69/26] handle="k8s-pod-network.58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.545367 containerd[1618]: 2026-01-14 23:44:43.455 [INFO][4470] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 23:44:43.545367 containerd[1618]: 2026-01-14 23:44:43.455 [INFO][4470] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.94.69/26] IPv6=[] ContainerID="58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" HandleID="k8s-pod-network.58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--k7q5h-eth0" Jan 14 23:44:43.547038 containerd[1618]: 2026-01-14 23:44:43.465 [INFO][4420] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" Namespace="kube-system" Pod="coredns-66bc5c9577-k7q5h" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--k7q5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--k7q5h-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a5515c41-dbb8-4ede-a0b7-2e7883a18a57", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-ec6f9a8ce8", ContainerID:"", Pod:"coredns-66bc5c9577-k7q5h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2aa41ff40d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:43.547038 containerd[1618]: 2026-01-14 23:44:43.468 [INFO][4420] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.69/32] ContainerID="58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" Namespace="kube-system" Pod="coredns-66bc5c9577-k7q5h" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--k7q5h-eth0" Jan 14 23:44:43.547038 containerd[1618]: 2026-01-14 23:44:43.468 [INFO][4420] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2aa41ff40d ContainerID="58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" Namespace="kube-system" Pod="coredns-66bc5c9577-k7q5h" 
WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--k7q5h-eth0" Jan 14 23:44:43.547038 containerd[1618]: 2026-01-14 23:44:43.481 [INFO][4420] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" Namespace="kube-system" Pod="coredns-66bc5c9577-k7q5h" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--k7q5h-eth0" Jan 14 23:44:43.547038 containerd[1618]: 2026-01-14 23:44:43.486 [INFO][4420] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" Namespace="kube-system" Pod="coredns-66bc5c9577-k7q5h" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--k7q5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--k7q5h-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a5515c41-dbb8-4ede-a0b7-2e7883a18a57", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-ec6f9a8ce8", ContainerID:"58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565", Pod:"coredns-66bc5c9577-k7q5h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2aa41ff40d", MAC:"d2:d6:8c:e6:38:ee", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:43.547631 containerd[1618]: 2026-01-14 23:44:43.541 [INFO][4420] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" Namespace="kube-system" Pod="coredns-66bc5c9577-k7q5h" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--k7q5h-eth0" Jan 14 23:44:43.537000 audit[4500]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=4500 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 
23:44:43.537000 audit[4500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcfc0aaa0 a2=0 a3=1 items=0 ppid=2990 pid=4500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.537000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:43.574337 containerd[1618]: time="2026-01-14T23:44:43.574255926Z" level=info msg="connecting to shim 58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565" address="unix:///run/containerd/s/63e0ee0b34f673cd00209e8e5d675e4a250e0d70d4ec924d0c592b0086cd5e20" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:43.620692 systemd[1]: Started cri-containerd-58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565.scope - libcontainer container 58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565. Jan 14 23:44:43.631056 systemd-networkd[1503]: cali6b48f2b7e12: Link UP Jan 14 23:44:43.632516 systemd-networkd[1503]: cali6b48f2b7e12: Gained carrier Jan 14 23:44:43.661000 audit: BPF prog-id=226 op=LOAD Jan 14 23:44:43.663000 audit: BPF prog-id=227 op=LOAD Jan 14 23:44:43.663000 audit[4524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4513 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538663239666134633931333933303034623931636133656231326637 Jan 14 23:44:43.664000 audit: BPF prog-id=227 op=UNLOAD Jan 14 23:44:43.664000 audit[4524]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4513 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538663239666134633931333933303034623931636133656231326637 Jan 14 23:44:43.664000 audit: BPF prog-id=228 op=LOAD Jan 14 23:44:43.664000 audit[4524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4513 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538663239666134633931333933303034623931636133656231326637 Jan 14 23:44:43.664000 audit: BPF prog-id=229 op=LOAD Jan 14 23:44:43.664000 audit[4524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4513 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538663239666134633931333933303034623931636133656231326637 Jan 14 23:44:43.664000 audit: BPF prog-id=229 op=UNLOAD Jan 14 23:44:43.664000 audit[4524]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4513 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538663239666134633931333933303034623931636133656231326637 Jan 14 23:44:43.664000 audit: BPF prog-id=228 op=UNLOAD Jan 14 23:44:43.664000 audit[4524]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4513 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538663239666134633931333933303034623931636133656231326637 Jan 14 23:44:43.664000 audit: BPF prog-id=230 op=LOAD Jan 14 23:44:43.664000 audit[4524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4513 pid=4524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.664000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538663239666134633931333933303034623931636133656231326637 Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.297 [INFO][4439] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--ec6f9a8ce8-k8s-csi--node--driver--7s4g4-eth0 csi-node-driver- calico-system 35e31491-f658-475f-aa1a-411d37af2884 746 0 2026-01-14 23:44:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515-1-0-n-ec6f9a8ce8 csi-node-driver-7s4g4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6b48f2b7e12 [] [] }} ContainerID="9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" Namespace="calico-system" Pod="csi-node-driver-7s4g4" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-csi--node--driver--7s4g4-" Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.299 [INFO][4439] cni-plugin/k8s.go 74: 
Extracted identifiers for CmdAddK8s ContainerID="9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" Namespace="calico-system" Pod="csi-node-driver-7s4g4" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-csi--node--driver--7s4g4-eth0" Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.398 [INFO][4482] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" HandleID="k8s-pod-network.9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-csi--node--driver--7s4g4-eth0" Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.400 [INFO][4482] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" HandleID="k8s-pod-network.9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-csi--node--driver--7s4g4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b870), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-n-ec6f9a8ce8", "pod":"csi-node-driver-7s4g4", "timestamp":"2026-01-14 23:44:43.39893624 +0000 UTC"}, Hostname:"ci-4515-1-0-n-ec6f9a8ce8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.400 [INFO][4482] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.456 [INFO][4482] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.456 [INFO][4482] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-ec6f9a8ce8' Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.515 [INFO][4482] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.539 [INFO][4482] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.556 [INFO][4482] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.564 [INFO][4482] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.571 [INFO][4482] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.571 [INFO][4482] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.574 [INFO][4482] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.586 [INFO][4482] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.94.64/26 
handle="k8s-pod-network.9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.612 [INFO][4482] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.94.70/26] block=192.168.94.64/26 handle="k8s-pod-network.9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.612 [INFO][4482] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.70/26] handle="k8s-pod-network.9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.614 [INFO][4482] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 23:44:43.678635 containerd[1618]: 2026-01-14 23:44:43.614 [INFO][4482] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.94.70/26] IPv6=[] ContainerID="9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" HandleID="k8s-pod-network.9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-csi--node--driver--7s4g4-eth0" Jan 14 23:44:43.679324 containerd[1618]: 2026-01-14 23:44:43.626 [INFO][4439] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" Namespace="calico-system" Pod="csi-node-driver-7s4g4" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-csi--node--driver--7s4g4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--ec6f9a8ce8-k8s-csi--node--driver--7s4g4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"35e31491-f658-475f-aa1a-411d37af2884", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-ec6f9a8ce8", ContainerID:"", Pod:"csi-node-driver-7s4g4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6b48f2b7e12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:43.679324 containerd[1618]: 2026-01-14 23:44:43.626 [INFO][4439] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.70/32] ContainerID="9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" Namespace="calico-system" Pod="csi-node-driver-7s4g4" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-csi--node--driver--7s4g4-eth0" Jan 14 23:44:43.679324 containerd[1618]: 2026-01-14 23:44:43.627 [INFO][4439] cni-plugin/dataplane_linux.go 69: 
Setting the host side veth name to cali6b48f2b7e12 ContainerID="9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" Namespace="calico-system" Pod="csi-node-driver-7s4g4" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-csi--node--driver--7s4g4-eth0" Jan 14 23:44:43.679324 containerd[1618]: 2026-01-14 23:44:43.642 [INFO][4439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" Namespace="calico-system" Pod="csi-node-driver-7s4g4" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-csi--node--driver--7s4g4-eth0" Jan 14 23:44:43.679324 containerd[1618]: 2026-01-14 23:44:43.644 [INFO][4439] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" Namespace="calico-system" Pod="csi-node-driver-7s4g4" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-csi--node--driver--7s4g4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--ec6f9a8ce8-k8s-csi--node--driver--7s4g4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"35e31491-f658-475f-aa1a-411d37af2884", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-ec6f9a8ce8", ContainerID:"9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be", Pod:"csi-node-driver-7s4g4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6b48f2b7e12", MAC:"ea:ad:80:be:52:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:43.679324 containerd[1618]: 2026-01-14 23:44:43.668 [INFO][4439] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" Namespace="calico-system" Pod="csi-node-driver-7s4g4" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-csi--node--driver--7s4g4-eth0" Jan 14 23:44:43.681000 audit[4548]: NETFILTER_CFG table=filter:134 family=2 entries=56 op=nft_register_chain pid=4548 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:43.681000 audit[4548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27764 a0=3 a1=ffffcc926a30 a2=0 a3=ffffafec6fa8 items=0 ppid=3930 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.681000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:43.738387 containerd[1618]: time="2026-01-14T23:44:43.738319588Z" level=info msg="connecting to shim 9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be" address="unix:///run/containerd/s/1272bc42eec1dc19472ed8e9ed3b663456cce4630d4458fffe472362bf1c01a4" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:43.776959 systemd-networkd[1503]: caliebdda38ded2: Link UP Jan 14 23:44:43.784898 systemd-networkd[1503]: caliebdda38ded2: Gained carrier Jan 14 23:44:43.828837 systemd[1]: Started cri-containerd-9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be.scope - libcontainer container 9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be. Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.272 [INFO][4415] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--lppt6-eth0 coredns-66bc5c9577- kube-system b935d12e-83f9-44f9-b2c0-7537aed4125a 848 0 2026-01-14 23:43:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515-1-0-n-ec6f9a8ce8 coredns-66bc5c9577-lppt6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliebdda38ded2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" Namespace="kube-system" Pod="coredns-66bc5c9577-lppt6" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--lppt6-" Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.278 [INFO][4415] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" Namespace="kube-system" Pod="coredns-66bc5c9577-lppt6" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--lppt6-eth0" Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.442 [INFO][4472] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" HandleID="k8s-pod-network.ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--lppt6-eth0" Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.442 [INFO][4472] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" HandleID="k8s-pod-network.ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--lppt6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c9ea0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515-1-0-n-ec6f9a8ce8", "pod":"coredns-66bc5c9577-lppt6", "timestamp":"2026-01-14 23:44:43.442183884 +0000 UTC"}, Hostname:"ci-4515-1-0-n-ec6f9a8ce8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.447 [INFO][4472] 
ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.619 [INFO][4472] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.619 [INFO][4472] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-ec6f9a8ce8' Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.647 [INFO][4472] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.658 [INFO][4472] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.676 [INFO][4472] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.684 [INFO][4472] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.692 [INFO][4472] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.692 [INFO][4472] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.697 [INFO][4472] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024 Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.722 [INFO][4472] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.744 [INFO][4472] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.94.71/26] block=192.168.94.64/26 handle="k8s-pod-network.ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.744 [INFO][4472] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.71/26] handle="k8s-pod-network.ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.744 [INFO][4472] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
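The `audit: PROCTITLE` records interleaved above carry the audited command line as a hex string with NUL-separated arguments. A minimal Go sketch decodes one of them; the hex value is copied verbatim from the iptables-nft-restore audit record above, and nothing beyond that is assumed:

```go
package main

import (
	"bytes"
	"encoding/hex"
	"fmt"
)

func main() {
	// PROCTITLE value from the audit record above; argv entries are
	// separated by NUL bytes in the decoded form.
	const proctitle = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	argv := bytes.Split(raw, []byte{0})
	fmt.Println(string(bytes.Join(argv, []byte{' '})))
	// Output: iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}
```

The runc PROCTITLE records in this section decode the same way (to `runc --root /run/containerd/runc/k8s.io --log ...` invocations).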
Jan 14 23:44:43.830380 containerd[1618]: 2026-01-14 23:44:43.745 [INFO][4472] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.94.71/26] IPv6=[] ContainerID="ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" HandleID="k8s-pod-network.ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--lppt6-eth0" Jan 14 23:44:43.830970 containerd[1618]: 2026-01-14 23:44:43.750 [INFO][4415] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" Namespace="kube-system" Pod="coredns-66bc5c9577-lppt6" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--lppt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--lppt6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b935d12e-83f9-44f9-b2c0-7537aed4125a", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-ec6f9a8ce8", ContainerID:"", Pod:"coredns-66bc5c9577-lppt6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebdda38ded2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:43.830970 containerd[1618]: 2026-01-14 23:44:43.752 [INFO][4415] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.71/32] ContainerID="ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" Namespace="kube-system" Pod="coredns-66bc5c9577-lppt6" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--lppt6-eth0" Jan 14 23:44:43.830970 containerd[1618]: 2026-01-14 23:44:43.753 [INFO][4415] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliebdda38ded2 ContainerID="ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" Namespace="kube-system" Pod="coredns-66bc5c9577-lppt6" 
WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--lppt6-eth0" Jan 14 23:44:43.830970 containerd[1618]: 2026-01-14 23:44:43.785 [INFO][4415] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" Namespace="kube-system" Pod="coredns-66bc5c9577-lppt6" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--lppt6-eth0" Jan 14 23:44:43.830970 containerd[1618]: 2026-01-14 23:44:43.786 [INFO][4415] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" Namespace="kube-system" Pod="coredns-66bc5c9577-lppt6" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--lppt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--lppt6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b935d12e-83f9-44f9-b2c0-7537aed4125a", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-ec6f9a8ce8", ContainerID:"ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024", Pod:"coredns-66bc5c9577-lppt6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebdda38ded2", MAC:"fe:6c:e3:92:cb:4a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:43.832388 containerd[1618]: 2026-01-14 23:44:43.816 [INFO][4415] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" Namespace="kube-system" Pod="coredns-66bc5c9577-lppt6" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-coredns--66bc5c9577--lppt6-eth0" Jan 14 23:44:43.888000 audit[4607]: NETFILTER_CFG table=filter:135 family=2 entries=44 op=nft_register_chain pid=4607 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 
23:44:43.888000 audit[4607]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21920 a0=3 a1=fffffd3ee380 a2=0 a3=ffffaade6fa8 items=0 ppid=3930 pid=4607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.888000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:43.890517 containerd[1618]: time="2026-01-14T23:44:43.890223238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k7q5h,Uid:a5515c41-dbb8-4ede-a0b7-2e7883a18a57,Namespace:kube-system,Attempt:0,} returns sandbox id \"58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565\"" Jan 14 23:44:43.898000 audit: BPF prog-id=231 op=LOAD Jan 14 23:44:43.904094 containerd[1618]: time="2026-01-14T23:44:43.904041501Z" level=info msg="CreateContainer within sandbox \"58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 23:44:43.907421 containerd[1618]: time="2026-01-14T23:44:43.906205527Z" level=info msg="connecting to shim ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024" address="unix:///run/containerd/s/d9aa430eb2f61e37ba724a2df741d9064373a687451f522f1825820a3f310ba8" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:43.913000 audit: BPF prog-id=232 op=LOAD Jan 14 23:44:43.913000 audit[4581]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001da180 a2=98 a3=0 items=0 ppid=4567 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323930323032383562356132346532323461396461343839323730 Jan 14 23:44:43.917000 audit: BPF prog-id=232 op=UNLOAD Jan 14 23:44:43.917000 audit[4581]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4567 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.917000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323930323032383562356132346532323461396461343839323730 Jan 14 23:44:43.921000 audit: BPF prog-id=233 op=LOAD Jan 14 23:44:43.921000 audit[4581]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001da3e8 a2=98 a3=0 items=0 ppid=4567 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323930323032383562356132346532323461396461343839323730 Jan 
14 23:44:43.926885 systemd-networkd[1503]: cali2516fd8f465: Link UP Jan 14 23:44:43.928000 audit: BPF prog-id=234 op=LOAD Jan 14 23:44:43.928000 audit[4581]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001da168 a2=98 a3=0 items=0 ppid=4567 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323930323032383562356132346532323461396461343839323730 Jan 14 23:44:43.928000 audit: BPF prog-id=234 op=UNLOAD Jan 14 23:44:43.928000 audit[4581]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4567 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323930323032383562356132346532323461396461343839323730 Jan 14 23:44:43.934000 audit: BPF prog-id=233 op=UNLOAD Jan 14 23:44:43.934000 audit[4581]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4567 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323930323032383562356132346532323461396461343839323730 Jan 14 23:44:43.937323 systemd-networkd[1503]: cali2516fd8f465: Gained carrier Jan 14 23:44:43.935000 audit: BPF prog-id=235 op=LOAD Jan 14 23:44:43.935000 audit[4581]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001da648 a2=98 a3=0 items=0 ppid=4567 pid=4581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3964323930323032383562356132346532323461396461343839323730 Jan 14 23:44:43.957022 containerd[1618]: time="2026-01-14T23:44:43.956438305Z" level=info msg="Container 3f80198adf7e0fe9fe557713cd00b3cb18beb06d75a1992ff7c8946b085ef5d8: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:44:43.978040 containerd[1618]: time="2026-01-14T23:44:43.977899801Z" level=info msg="CreateContainer within sandbox \"58f29fa4c91393004b91ca3eb12f78a45c0c30c0d60c48d7cdfe390762ebb565\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3f80198adf7e0fe9fe557713cd00b3cb18beb06d75a1992ff7c8946b085ef5d8\"" Jan 14 23:44:43.982428 containerd[1618]: time="2026-01-14T23:44:43.982123531Z" level=info msg="StartContainer for 
\"3f80198adf7e0fe9fe557713cd00b3cb18beb06d75a1992ff7c8946b085ef5d8\"" Jan 14 23:44:43.992750 systemd[1]: Started cri-containerd-ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024.scope - libcontainer container ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024. Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.309 [INFO][4444] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--kube--controllers--7bfc4b59c4--h9rl4-eth0 calico-kube-controllers-7bfc4b59c4- calico-system 0595d5fe-8d8f-4e95-8e85-0c22f59bd781 847 0 2026-01-14 23:44:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bfc4b59c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515-1-0-n-ec6f9a8ce8 calico-kube-controllers-7bfc4b59c4-h9rl4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2516fd8f465 [] [] }} ContainerID="15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" Namespace="calico-system" Pod="calico-kube-controllers-7bfc4b59c4-h9rl4" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--kube--controllers--7bfc4b59c4--h9rl4-" Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.309 [INFO][4444] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" Namespace="calico-system" Pod="calico-kube-controllers-7bfc4b59c4-h9rl4" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--kube--controllers--7bfc4b59c4--h9rl4-eth0" Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.470 [INFO][4485] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" HandleID="k8s-pod-network.15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--kube--controllers--7bfc4b59c4--h9rl4-eth0" Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.478 [INFO][4485] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" HandleID="k8s-pod-network.15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--kube--controllers--7bfc4b59c4--h9rl4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003285c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-n-ec6f9a8ce8", "pod":"calico-kube-controllers-7bfc4b59c4-h9rl4", "timestamp":"2026-01-14 23:44:43.470260223 +0000 UTC"}, Hostname:"ci-4515-1-0-n-ec6f9a8ce8", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.478 [INFO][4485] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.744 [INFO][4485] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.747 [INFO][4485] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-ec6f9a8ce8' Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.780 [INFO][4485] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.805 [INFO][4485] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.833 [INFO][4485] ipam/ipam.go 511: Trying affinity for 192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.843 [INFO][4485] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.849 [INFO][4485] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.64/26 host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.850 [INFO][4485] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.94.64/26 handle="k8s-pod-network.15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.857 [INFO][4485] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.866 [INFO][4485] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.94.64/26 handle="k8s-pod-network.15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.888 [INFO][4485] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.94.72/26] block=192.168.94.64/26 handle="k8s-pod-network.15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.890 [INFO][4485] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.72/26] handle="k8s-pod-network.15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" host="ci-4515-1-0-n-ec6f9a8ce8" Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.890 [INFO][4485] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 23:44:43.995256 containerd[1618]: 2026-01-14 23:44:43.890 [INFO][4485] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.94.72/26] IPv6=[] ContainerID="15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" HandleID="k8s-pod-network.15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" Workload="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--kube--controllers--7bfc4b59c4--h9rl4-eth0" Jan 14 23:44:43.995917 containerd[1618]: 2026-01-14 23:44:43.904 [INFO][4444] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" Namespace="calico-system" Pod="calico-kube-controllers-7bfc4b59c4-h9rl4" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--kube--controllers--7bfc4b59c4--h9rl4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--kube--controllers--7bfc4b59c4--h9rl4-eth0", GenerateName:"calico-kube-controllers-7bfc4b59c4-", Namespace:"calico-system", SelfLink:"", UID:"0595d5fe-8d8f-4e95-8e85-0c22f59bd781", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bfc4b59c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-ec6f9a8ce8", ContainerID:"", Pod:"calico-kube-controllers-7bfc4b59c4-h9rl4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2516fd8f465", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:43.995917 containerd[1618]: 2026-01-14 23:44:43.904 [INFO][4444] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.72/32] ContainerID="15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" Namespace="calico-system" Pod="calico-kube-controllers-7bfc4b59c4-h9rl4" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--kube--controllers--7bfc4b59c4--h9rl4-eth0" Jan 14 23:44:43.995917 containerd[1618]: 2026-01-14 23:44:43.904 [INFO][4444] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2516fd8f465 ContainerID="15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" Namespace="calico-system" Pod="calico-kube-controllers-7bfc4b59c4-h9rl4" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--kube--controllers--7bfc4b59c4--h9rl4-eth0" Jan 14 23:44:43.995917 containerd[1618]: 2026-01-14 23:44:43.942 [INFO][4444] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" Namespace="calico-system" Pod="calico-kube-controllers-7bfc4b59c4-h9rl4" 
WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--kube--controllers--7bfc4b59c4--h9rl4-eth0" Jan 14 23:44:43.995917 containerd[1618]: 2026-01-14 23:44:43.945 [INFO][4444] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" Namespace="calico-system" Pod="calico-kube-controllers-7bfc4b59c4-h9rl4" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--kube--controllers--7bfc4b59c4--h9rl4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--kube--controllers--7bfc4b59c4--h9rl4-eth0", GenerateName:"calico-kube-controllers-7bfc4b59c4-", Namespace:"calico-system", SelfLink:"", UID:"0595d5fe-8d8f-4e95-8e85-0c22f59bd781", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bfc4b59c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-ec6f9a8ce8", ContainerID:"15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e", Pod:"calico-kube-controllers-7bfc4b59c4-h9rl4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2516fd8f465", MAC:"52:c8:17:98:35:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:43.995917 containerd[1618]: 2026-01-14 23:44:43.973 [INFO][4444] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" Namespace="calico-system" Pod="calico-kube-controllers-7bfc4b59c4-h9rl4" WorkloadEndpoint="ci--4515--1--0--n--ec6f9a8ce8-k8s-calico--kube--controllers--7bfc4b59c4--h9rl4-eth0" Jan 14 23:44:44.009791 containerd[1618]: time="2026-01-14T23:44:44.009679090Z" level=info msg="connecting to shim 3f80198adf7e0fe9fe557713cd00b3cb18beb06d75a1992ff7c8946b085ef5d8" address="unix:///run/containerd/s/63e0ee0b34f673cd00209e8e5d675e4a250e0d70d4ec924d0c592b0086cd5e20" protocol=ttrpc version=3 Jan 14 23:44:44.034000 audit: BPF prog-id=236 op=LOAD Jan 14 23:44:44.034000 audit: BPF prog-id=237 op=LOAD Jan 14 23:44:44.034000 audit[4630]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000228180 a2=98 a3=0 items=0 ppid=4618 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.034000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361393431346331643639663530643061363130643037623663313361 Jan 14 23:44:44.035000 audit: BPF prog-id=237 op=UNLOAD Jan 14 23:44:44.035000 audit[4630]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4618 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361393431346331643639663530643061363130643037623663313361 Jan 14 23:44:44.035000 audit: BPF prog-id=238 op=LOAD Jan 14 23:44:44.035000 audit[4630]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002283e8 a2=98 a3=0 items=0 ppid=4618 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361393431346331643639663530643061363130643037623663313361 Jan 14 23:44:44.036000 audit: BPF prog-id=239 op=LOAD Jan 14 23:44:44.038000 audit[4658]: NETFILTER_CFG table=filter:136 family=2 entries=36 op=nft_register_chain pid=4658 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:44.036000 audit[4630]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000228168 a2=98 a3=0 items=0 ppid=4618 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.036000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361393431346331643639663530643061363130643037623663313361 Jan 14 23:44:44.038000 audit: BPF prog-id=239 op=UNLOAD Jan 14 23:44:44.038000 audit[4630]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4618 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361393431346331643639663530643061363130643037623663313361 Jan 14 23:44:44.038000 audit: BPF prog-id=238 op=UNLOAD Jan 14 23:44:44.038000 audit[4630]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4618 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.038000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361393431346331643639663530643061363130643037623663313361 Jan 14 23:44:44.039000 audit: BPF prog-id=240 op=LOAD Jan 14 23:44:44.039000 audit[4630]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000228648 a2=98 a3=0 items=0 ppid=4618 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.039000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361393431346331643639663530643061363130643037623663313361 Jan 14 23:44:44.038000 audit[4658]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19176 a0=3 a1=ffffd107e380 a2=0 a3=ffffb4a03fa8 items=0 ppid=3930 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.038000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:44.060728 containerd[1618]: time="2026-01-14T23:44:44.060232254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7s4g4,Uid:35e31491-f658-475f-aa1a-411d37af2884,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d29020285b5a24e224a9da48927020b0ffc3e3e2b21696abbd12d34f25894be\"" Jan 14 23:44:44.072228 containerd[1618]: time="2026-01-14T23:44:44.071565956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 23:44:44.073506 systemd[1]: Started cri-containerd-3f80198adf7e0fe9fe557713cd00b3cb18beb06d75a1992ff7c8946b085ef5d8.scope - libcontainer container 3f80198adf7e0fe9fe557713cd00b3cb18beb06d75a1992ff7c8946b085ef5d8. 
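
The audit records interleaved above are easier to read once decoded: arch=c00000b7 is the AArch64 audit architecture, syscall=280 is bpf(2) and syscall=57 is close(2) on the arm64 syscall table, and each proctitle= field is the issuing process's command line, NUL-separated, hex-encoded, and truncated by the audit subsystem (which is why the task IDs at the end of the runc entries appear cut off). A minimal decoding sketch in Python (the decode_proctitle helper is ours, not part of any tool shown in the log), applied to the iptables-nft-restore proctitle logged above:

    def decode_proctitle(hex_field: str) -> str:
        # audit stores argv as NUL-separated bytes and prints them hex-encoded
        return bytes.fromhex(hex_field).replace(b"\x00", b" ").decode()

    print(decode_proctitle(
        "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
        "002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
    ))
    # -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
    # The runc proctitles decode the same way, to:
    #   runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<task ID, truncated by audit>
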
Jan 14 23:44:44.111625 containerd[1618]: time="2026-01-14T23:44:44.111577682Z" level=info msg="connecting to shim 15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e" address="unix:///run/containerd/s/b352dea8e189c39c8f05cbdd909e9229b70e3c42c5f7cb0a23b04b6c2cfe1f86" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:44.126000 audit: BPF prog-id=241 op=LOAD Jan 14 23:44:44.128000 audit: BPF prog-id=242 op=LOAD Jan 14 23:44:44.128000 audit[4660]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000228180 a2=98 a3=0 items=0 ppid=4513 pid=4660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366383031393861646637653066653966653535373731336364303062 Jan 14 23:44:44.128000 audit: BPF prog-id=242 op=UNLOAD Jan 14 23:44:44.128000 audit[4660]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4513 pid=4660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.128000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366383031393861646637653066653966653535373731336364303062 Jan 14 23:44:44.129000 audit: BPF prog-id=243 op=LOAD Jan 14 23:44:44.130071 containerd[1618]: time="2026-01-14T23:44:44.129695588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lppt6,Uid:b935d12e-83f9-44f9-b2c0-7537aed4125a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024\"" Jan 14 23:44:44.129000 audit[4660]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002283e8 a2=98 a3=0 items=0 ppid=4513 pid=4660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366383031393861646637653066653966653535373731336364303062 Jan 14 23:44:44.130000 audit: BPF prog-id=244 op=LOAD Jan 14 23:44:44.130000 audit[4660]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000228168 a2=98 a3=0 items=0 ppid=4513 pid=4660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366383031393861646637653066653966653535373731336364303062 Jan 14 23:44:44.130000 audit: BPF prog-id=244 op=UNLOAD Jan 14 23:44:44.130000 audit[4660]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4513 pid=4660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366383031393861646637653066653966653535373731336364303062 Jan 14 23:44:44.130000 audit: BPF prog-id=243 op=UNLOAD Jan 14 23:44:44.130000 audit[4660]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4513 pid=4660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366383031393861646637653066653966653535373731336364303062 Jan 14 23:44:44.130000 audit: BPF prog-id=245 op=LOAD Jan 14 23:44:44.133000 audit[4710]: NETFILTER_CFG table=filter:137 family=2 entries=48 op=nft_register_chain pid=4710 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:44.130000 audit[4660]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000228648 a2=98 a3=0 items=0 ppid=4513 pid=4660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.130000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366383031393861646637653066653966653535373731336364303062 Jan 14 23:44:44.133000 audit[4710]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23108 a0=3 a1=fffffddb7d00 a2=0 a3=ffffa5fe8fa8 items=0 ppid=3930 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.133000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:44.145093 containerd[1618]: time="2026-01-14T23:44:44.144647599Z" level=info msg="CreateContainer within sandbox \"ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 23:44:44.182650 containerd[1618]: time="2026-01-14T23:44:44.181960724Z" level=info msg="Container 992b4dd21e9e5b3ddf567feab395befdaa78d6b9312a3e602af50c8d7477cee4: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:44:44.193275 systemd[1]: Started cri-containerd-15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e.scope - libcontainer container 15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e. 
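
Every PullImage request in this window fails the same way a moment later: containerd reports "fetch failed after status: 404 Not Found" from ghcr.io for each ghcr.io/flatcar/calico/*:v3.30.4 image (csi, kube-controllers, node-driver-registrar, whisker, whisker-backend, apiserver, goldmane), and the kubelet surfaces that as ErrImagePull and then ImagePullBackOff. The registry response can be checked independently of the kubelet with the OCI distribution API; a minimal sketch, assuming ghcr.io's anonymous token endpoint works for public repositories (the tag_exists helper is illustrative, not something appearing in the log):

    import json
    import urllib.error
    import urllib.request

    def tag_exists(registry: str, repo: str, tag: str) -> bool:
        # Anonymous pull token via the standard OCI distribution token flow.
        token_url = (f"https://{registry}/token?service={registry}"
                     f"&scope=repository:{repo}:pull")
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        # HEAD the manifest for the tag; a 404 here is what containerd reports below.
        req = urllib.request.Request(
            f"https://{registry}/v2/{repo}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            },
            method="HEAD",
        )
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    # tag_exists("ghcr.io", "flatcar/calico/csi", "v3.30.4") would return False,
    # matching the NotFound errors the kubelet keeps logging for this tag.
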
Jan 14 23:44:44.214662 containerd[1618]: time="2026-01-14T23:44:44.214621549Z" level=info msg="StartContainer for \"3f80198adf7e0fe9fe557713cd00b3cb18beb06d75a1992ff7c8946b085ef5d8\" returns successfully" Jan 14 23:44:44.221469 containerd[1618]: time="2026-01-14T23:44:44.221346351Z" level=info msg="CreateContainer within sandbox \"ca9414c1d69f50d0a610d07b6c13ae19cce742f24811594d86f2064995158024\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"992b4dd21e9e5b3ddf567feab395befdaa78d6b9312a3e602af50c8d7477cee4\"" Jan 14 23:44:44.223689 containerd[1618]: time="2026-01-14T23:44:44.223643621Z" level=info msg="StartContainer for \"992b4dd21e9e5b3ddf567feab395befdaa78d6b9312a3e602af50c8d7477cee4\"" Jan 14 23:44:44.225148 containerd[1618]: time="2026-01-14T23:44:44.225099104Z" level=info msg="connecting to shim 992b4dd21e9e5b3ddf567feab395befdaa78d6b9312a3e602af50c8d7477cee4" address="unix:///run/containerd/s/d9aa430eb2f61e37ba724a2df741d9064373a687451f522f1825820a3f310ba8" protocol=ttrpc version=3 Jan 14 23:44:44.274008 systemd[1]: Started cri-containerd-992b4dd21e9e5b3ddf567feab395befdaa78d6b9312a3e602af50c8d7477cee4.scope - libcontainer container 992b4dd21e9e5b3ddf567feab395befdaa78d6b9312a3e602af50c8d7477cee4. Jan 14 23:44:44.309000 audit: BPF prog-id=246 op=LOAD Jan 14 23:44:44.310000 audit: BPF prog-id=247 op=LOAD Jan 14 23:44:44.310000 audit[4713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4701 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.310000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135636130636234333862663332656262666462383465633632376132 Jan 14 23:44:44.311000 audit: BPF prog-id=247 op=UNLOAD Jan 14 23:44:44.311000 audit[4713]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4701 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135636130636234333862663332656262666462383465633632376132 Jan 14 23:44:44.311000 audit: BPF prog-id=248 op=LOAD Jan 14 23:44:44.311000 audit[4713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4701 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135636130636234333862663332656262666462383465633632376132 Jan 14 23:44:44.311000 audit: BPF prog-id=249 op=LOAD Jan 14 23:44:44.311000 audit[4713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4701 pid=4713 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135636130636234333862663332656262666462383465633632376132 Jan 14 23:44:44.312000 audit: BPF prog-id=249 op=UNLOAD Jan 14 23:44:44.312000 audit[4713]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4701 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135636130636234333862663332656262666462383465633632376132 Jan 14 23:44:44.312000 audit: BPF prog-id=248 op=UNLOAD Jan 14 23:44:44.312000 audit[4713]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4701 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135636130636234333862663332656262666462383465633632376132 Jan 14 23:44:44.312000 audit: BPF prog-id=250 op=LOAD Jan 14 23:44:44.312000 audit[4713]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4701 pid=4713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135636130636234333862663332656262666462383465633632376132 Jan 14 23:44:44.326000 audit: BPF prog-id=251 op=LOAD Jan 14 23:44:44.327000 audit: BPF prog-id=252 op=LOAD Jan 14 23:44:44.327000 audit[4743]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4618 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939326234646432316539653562336464663536376665616233393562 Jan 14 23:44:44.327000 audit: BPF prog-id=252 op=UNLOAD Jan 14 23:44:44.327000 audit[4743]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4618 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939326234646432316539653562336464663536376665616233393562 Jan 14 23:44:44.327000 audit: BPF prog-id=253 op=LOAD Jan 14 23:44:44.327000 audit[4743]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4618 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939326234646432316539653562336464663536376665616233393562 Jan 14 23:44:44.327000 audit: BPF prog-id=254 op=LOAD Jan 14 23:44:44.327000 audit[4743]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4618 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939326234646432316539653562336464663536376665616233393562 Jan 14 23:44:44.327000 audit: BPF prog-id=254 op=UNLOAD Jan 14 23:44:44.327000 audit[4743]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4618 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.327000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939326234646432316539653562336464663536376665616233393562 Jan 14 23:44:44.328000 audit: BPF prog-id=253 op=UNLOAD Jan 14 23:44:44.328000 audit[4743]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4618 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.328000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939326234646432316539653562336464663536376665616233393562 Jan 14 23:44:44.328000 audit: BPF prog-id=255 op=LOAD Jan 14 23:44:44.328000 audit[4743]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4618 pid=4743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.328000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939326234646432316539653562336464663536376665616233393562 Jan 14 23:44:44.388892 containerd[1618]: time="2026-01-14T23:44:44.388587593Z" level=info msg="StartContainer for \"992b4dd21e9e5b3ddf567feab395befdaa78d6b9312a3e602af50c8d7477cee4\" returns successfully" Jan 14 23:44:44.406855 containerd[1618]: time="2026-01-14T23:44:44.406784502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bfc4b59c4-h9rl4,Uid:0595d5fe-8d8f-4e95-8e85-0c22f59bd781,Namespace:calico-system,Attempt:0,} returns sandbox id \"15ca0cb438bf32ebbfdb84ec627a2629a4186859cde8a644614be244cc4a406e\"" Jan 14 23:44:44.417080 containerd[1618]: time="2026-01-14T23:44:44.416813684Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:44.420065 containerd[1618]: time="2026-01-14T23:44:44.419647610Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:44:44.420065 containerd[1618]: time="2026-01-14T23:44:44.419731852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:44.421988 kubelet[2853]: E0114 23:44:44.421386 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:44:44.421988 kubelet[2853]: E0114 23:44:44.421461 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:44:44.424039 kubelet[2853]: E0114 23:44:44.423042 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7s4g4_calico-system(35e31491-f658-475f-aa1a-411d37af2884): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:44.424147 containerd[1618]: time="2026-01-14T23:44:44.423438204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 23:44:44.440735 kubelet[2853]: E0114 23:44:44.440364 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" podUID="7c662993-1a64-424b-835f-c3688665f281" Jan 14 23:44:44.465432 kubelet[2853]: I0114 23:44:44.463953 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-k7q5h" 
podStartSLOduration=45.463933905 podStartE2EDuration="45.463933905s" podCreationTimestamp="2026-01-14 23:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:44:44.459675657 +0000 UTC m=+51.457203764" watchObservedRunningTime="2026-01-14 23:44:44.463933905 +0000 UTC m=+51.461462012" Jan 14 23:44:44.534127 kubelet[2853]: I0114 23:44:44.533771 2853 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lppt6" podStartSLOduration=45.53374461 podStartE2EDuration="45.53374461s" podCreationTimestamp="2026-01-14 23:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:44:44.531166532 +0000 UTC m=+51.528694679" watchObservedRunningTime="2026-01-14 23:44:44.53374461 +0000 UTC m=+51.531272717" Jan 14 23:44:44.567000 audit[4790]: NETFILTER_CFG table=filter:138 family=2 entries=17 op=nft_register_rule pid=4790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:44.567000 audit[4790]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe42abb40 a2=0 a3=1 items=0 ppid=2990 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.567000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:44.572000 audit[4790]: NETFILTER_CFG table=nat:139 family=2 entries=35 op=nft_register_chain pid=4790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:44.572000 audit[4790]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffe42abb40 a2=0 a3=1 items=0 ppid=2990 pid=4790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.572000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:44.701576 systemd-networkd[1503]: calie2aa41ff40d: Gained IPv6LL Jan 14 23:44:44.703449 systemd-networkd[1503]: cali6b48f2b7e12: Gained IPv6LL Jan 14 23:44:44.769671 containerd[1618]: time="2026-01-14T23:44:44.769460396Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:44.771516 containerd[1618]: time="2026-01-14T23:44:44.771343893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 23:44:44.771593 containerd[1618]: time="2026-01-14T23:44:44.771503258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:44.772098 kubelet[2853]: E0114 23:44:44.771868 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:44:44.772166 
kubelet[2853]: E0114 23:44:44.772117 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:44:44.772358 kubelet[2853]: E0114 23:44:44.772331 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7bfc4b59c4-h9rl4_calico-system(0595d5fe-8d8f-4e95-8e85-0c22f59bd781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:44.772427 kubelet[2853]: E0114 23:44:44.772378 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" podUID="0595d5fe-8d8f-4e95-8e85-0c22f59bd781" Jan 14 23:44:44.773776 containerd[1618]: time="2026-01-14T23:44:44.773728645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:44:45.085939 systemd-networkd[1503]: cali2516fd8f465: Gained IPv6LL Jan 14 23:44:45.112606 containerd[1618]: time="2026-01-14T23:44:45.112418687Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:45.114823 containerd[1618]: time="2026-01-14T23:44:45.114725236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:44:45.115740 containerd[1618]: time="2026-01-14T23:44:45.115441177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:45.116078 kubelet[2853]: E0114 23:44:45.115632 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:44:45.117009 kubelet[2853]: E0114 23:44:45.116008 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:44:45.117199 kubelet[2853]: E0114 23:44:45.117139 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7s4g4_calico-system(35e31491-f658-475f-aa1a-411d37af2884): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:45.119266 kubelet[2853]: E0114 23:44:45.117336 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:44:45.447218 kubelet[2853]: E0114 23:44:45.446929 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:44:45.448455 kubelet[2853]: E0114 23:44:45.448300 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" podUID="0595d5fe-8d8f-4e95-8e85-0c22f59bd781" Jan 14 23:44:45.469911 systemd-networkd[1503]: caliebdda38ded2: Gained IPv6LL Jan 14 23:44:45.589000 audit[4793]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=4793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:45.589000 audit[4793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffef885e0 a2=0 a3=1 items=0 ppid=2990 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.589000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:45.607000 audit[4793]: NETFILTER_CFG table=nat:141 family=2 entries=56 op=nft_register_chain pid=4793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:45.607000 audit[4793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=fffffef885e0 a2=0 a3=1 items=0 ppid=2990 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.607000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:50.121028 containerd[1618]: time="2026-01-14T23:44:50.120973422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:44:50.460111 containerd[1618]: time="2026-01-14T23:44:50.459852686Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:50.461838 containerd[1618]: time="2026-01-14T23:44:50.461731019Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:44:50.462080 containerd[1618]: time="2026-01-14T23:44:50.461961505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:50.462345 kubelet[2853]: E0114 23:44:50.462217 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:44:50.462345 kubelet[2853]: E0114 23:44:50.462269 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:44:50.462747 kubelet[2853]: E0114 23:44:50.462349 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-74f68b674d-szjw2_calico-system(37539e9d-a75a-4f02-a310-797e90f63b91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:50.464632 containerd[1618]: time="2026-01-14T23:44:50.464584098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 23:44:50.811266 containerd[1618]: time="2026-01-14T23:44:50.811029134Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:50.812947 containerd[1618]: time="2026-01-14T23:44:50.812750702Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:44:50.812947 containerd[1618]: time="2026-01-14T23:44:50.812881226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:50.814026 kubelet[2853]: E0114 23:44:50.813771 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 
23:44:50.814026 kubelet[2853]: E0114 23:44:50.813840 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:44:50.814026 kubelet[2853]: E0114 23:44:50.813931 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-74f68b674d-szjw2_calico-system(37539e9d-a75a-4f02-a310-797e90f63b91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:50.814026 kubelet[2853]: E0114 23:44:50.813974 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f68b674d-szjw2" podUID="37539e9d-a75a-4f02-a310-797e90f63b91" Jan 14 23:44:53.124122 containerd[1618]: time="2026-01-14T23:44:53.124084851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:44:53.456859 containerd[1618]: time="2026-01-14T23:44:53.456345575Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:53.458914 containerd[1618]: time="2026-01-14T23:44:53.458690199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:44:53.459769 containerd[1618]: time="2026-01-14T23:44:53.458827123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:53.460022 kubelet[2853]: E0114 23:44:53.459985 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:44:53.461412 kubelet[2853]: E0114 23:44:53.460781 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:44:53.461412 kubelet[2853]: E0114 23:44:53.460903 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-84c479b4c5-8sg5s_calico-apiserver(1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:53.461412 kubelet[2853]: E0114 23:44:53.460938 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" podUID="1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a" Jan 14 23:44:55.120836 containerd[1618]: time="2026-01-14T23:44:55.120228213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 23:44:55.446366 containerd[1618]: time="2026-01-14T23:44:55.446078491Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:55.447873 containerd[1618]: time="2026-01-14T23:44:55.447702695Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 23:44:55.447873 containerd[1618]: time="2026-01-14T23:44:55.447713855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:55.448297 kubelet[2853]: E0114 23:44:55.448218 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:44:55.448297 kubelet[2853]: E0114 23:44:55.448274 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:44:55.448770 kubelet[2853]: E0114 23:44:55.448348 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mbwp8_calico-system(375f636a-b14c-4107-87ee-0c0815e9a9c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:55.448770 kubelet[2853]: E0114 23:44:55.448382 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mbwp8" podUID="375f636a-b14c-4107-87ee-0c0815e9a9c0" Jan 14 23:44:56.119519 containerd[1618]: time="2026-01-14T23:44:56.119432560Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:44:56.466935 containerd[1618]: time="2026-01-14T23:44:56.466748816Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:56.468704 containerd[1618]: time="2026-01-14T23:44:56.468609626Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:44:56.468851 containerd[1618]: time="2026-01-14T23:44:56.468749629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:56.469057 kubelet[2853]: E0114 23:44:56.469019 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:44:56.470055 kubelet[2853]: E0114 23:44:56.469443 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:44:56.470055 kubelet[2853]: E0114 23:44:56.469544 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-84c479b4c5-75mhg_calico-apiserver(7c662993-1a64-424b-835f-c3688665f281): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:56.470055 kubelet[2853]: E0114 23:44:56.469586 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" podUID="7c662993-1a64-424b-835f-c3688665f281" Jan 14 23:44:57.121880 containerd[1618]: time="2026-01-14T23:44:57.121507732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 23:44:57.458996 containerd[1618]: time="2026-01-14T23:44:57.458790288Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:57.462231 containerd[1618]: time="2026-01-14T23:44:57.461568389Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 23:44:57.462231 containerd[1618]: time="2026-01-14T23:44:57.461755033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:57.462504 kubelet[2853]: E0114 23:44:57.462131 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:44:57.462504 kubelet[2853]: E0114 23:44:57.462198 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:44:57.462504 kubelet[2853]: E0114 23:44:57.462306 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7bfc4b59c4-h9rl4_calico-system(0595d5fe-8d8f-4e95-8e85-0c22f59bd781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:57.462504 kubelet[2853]: E0114 23:44:57.462356 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" podUID="0595d5fe-8d8f-4e95-8e85-0c22f59bd781" Jan 14 23:44:59.119483 containerd[1618]: time="2026-01-14T23:44:59.119011359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 23:44:59.462390 containerd[1618]: time="2026-01-14T23:44:59.462228795Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:59.464534 containerd[1618]: time="2026-01-14T23:44:59.464343246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:44:59.464534 containerd[1618]: time="2026-01-14T23:44:59.464447645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:59.465097 kubelet[2853]: E0114 23:44:59.464948 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:44:59.465097 kubelet[2853]: E0114 23:44:59.465089 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:44:59.466321 kubelet[2853]: E0114 23:44:59.465697 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7s4g4_calico-system(35e31491-f658-475f-aa1a-411d37af2884): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:59.467628 containerd[1618]: time="2026-01-14T23:44:59.467540604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:44:59.798672 containerd[1618]: time="2026-01-14T23:44:59.798488363Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:59.801015 containerd[1618]: time="2026-01-14T23:44:59.800489456Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:44:59.801015 containerd[1618]: time="2026-01-14T23:44:59.800672174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:59.801199 kubelet[2853]: E0114 23:44:59.800896 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:44:59.801199 kubelet[2853]: E0114 23:44:59.800958 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:44:59.801199 kubelet[2853]: E0114 23:44:59.801060 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7s4g4_calico-system(35e31491-f658-475f-aa1a-411d37af2884): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:59.801199 kubelet[2853]: E0114 23:44:59.801122 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:45:06.122097 kubelet[2853]: E0114 23:45:06.122005 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f68b674d-szjw2" podUID="37539e9d-a75a-4f02-a310-797e90f63b91" Jan 14 23:45:07.122116 kubelet[2853]: E0114 23:45:07.122016 2853 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" podUID="1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a" Jan 14 23:45:08.120238 kubelet[2853]: E0114 23:45:08.120190 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mbwp8" podUID="375f636a-b14c-4107-87ee-0c0815e9a9c0" Jan 14 23:45:10.117600 kubelet[2853]: E0114 23:45:10.117502 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" podUID="7c662993-1a64-424b-835f-c3688665f281" Jan 14 23:45:11.118379 kubelet[2853]: E0114 23:45:11.118306 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" podUID="0595d5fe-8d8f-4e95-8e85-0c22f59bd781" Jan 14 23:45:12.123777 kubelet[2853]: E0114 23:45:12.123722 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:45:19.124438 containerd[1618]: time="2026-01-14T23:45:19.123007202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:45:19.466939 containerd[1618]: time="2026-01-14T23:45:19.466341898Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
23:45:19.469431 containerd[1618]: time="2026-01-14T23:45:19.468273302Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:45:19.469669 containerd[1618]: time="2026-01-14T23:45:19.468379982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:45:19.469870 kubelet[2853]: E0114 23:45:19.469808 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:45:19.470919 kubelet[2853]: E0114 23:45:19.469881 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:45:19.470919 kubelet[2853]: E0114 23:45:19.470264 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-84c479b4c5-8sg5s_calico-apiserver(1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:45:19.470919 kubelet[2853]: E0114 23:45:19.470312 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" podUID="1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a" Jan 14 23:45:19.471018 containerd[1618]: time="2026-01-14T23:45:19.470453306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:45:19.828564 containerd[1618]: time="2026-01-14T23:45:19.828122550Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:45:19.830289 containerd[1618]: time="2026-01-14T23:45:19.829938153Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:45:19.830289 containerd[1618]: time="2026-01-14T23:45:19.829974193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:45:19.831594 kubelet[2853]: E0114 23:45:19.830415 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:45:19.831594 kubelet[2853]: E0114 23:45:19.830464 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:45:19.831594 kubelet[2853]: E0114 23:45:19.830579 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-74f68b674d-szjw2_calico-system(37539e9d-a75a-4f02-a310-797e90f63b91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 23:45:19.832502 containerd[1618]: time="2026-01-14T23:45:19.832469878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 23:45:20.183976 containerd[1618]: time="2026-01-14T23:45:20.183219846Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:45:20.186011 containerd[1618]: time="2026-01-14T23:45:20.185803092Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:45:20.186011 containerd[1618]: time="2026-01-14T23:45:20.185913853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:45:20.186187 kubelet[2853]: E0114 23:45:20.186115 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:45:20.186187 kubelet[2853]: E0114 23:45:20.186164 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:45:20.186478 kubelet[2853]: E0114 23:45:20.186267 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-74f68b674d-szjw2_calico-system(37539e9d-a75a-4f02-a310-797e90f63b91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:45:20.186478 kubelet[2853]: E0114 23:45:20.186315 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f68b674d-szjw2" podUID="37539e9d-a75a-4f02-a310-797e90f63b91" Jan 14 
23:45:23.133155 containerd[1618]: time="2026-01-14T23:45:23.133095527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:45:23.488462 containerd[1618]: time="2026-01-14T23:45:23.487709610Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:45:23.490084 containerd[1618]: time="2026-01-14T23:45:23.489867538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:45:23.490212 containerd[1618]: time="2026-01-14T23:45:23.489890379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:45:23.490422 kubelet[2853]: E0114 23:45:23.490321 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:45:23.490969 kubelet[2853]: E0114 23:45:23.490415 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:45:23.491548 containerd[1618]: time="2026-01-14T23:45:23.490970743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 23:45:23.491673 kubelet[2853]: E0114 23:45:23.491430 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-84c479b4c5-75mhg_calico-apiserver(7c662993-1a64-424b-835f-c3688665f281): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:45:23.491673 kubelet[2853]: E0114 23:45:23.491496 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" podUID="7c662993-1a64-424b-835f-c3688665f281" Jan 14 23:45:23.843030 containerd[1618]: time="2026-01-14T23:45:23.842622814Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:45:23.844501 containerd[1618]: time="2026-01-14T23:45:23.844375741Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 23:45:23.844682 containerd[1618]: time="2026-01-14T23:45:23.844441382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 23:45:23.844908 kubelet[2853]: E0114 23:45:23.844859 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:45:23.845238 kubelet[2853]: E0114 23:45:23.844927 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:45:23.845238 kubelet[2853]: E0114 23:45:23.845047 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mbwp8_calico-system(375f636a-b14c-4107-87ee-0c0815e9a9c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 23:45:23.845238 kubelet[2853]: E0114 23:45:23.845102 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mbwp8" podUID="375f636a-b14c-4107-87ee-0c0815e9a9c0" Jan 14 23:45:26.120745 containerd[1618]: time="2026-01-14T23:45:26.120694433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 23:45:26.463706 containerd[1618]: time="2026-01-14T23:45:26.463231619Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:45:26.465035 containerd[1618]: time="2026-01-14T23:45:26.464892467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 23:45:26.465035 containerd[1618]: time="2026-01-14T23:45:26.465000908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 23:45:26.465281 kubelet[2853]: E0114 23:45:26.465207 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:45:26.466385 kubelet[2853]: E0114 23:45:26.466208 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:45:26.466385 kubelet[2853]: E0114 23:45:26.466357 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7bfc4b59c4-h9rl4_calico-system(0595d5fe-8d8f-4e95-8e85-0c22f59bd781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 23:45:26.467239 kubelet[2853]: E0114 23:45:26.466390 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" podUID="0595d5fe-8d8f-4e95-8e85-0c22f59bd781" Jan 14 23:45:27.126063 containerd[1618]: time="2026-01-14T23:45:27.126003204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 23:45:27.471482 containerd[1618]: time="2026-01-14T23:45:27.471312832Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:45:27.473799 containerd[1618]: time="2026-01-14T23:45:27.473668766Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:45:27.474193 containerd[1618]: time="2026-01-14T23:45:27.473723366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:45:27.475588 kubelet[2853]: E0114 23:45:27.475438 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:45:27.475588 kubelet[2853]: E0114 23:45:27.475540 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:45:27.476506 kubelet[2853]: E0114 23:45:27.476196 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7s4g4_calico-system(35e31491-f658-475f-aa1a-411d37af2884): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 23:45:27.477849 containerd[1618]: time="2026-01-14T23:45:27.477798430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:45:27.833757 containerd[1618]: time="2026-01-14T23:45:27.833499358Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:45:27.835218 containerd[1618]: time="2026-01-14T23:45:27.835001806Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:45:27.835218 containerd[1618]: time="2026-01-14T23:45:27.835139127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:45:27.835516 kubelet[2853]: E0114 23:45:27.835335 2853 log.go:32] "PullImage from image 
service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:45:27.835516 kubelet[2853]: E0114 23:45:27.835383 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:45:27.835516 kubelet[2853]: E0114 23:45:27.835485 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7s4g4_calico-system(35e31491-f658-475f-aa1a-411d37af2884): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:45:27.835684 kubelet[2853]: E0114 23:45:27.835530 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:45:30.275426 update_engine[1599]: I20260114 23:45:30.274508 1599 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 14 23:45:30.275426 update_engine[1599]: I20260114 23:45:30.274561 1599 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 14 23:45:30.275426 update_engine[1599]: I20260114 23:45:30.274852 1599 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 14 23:45:30.277114 update_engine[1599]: I20260114 23:45:30.276917 1599 omaha_request_params.cc:62] Current group set to beta Jan 14 23:45:30.278569 update_engine[1599]: I20260114 23:45:30.278139 1599 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 14 23:45:30.278569 update_engine[1599]: I20260114 23:45:30.278167 1599 update_attempter.cc:643] Scheduling an action processor start. 
Jan 14 23:45:30.278569 update_engine[1599]: I20260114 23:45:30.278185 1599 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 23:45:30.278569 update_engine[1599]: I20260114 23:45:30.278379 1599 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 14 23:45:30.278851 update_engine[1599]: I20260114 23:45:30.278822 1599 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 23:45:30.278936 update_engine[1599]: I20260114 23:45:30.278918 1599 omaha_request_action.cc:272] Request: Jan 14 23:45:30.278936 update_engine[1599]: Jan 14 23:45:30.278936 update_engine[1599]: Jan 14 23:45:30.278936 update_engine[1599]: Jan 14 23:45:30.278936 update_engine[1599]: Jan 14 23:45:30.278936 update_engine[1599]: Jan 14 23:45:30.278936 update_engine[1599]: Jan 14 23:45:30.278936 update_engine[1599]: Jan 14 23:45:30.278936 update_engine[1599]: Jan 14 23:45:30.281027 update_engine[1599]: I20260114 23:45:30.279271 1599 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 23:45:30.284061 update_engine[1599]: I20260114 23:45:30.283898 1599 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 23:45:30.286061 update_engine[1599]: I20260114 23:45:30.285817 1599 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 23:45:30.286893 update_engine[1599]: E20260114 23:45:30.286772 1599 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 23:45:30.286996 locksmithd[1660]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 14 23:45:30.287630 update_engine[1599]: I20260114 23:45:30.287518 1599 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 14 23:45:34.120686 kubelet[2853]: E0114 23:45:34.120591 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f68b674d-szjw2" podUID="37539e9d-a75a-4f02-a310-797e90f63b91" Jan 14 23:45:35.121836 kubelet[2853]: E0114 23:45:35.121217 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" podUID="1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a" Jan 14 23:45:36.117837 kubelet[2853]: E0114 23:45:36.117733 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mbwp8" podUID="375f636a-b14c-4107-87ee-0c0815e9a9c0" Jan 14 23:45:38.119484 kubelet[2853]: E0114 23:45:38.119263 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" podUID="7c662993-1a64-424b-835f-c3688665f281" Jan 14 23:45:40.120196 kubelet[2853]: E0114 23:45:40.120118 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" podUID="0595d5fe-8d8f-4e95-8e85-0c22f59bd781" Jan 14 23:45:40.239628 update_engine[1599]: I20260114 23:45:40.239025 1599 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 23:45:40.239628 update_engine[1599]: I20260114 23:45:40.239135 1599 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 23:45:40.239628 update_engine[1599]: I20260114 23:45:40.239573 1599 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 23:45:40.240428 update_engine[1599]: E20260114 23:45:40.240375 1599 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 23:45:40.240786 update_engine[1599]: I20260114 23:45:40.240713 1599 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 14 23:45:43.126771 kubelet[2853]: E0114 23:45:43.126678 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:45:46.121738 kubelet[2853]: E0114 23:45:46.121661 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f68b674d-szjw2" podUID="37539e9d-a75a-4f02-a310-797e90f63b91" Jan 14 23:45:47.119460 kubelet[2853]: E0114 23:45:47.119293 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" podUID="1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a" Jan 14 23:45:48.119470 kubelet[2853]: E0114 23:45:48.118998 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mbwp8" podUID="375f636a-b14c-4107-87ee-0c0815e9a9c0" Jan 14 23:45:49.121346 kubelet[2853]: E0114 23:45:49.121289 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" podUID="7c662993-1a64-424b-835f-c3688665f281" Jan 14 23:45:50.238916 update_engine[1599]: I20260114 23:45:50.238838 1599 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 23:45:50.239356 update_engine[1599]: I20260114 23:45:50.238935 1599 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 23:45:50.239356 update_engine[1599]: I20260114 23:45:50.239310 1599 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 23:45:50.239879 update_engine[1599]: E20260114 23:45:50.239830 1599 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 23:45:50.239954 update_engine[1599]: I20260114 23:45:50.239926 1599 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 14 23:45:52.118420 kubelet[2853]: E0114 23:45:52.117947 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" podUID="0595d5fe-8d8f-4e95-8e85-0c22f59bd781" Jan 14 23:45:57.121316 kubelet[2853]: E0114 23:45:57.121245 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:45:59.127424 kubelet[2853]: E0114 23:45:59.125783 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mbwp8" podUID="375f636a-b14c-4107-87ee-0c0815e9a9c0" Jan 14 23:45:59.127424 kubelet[2853]: E0114 23:45:59.126340 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" podUID="1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a" Jan 14 23:46:00.239212 update_engine[1599]: I20260114 23:46:00.239100 1599 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 23:46:00.239212 update_engine[1599]: I20260114 23:46:00.239199 1599 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 23:46:00.239956 update_engine[1599]: I20260114 23:46:00.239598 1599 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 23:46:00.240288 update_engine[1599]: E20260114 23:46:00.240233 1599 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 23:46:00.240440 update_engine[1599]: I20260114 23:46:00.240314 1599 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 23:46:00.240440 update_engine[1599]: I20260114 23:46:00.240322 1599 omaha_request_action.cc:617] Omaha request response: Jan 14 23:46:00.240440 update_engine[1599]: E20260114 23:46:00.240422 1599 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 14 23:46:00.240440 update_engine[1599]: I20260114 23:46:00.240440 1599 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 14 23:46:00.240697 update_engine[1599]: I20260114 23:46:00.240446 1599 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 23:46:00.240697 update_engine[1599]: I20260114 23:46:00.240451 1599 update_attempter.cc:306] Processing Done. Jan 14 23:46:00.240697 update_engine[1599]: E20260114 23:46:00.240468 1599 update_attempter.cc:619] Update failed. Jan 14 23:46:00.240697 update_engine[1599]: I20260114 23:46:00.240473 1599 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 14 23:46:00.240697 update_engine[1599]: I20260114 23:46:00.240476 1599 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 14 23:46:00.240697 update_engine[1599]: I20260114 23:46:00.240481 1599 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 14 23:46:00.240697 update_engine[1599]: I20260114 23:46:00.240568 1599 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 23:46:00.240697 update_engine[1599]: I20260114 23:46:00.240593 1599 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 23:46:00.240697 update_engine[1599]: I20260114 23:46:00.240597 1599 omaha_request_action.cc:272] Request: Jan 14 23:46:00.240697 update_engine[1599]: Jan 14 23:46:00.240697 update_engine[1599]: Jan 14 23:46:00.240697 update_engine[1599]: Jan 14 23:46:00.240697 update_engine[1599]: Jan 14 23:46:00.240697 update_engine[1599]: Jan 14 23:46:00.240697 update_engine[1599]: Jan 14 23:46:00.240697 update_engine[1599]: I20260114 23:46:00.240603 1599 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 23:46:00.240697 update_engine[1599]: I20260114 23:46:00.240621 1599 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 23:46:00.242110 update_engine[1599]: I20260114 23:46:00.241827 1599 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
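[editor's note] The update_engine errors in this stretch are not network flakiness: the Omaha request is being posted to the literal host name "disabled" (commonly the result of setting SERVER=disabled in update.conf to switch off update checks), so every transfer dies at name resolution before any HTTP exchange, and after the retries the attempter reports the check as failed and reschedules. A minimal sketch reproducing that failure mode, assuming the host has no resolver entry or search-domain match for "disabled":

    import socket

    # Mirrors curl's "Could not resolve host: disabled (Domain name not found)":
    # "disabled" is not a resolvable name, so the fetch never reaches an update server.
    try:
        socket.getaddrinfo("disabled", 443)
        print("unexpectedly resolved")
    except socket.gaierror as err:
        print("resolution failed as expected:", err)
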
Jan 14 23:46:00.242159 locksmithd[1660]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 14 23:46:00.242806 update_engine[1599]: E20260114 23:46:00.242210 1599 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 23:46:00.242806 update_engine[1599]: I20260114 23:46:00.242265 1599 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 23:46:00.242806 update_engine[1599]: I20260114 23:46:00.242272 1599 omaha_request_action.cc:617] Omaha request response: Jan 14 23:46:00.242806 update_engine[1599]: I20260114 23:46:00.242278 1599 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 23:46:00.242806 update_engine[1599]: I20260114 23:46:00.242282 1599 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 23:46:00.242806 update_engine[1599]: I20260114 23:46:00.242286 1599 update_attempter.cc:306] Processing Done. Jan 14 23:46:00.242806 update_engine[1599]: I20260114 23:46:00.242291 1599 update_attempter.cc:310] Error event sent. Jan 14 23:46:00.242806 update_engine[1599]: I20260114 23:46:00.242298 1599 update_check_scheduler.cc:74] Next update check in 47m31s Jan 14 23:46:00.243250 locksmithd[1660]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 14 23:46:01.125792 containerd[1618]: time="2026-01-14T23:46:01.125741453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:46:01.479946 containerd[1618]: time="2026-01-14T23:46:01.479630039Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:46:01.482429 containerd[1618]: time="2026-01-14T23:46:01.481366344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:46:01.482804 containerd[1618]: time="2026-01-14T23:46:01.481416345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:46:01.482963 kubelet[2853]: E0114 23:46:01.482912 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:46:01.483327 kubelet[2853]: E0114 23:46:01.482964 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:46:01.483327 kubelet[2853]: E0114 23:46:01.483048 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-74f68b674d-szjw2_calico-system(37539e9d-a75a-4f02-a310-797e90f63b91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 23:46:01.485860 containerd[1618]: 
time="2026-01-14T23:46:01.485814609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 23:46:01.821882 containerd[1618]: time="2026-01-14T23:46:01.821757373Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:46:01.823999 containerd[1618]: time="2026-01-14T23:46:01.823756642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:46:01.824446 containerd[1618]: time="2026-01-14T23:46:01.823817803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:46:01.825980 kubelet[2853]: E0114 23:46:01.824323 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:46:01.825980 kubelet[2853]: E0114 23:46:01.824377 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:46:01.825980 kubelet[2853]: E0114 23:46:01.824631 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-74f68b674d-szjw2_calico-system(37539e9d-a75a-4f02-a310-797e90f63b91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:46:01.825980 kubelet[2853]: E0114 23:46:01.824688 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f68b674d-szjw2" podUID="37539e9d-a75a-4f02-a310-797e90f63b91" Jan 14 23:46:03.122270 kubelet[2853]: E0114 23:46:03.122216 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" podUID="7c662993-1a64-424b-835f-c3688665f281" Jan 14 23:46:04.119073 kubelet[2853]: E0114 23:46:04.118896 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" podUID="0595d5fe-8d8f-4e95-8e85-0c22f59bd781" Jan 14 23:46:09.122135 containerd[1618]: time="2026-01-14T23:46:09.121901195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 23:46:09.477199 containerd[1618]: time="2026-01-14T23:46:09.476725674Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:46:09.478408 containerd[1618]: time="2026-01-14T23:46:09.478310179Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:46:09.478554 containerd[1618]: time="2026-01-14T23:46:09.478389700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:46:09.478826 kubelet[2853]: E0114 23:46:09.478786 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:46:09.479121 kubelet[2853]: E0114 23:46:09.478836 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:46:09.479487 kubelet[2853]: E0114 23:46:09.479452 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7s4g4_calico-system(35e31491-f658-475f-aa1a-411d37af2884): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 23:46:09.482268 containerd[1618]: time="2026-01-14T23:46:09.482222280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:46:09.797000 containerd[1618]: time="2026-01-14T23:46:09.796826613Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:46:09.798959 containerd[1618]: time="2026-01-14T23:46:09.798828644Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:46:09.799090 containerd[1618]: time="2026-01-14T23:46:09.798874365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:46:09.799457 kubelet[2853]: E0114 23:46:09.799373 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:46:09.799606 kubelet[2853]: E0114 23:46:09.799473 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:46:09.799965 kubelet[2853]: E0114 23:46:09.799689 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7s4g4_calico-system(35e31491-f658-475f-aa1a-411d37af2884): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:46:09.799965 kubelet[2853]: E0114 23:46:09.799783 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:46:13.125660 containerd[1618]: time="2026-01-14T23:46:13.125613203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:46:13.478697 containerd[1618]: time="2026-01-14T23:46:13.478257915Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:46:13.480258 containerd[1618]: time="2026-01-14T23:46:13.480116425Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:46:13.482433 containerd[1618]: time="2026-01-14T23:46:13.480198226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:46:13.482557 kubelet[2853]: E0114 23:46:13.480829 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:46:13.482557 kubelet[2853]: E0114 23:46:13.480879 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:46:13.482557 kubelet[2853]: E0114 23:46:13.480958 2853 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container calico-apiserver start failed in pod calico-apiserver-84c479b4c5-8sg5s_calico-apiserver(1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:46:13.482557 kubelet[2853]: E0114 23:46:13.480989 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" podUID="1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a" Jan 14 23:46:14.118043 containerd[1618]: time="2026-01-14T23:46:14.117933623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 23:46:14.477248 containerd[1618]: time="2026-01-14T23:46:14.476735068Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:46:14.479505 containerd[1618]: time="2026-01-14T23:46:14.479384911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 23:46:14.479824 containerd[1618]: time="2026-01-14T23:46:14.479416472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 23:46:14.480093 kubelet[2853]: E0114 23:46:14.480046 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:46:14.480367 kubelet[2853]: E0114 23:46:14.480282 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:46:14.480758 kubelet[2853]: E0114 23:46:14.480731 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mbwp8_calico-system(375f636a-b14c-4107-87ee-0c0815e9a9c0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 23:46:14.480912 kubelet[2853]: E0114 23:46:14.480875 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mbwp8" podUID="375f636a-b14c-4107-87ee-0c0815e9a9c0" Jan 14 23:46:15.121847 kubelet[2853]: E0114 23:46:15.121748 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f68b674d-szjw2" podUID="37539e9d-a75a-4f02-a310-797e90f63b91" Jan 14 23:46:17.121421 containerd[1618]: time="2026-01-14T23:46:17.121015987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:46:17.496762 containerd[1618]: time="2026-01-14T23:46:17.496215838Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:46:17.498900 containerd[1618]: time="2026-01-14T23:46:17.498752319Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:46:17.498900 containerd[1618]: time="2026-01-14T23:46:17.498772760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:46:17.499231 kubelet[2853]: E0114 23:46:17.499197 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:46:17.500326 kubelet[2853]: E0114 23:46:17.499675 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:46:17.500326 kubelet[2853]: E0114 23:46:17.499895 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-84c479b4c5-75mhg_calico-apiserver(7c662993-1a64-424b-835f-c3688665f281): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:46:17.500326 kubelet[2853]: E0114 23:46:17.499932 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" podUID="7c662993-1a64-424b-835f-c3688665f281" Jan 14 23:46:18.121512 containerd[1618]: time="2026-01-14T23:46:18.121354423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 23:46:18.467594 containerd[1618]: time="2026-01-14T23:46:18.467361026Z" level=info msg="fetch failed after 
status: 404 Not Found" host=ghcr.io Jan 14 23:46:18.469368 containerd[1618]: time="2026-01-14T23:46:18.469289817Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 23:46:18.469820 containerd[1618]: time="2026-01-14T23:46:18.469473940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 23:46:18.469878 kubelet[2853]: E0114 23:46:18.469787 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:46:18.469878 kubelet[2853]: E0114 23:46:18.469846 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:46:18.469995 kubelet[2853]: E0114 23:46:18.469941 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7bfc4b59c4-h9rl4_calico-system(0595d5fe-8d8f-4e95-8e85-0c22f59bd781): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 23:46:18.470039 kubelet[2853]: E0114 23:46:18.469987 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" podUID="0595d5fe-8d8f-4e95-8e85-0c22f59bd781" Jan 14 23:46:24.119716 kubelet[2853]: E0114 23:46:24.119647 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:46:25.124237 kubelet[2853]: E0114 23:46:25.124189 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" podUID="1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a" Jan 14 23:46:25.216909 systemd[1]: Started sshd@7-46.224.65.210:22-68.220.241.50:32840.service - OpenSSH per-connection server daemon (68.220.241.50:32840). Jan 14 23:46:25.220775 kernel: kauditd_printk_skb: 214 callbacks suppressed Jan 14 23:46:25.220892 kernel: audit: type=1130 audit(1768434385.217:749): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-46.224.65.210:22-68.220.241.50:32840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:25.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-46.224.65.210:22-68.220.241.50:32840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:25.796571 sshd[4951]: Accepted publickey for core from 68.220.241.50 port 32840 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:25.796000 audit[4951]: USER_ACCT pid=4951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:25.805226 kernel: audit: type=1101 audit(1768434385.796:750): pid=4951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:25.805348 kernel: audit: type=1103 audit(1768434385.801:751): pid=4951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:25.801000 audit[4951]: CRED_ACQ pid=4951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:25.805879 sshd-session[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:25.810408 kernel: audit: type=1006 audit(1768434385.801:752): pid=4951 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jan 14 23:46:25.813925 kernel: audit: type=1300 audit(1768434385.801:752): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffdfddb50 a2=3 a3=0 items=0 ppid=1 pid=4951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:25.801000 audit[4951]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffdfddb50 a2=3 a3=0 items=0 ppid=1 pid=4951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:25.821546 kernel: audit: type=1327 audit(1768434385.801:752): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:25.801000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:25.818225 systemd-logind[1597]: New session 8 of user core. Jan 14 23:46:25.822137 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 23:46:25.827000 audit[4951]: USER_START pid=4951 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:25.831000 audit[4954]: CRED_ACQ pid=4954 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:25.834524 kernel: audit: type=1105 audit(1768434385.827:753): pid=4951 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:25.834622 kernel: audit: type=1103 audit(1768434385.831:754): pid=4954 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:26.215365 sshd[4954]: Connection closed by 68.220.241.50 port 32840 Jan 14 23:46:26.215712 sshd-session[4951]: pam_unix(sshd:session): session closed for user core Jan 14 23:46:26.216000 audit[4951]: USER_END pid=4951 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:26.228996 kernel: audit: type=1106 audit(1768434386.216:755): pid=4951 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:26.229099 kernel: audit: type=1104 audit(1768434386.216:756): pid=4951 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:26.216000 audit[4951]: CRED_DISP pid=4951 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:26.225859 systemd[1]: sshd@7-46.224.65.210:22-68.220.241.50:32840.service: Deactivated successfully. 
Jan 14 23:46:26.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-46.224.65.210:22-68.220.241.50:32840 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:26.232594 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 23:46:26.237499 systemd-logind[1597]: Session 8 logged out. Waiting for processes to exit. Jan 14 23:46:26.239613 systemd-logind[1597]: Removed session 8. Jan 14 23:46:27.120093 kubelet[2853]: E0114 23:46:27.119998 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mbwp8" podUID="375f636a-b14c-4107-87ee-0c0815e9a9c0" Jan 14 23:46:28.123844 kubelet[2853]: E0114 23:46:28.123775 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f68b674d-szjw2" podUID="37539e9d-a75a-4f02-a310-797e90f63b91" Jan 14 23:46:31.120427 kubelet[2853]: E0114 23:46:31.119232 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" podUID="7c662993-1a64-424b-835f-c3688665f281" Jan 14 23:46:31.124262 kubelet[2853]: E0114 23:46:31.122289 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" podUID="0595d5fe-8d8f-4e95-8e85-0c22f59bd781" Jan 14 23:46:31.329645 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:46:31.329741 kernel: audit: type=1130 audit(1768434391.325:758): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-46.224.65.210:22-68.220.241.50:32854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 14 23:46:31.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-46.224.65.210:22-68.220.241.50:32854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:31.327004 systemd[1]: Started sshd@8-46.224.65.210:22-68.220.241.50:32854.service - OpenSSH per-connection server daemon (68.220.241.50:32854). Jan 14 23:46:31.866000 audit[4970]: USER_ACCT pid=4970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:31.870624 sshd[4970]: Accepted publickey for core from 68.220.241.50 port 32854 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:31.871977 sshd-session[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:31.869000 audit[4970]: CRED_ACQ pid=4970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:31.874962 kernel: audit: type=1101 audit(1768434391.866:759): pid=4970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:31.875058 kernel: audit: type=1103 audit(1768434391.869:760): pid=4970 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:31.876639 kernel: audit: type=1006 audit(1768434391.870:761): pid=4970 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 14 23:46:31.870000 audit[4970]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffca5d7290 a2=3 a3=0 items=0 ppid=1 pid=4970 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:31.879890 kernel: audit: type=1300 audit(1768434391.870:761): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffca5d7290 a2=3 a3=0 items=0 ppid=1 pid=4970 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:31.870000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:31.880919 kernel: audit: type=1327 audit(1768434391.870:761): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:31.887171 systemd-logind[1597]: New session 9 of user core. Jan 14 23:46:31.890635 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 14 23:46:31.895000 audit[4970]: USER_START pid=4970 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:31.899000 audit[4973]: CRED_ACQ pid=4973 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:31.903257 kernel: audit: type=1105 audit(1768434391.895:762): pid=4970 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:31.903382 kernel: audit: type=1103 audit(1768434391.899:763): pid=4973 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:32.278070 sshd[4973]: Connection closed by 68.220.241.50 port 32854 Jan 14 23:46:32.278635 sshd-session[4970]: pam_unix(sshd:session): session closed for user core Jan 14 23:46:32.280000 audit[4970]: USER_END pid=4970 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:32.285165 systemd-logind[1597]: Session 9 logged out. Waiting for processes to exit. Jan 14 23:46:32.280000 audit[4970]: CRED_DISP pid=4970 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:32.288002 systemd[1]: sshd@8-46.224.65.210:22-68.220.241.50:32854.service: Deactivated successfully. Jan 14 23:46:32.288783 kernel: audit: type=1106 audit(1768434392.280:764): pid=4970 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:32.288924 kernel: audit: type=1104 audit(1768434392.280:765): pid=4970 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:32.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-46.224.65.210:22-68.220.241.50:32854 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:32.294951 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 23:46:32.302008 systemd-logind[1597]: Removed session 9. 
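[Annotation] The audit records around each SSH login carry the process title hex-encoded (the repeated proctitle=73736864... field). Decoding the value copied from the records above is one line of Python and confirms it is the privileged sshd session process for user core:

    # The audit PROCTITLE field is the process command line, hex-encoded.
    print(bytes.fromhex("737368642D73657373696F6E3A20636F7265205B707269765D").decode())
    # -> sshd-session: core [priv]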
Jan 14 23:46:36.119292 kubelet[2853]: E0114 23:46:36.119236 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" podUID="1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a" Jan 14 23:46:37.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-46.224.65.210:22-68.220.241.50:51198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:37.382442 systemd[1]: Started sshd@9-46.224.65.210:22-68.220.241.50:51198.service - OpenSSH per-connection server daemon (68.220.241.50:51198). Jan 14 23:46:37.385137 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:46:37.385225 kernel: audit: type=1130 audit(1768434397.381:767): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-46.224.65.210:22-68.220.241.50:51198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:37.915000 audit[4985]: USER_ACCT pid=4985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:37.919589 sshd[4985]: Accepted publickey for core from 68.220.241.50 port 51198 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:37.921193 sshd-session[4985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:37.925300 kernel: audit: type=1101 audit(1768434397.915:768): pid=4985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:37.925343 kernel: audit: type=1103 audit(1768434397.919:769): pid=4985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:37.919000 audit[4985]: CRED_ACQ pid=4985 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:37.928346 kernel: audit: type=1006 audit(1768434397.919:770): pid=4985 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 14 23:46:37.931167 kernel: audit: type=1300 audit(1768434397.919:770): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd49a57b0 a2=3 a3=0 items=0 ppid=1 pid=4985 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:37.919000 audit[4985]: SYSCALL 
arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd49a57b0 a2=3 a3=0 items=0 ppid=1 pid=4985 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:37.919000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:37.932519 kernel: audit: type=1327 audit(1768434397.919:770): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:37.937573 systemd-logind[1597]: New session 10 of user core. Jan 14 23:46:37.941738 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 14 23:46:37.947000 audit[4985]: USER_START pid=4985 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:37.951452 kernel: audit: type=1105 audit(1768434397.947:771): pid=4985 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:37.950000 audit[4988]: CRED_ACQ pid=4988 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:37.954507 kernel: audit: type=1103 audit(1768434397.950:772): pid=4988 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:38.297958 sshd[4988]: Connection closed by 68.220.241.50 port 51198 Jan 14 23:46:38.300714 sshd-session[4985]: pam_unix(sshd:session): session closed for user core Jan 14 23:46:38.302000 audit[4985]: USER_END pid=4985 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:38.309977 kernel: audit: type=1106 audit(1768434398.302:773): pid=4985 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:38.310113 kernel: audit: type=1104 audit(1768434398.302:774): pid=4985 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:38.302000 audit[4985]: CRED_DISP pid=4985 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 
23:46:38.307833 systemd[1]: sshd@9-46.224.65.210:22-68.220.241.50:51198.service: Deactivated successfully. Jan 14 23:46:38.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-46.224.65.210:22-68.220.241.50:51198 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:38.313642 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 23:46:38.320284 systemd-logind[1597]: Session 10 logged out. Waiting for processes to exit. Jan 14 23:46:38.323625 systemd-logind[1597]: Removed session 10. Jan 14 23:46:38.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-46.224.65.210:22-68.220.241.50:51210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:38.422813 systemd[1]: Started sshd@10-46.224.65.210:22-68.220.241.50:51210.service - OpenSSH per-connection server daemon (68.220.241.50:51210). Jan 14 23:46:39.008000 audit[5017]: USER_ACCT pid=5017 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:39.010844 sshd[5017]: Accepted publickey for core from 68.220.241.50 port 51210 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:39.011000 audit[5017]: CRED_ACQ pid=5017 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:39.011000 audit[5017]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff7417170 a2=3 a3=0 items=0 ppid=1 pid=5017 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:39.011000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:39.014112 sshd-session[5017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:39.022084 systemd-logind[1597]: New session 11 of user core. Jan 14 23:46:39.029879 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 14 23:46:39.032000 audit[5017]: USER_START pid=5017 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:39.034000 audit[5027]: CRED_ACQ pid=5027 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:39.146804 kubelet[2853]: E0114 23:46:39.145975 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:46:39.150292 kubelet[2853]: E0114 23:46:39.150084 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f68b674d-szjw2" podUID="37539e9d-a75a-4f02-a310-797e90f63b91" Jan 14 23:46:39.469602 sshd[5027]: Connection closed by 68.220.241.50 port 51210 Jan 14 23:46:39.470057 sshd-session[5017]: pam_unix(sshd:session): session closed for user core Jan 14 23:46:39.471000 audit[5017]: USER_END pid=5017 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:39.472000 audit[5017]: CRED_DISP pid=5017 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:39.478936 systemd-logind[1597]: Session 11 logged out. Waiting for processes to exit. 
Jan 14 23:46:39.479646 systemd[1]: sshd@10-46.224.65.210:22-68.220.241.50:51210.service: Deactivated successfully. Jan 14 23:46:39.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-46.224.65.210:22-68.220.241.50:51210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:39.485138 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 23:46:39.492223 systemd-logind[1597]: Removed session 11. Jan 14 23:46:39.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-46.224.65.210:22-68.220.241.50:51212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:39.581723 systemd[1]: Started sshd@11-46.224.65.210:22-68.220.241.50:51212.service - OpenSSH per-connection server daemon (68.220.241.50:51212). Jan 14 23:46:40.139000 audit[5037]: USER_ACCT pid=5037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:40.141301 sshd[5037]: Accepted publickey for core from 68.220.241.50 port 51212 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:40.141000 audit[5037]: CRED_ACQ pid=5037 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:40.141000 audit[5037]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffe917b20 a2=3 a3=0 items=0 ppid=1 pid=5037 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:40.141000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:40.143803 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:40.151281 systemd-logind[1597]: New session 12 of user core. Jan 14 23:46:40.156648 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 14 23:46:40.159000 audit[5037]: USER_START pid=5037 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:40.162000 audit[5040]: CRED_ACQ pid=5040 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:40.561054 sshd[5040]: Connection closed by 68.220.241.50 port 51212 Jan 14 23:46:40.564639 sshd-session[5037]: pam_unix(sshd:session): session closed for user core Jan 14 23:46:40.564000 audit[5037]: USER_END pid=5037 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:40.564000 audit[5037]: CRED_DISP pid=5037 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:40.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-46.224.65.210:22-68.220.241.50:51212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:40.571018 systemd[1]: sshd@11-46.224.65.210:22-68.220.241.50:51212.service: Deactivated successfully. Jan 14 23:46:40.576569 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 23:46:40.579636 systemd-logind[1597]: Session 12 logged out. Waiting for processes to exit. Jan 14 23:46:40.585076 systemd-logind[1597]: Removed session 12. 
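[Annotation] Every connection from 68.220.241.50 shows the same lifecycle: systemd starts a per-connection sshd@N-...service unit, PAM records USER_ACCT/CRED_ACQ/USER_START, systemd-logind opens session-N.scope, and the session is closed again within a second or two. When skimming a long dump like this, pairing the "New session"/"Removed session" lines and printing durations makes that pattern obvious. The sketch below is a hypothetical helper; it assumes the journal has been saved one entry per line in a file named journal.txt, which is not the wrapped form shown here.

    import re
    from datetime import datetime

    # Pair systemd-logind "New session N" / "Removed session N" entries and
    # report how long each session lasted (timestamps in this dump carry no
    # year, so only within-day durations are meaningful).
    pattern = re.compile(r"^(\w{3} +\d+ [\d:]+)\S* .*?(New|Removed) session (\d+)")
    opened = {}
    with open("journal.txt") as fh:          # assumed file name
        for line in fh:
            m = pattern.search(line)
            if not m:
                continue
            ts = datetime.strptime(m.group(1), "%b %d %H:%M:%S")
            if m.group(2) == "New":
                opened[m.group(3)] = ts
            elif m.group(3) in opened:
                delta = ts - opened.pop(m.group(3))
                print(f"session {m.group(3)}: {delta.total_seconds():.0f}s")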
Jan 14 23:46:42.122295 kubelet[2853]: E0114 23:46:42.122244 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mbwp8" podUID="375f636a-b14c-4107-87ee-0c0815e9a9c0" Jan 14 23:46:42.123741 kubelet[2853]: E0114 23:46:42.122723 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" podUID="7c662993-1a64-424b-835f-c3688665f281" Jan 14 23:46:45.121275 kubelet[2853]: E0114 23:46:45.120719 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" podUID="0595d5fe-8d8f-4e95-8e85-0c22f59bd781" Jan 14 23:46:45.673656 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 23:46:45.674241 kernel: audit: type=1130 audit(1768434405.669:794): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-46.224.65.210:22-68.220.241.50:56998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:45.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-46.224.65.210:22-68.220.241.50:56998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:45.670752 systemd[1]: Started sshd@12-46.224.65.210:22-68.220.241.50:56998.service - OpenSSH per-connection server daemon (68.220.241.50:56998). 
Jan 14 23:46:46.212000 audit[5057]: USER_ACCT pid=5057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:46.215610 sshd[5057]: Accepted publickey for core from 68.220.241.50 port 56998 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:46.216415 kernel: audit: type=1101 audit(1768434406.212:795): pid=5057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:46.218028 sshd-session[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:46.216000 audit[5057]: CRED_ACQ pid=5057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:46.222179 kernel: audit: type=1103 audit(1768434406.216:796): pid=5057 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:46.222276 kernel: audit: type=1006 audit(1768434406.216:797): pid=5057 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 23:46:46.216000 audit[5057]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdbb634f0 a2=3 a3=0 items=0 ppid=1 pid=5057 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:46.225519 kernel: audit: type=1300 audit(1768434406.216:797): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdbb634f0 a2=3 a3=0 items=0 ppid=1 pid=5057 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:46.216000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:46.227867 kernel: audit: type=1327 audit(1768434406.216:797): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:46.231241 systemd-logind[1597]: New session 13 of user core. Jan 14 23:46:46.240695 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 14 23:46:46.245000 audit[5057]: USER_START pid=5057 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:46.251420 kernel: audit: type=1105 audit(1768434406.245:798): pid=5057 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:46.250000 audit[5060]: CRED_ACQ pid=5060 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:46.255457 kernel: audit: type=1103 audit(1768434406.250:799): pid=5060 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:46.622099 sshd[5060]: Connection closed by 68.220.241.50 port 56998 Jan 14 23:46:46.625671 sshd-session[5057]: pam_unix(sshd:session): session closed for user core Jan 14 23:46:46.625000 audit[5057]: USER_END pid=5057 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:46.625000 audit[5057]: CRED_DISP pid=5057 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:46.633467 kernel: audit: type=1106 audit(1768434406.625:800): pid=5057 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:46.633563 kernel: audit: type=1104 audit(1768434406.625:801): pid=5057 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:46.635888 systemd-logind[1597]: Session 13 logged out. Waiting for processes to exit. Jan 14 23:46:46.636925 systemd[1]: sshd@12-46.224.65.210:22-68.220.241.50:56998.service: Deactivated successfully. Jan 14 23:46:46.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-46.224.65.210:22-68.220.241.50:56998 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:46.643311 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 23:46:46.646894 systemd-logind[1597]: Removed session 13. 
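[Annotation] By this point the same "Back-off pulling image" messages have been recurring for minutes for every Calico component, always with the same podUIDs, so these are the original pods retrying rather than new ones. Kubelet backs image pulls off exponentially instead of retrying in a tight loop; the commonly cited defaults are roughly a 10-second initial delay doubling up to a cap of about 5 minutes, which matches the spacing of the timestamps above. The values in this sketch are an assumption for illustration, not read from this node's configuration:

    # Illustrative back-off schedule (assumed 10 s initial delay, 300 s cap).
    delay, cap = 10, 300
    schedule = []
    while delay < cap:
        schedule.append(delay)
        delay = min(delay * 2, cap)
    schedule.append(cap)
    print(schedule)   # [10, 20, 40, 80, 160, 300]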
Jan 14 23:46:49.123017 kubelet[2853]: E0114 23:46:49.122833 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" podUID="1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a" Jan 14 23:46:50.120688 kubelet[2853]: E0114 23:46:50.120637 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f68b674d-szjw2" podUID="37539e9d-a75a-4f02-a310-797e90f63b91" Jan 14 23:46:50.121652 kubelet[2853]: E0114 23:46:50.121564 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:46:51.734063 systemd[1]: Started sshd@13-46.224.65.210:22-68.220.241.50:57006.service - OpenSSH per-connection server daemon (68.220.241.50:57006). Jan 14 23:46:51.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-46.224.65.210:22-68.220.241.50:57006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:51.737379 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:46:51.737493 kernel: audit: type=1130 audit(1768434411.732:803): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-46.224.65.210:22-68.220.241.50:57006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:46:52.266000 audit[5072]: USER_ACCT pid=5072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:52.270029 sshd[5072]: Accepted publickey for core from 68.220.241.50 port 57006 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:52.276060 kernel: audit: type=1101 audit(1768434412.266:804): pid=5072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:52.276239 kernel: audit: type=1103 audit(1768434412.270:805): pid=5072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:52.270000 audit[5072]: CRED_ACQ pid=5072 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:52.276265 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:52.280803 kernel: audit: type=1006 audit(1768434412.270:806): pid=5072 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 14 23:46:52.270000 audit[5072]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff41c1920 a2=3 a3=0 items=0 ppid=1 pid=5072 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:52.284387 kernel: audit: type=1300 audit(1768434412.270:806): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff41c1920 a2=3 a3=0 items=0 ppid=1 pid=5072 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:52.270000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:52.287465 kernel: audit: type=1327 audit(1768434412.270:806): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:52.293782 systemd-logind[1597]: New session 14 of user core. Jan 14 23:46:52.297697 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 14 23:46:52.301000 audit[5072]: USER_START pid=5072 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:52.306000 audit[5075]: CRED_ACQ pid=5075 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:52.310271 kernel: audit: type=1105 audit(1768434412.301:807): pid=5072 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:52.310389 kernel: audit: type=1103 audit(1768434412.306:808): pid=5075 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:52.708755 sshd[5075]: Connection closed by 68.220.241.50 port 57006 Jan 14 23:46:52.709143 sshd-session[5072]: pam_unix(sshd:session): session closed for user core Jan 14 23:46:52.709000 audit[5072]: USER_END pid=5072 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:52.710000 audit[5072]: CRED_DISP pid=5072 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:52.719046 kernel: audit: type=1106 audit(1768434412.709:809): pid=5072 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:52.719171 kernel: audit: type=1104 audit(1768434412.710:810): pid=5072 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:52.717625 systemd-logind[1597]: Session 14 logged out. Waiting for processes to exit. Jan 14 23:46:52.718686 systemd[1]: sshd@13-46.224.65.210:22-68.220.241.50:57006.service: Deactivated successfully. Jan 14 23:46:52.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-46.224.65.210:22-68.220.241.50:57006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:52.724289 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 23:46:52.729693 systemd-logind[1597]: Removed session 14. 
Jan 14 23:46:54.118486 kubelet[2853]: E0114 23:46:54.118428 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mbwp8" podUID="375f636a-b14c-4107-87ee-0c0815e9a9c0" Jan 14 23:46:55.119454 kubelet[2853]: E0114 23:46:55.118390 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" podUID="7c662993-1a64-424b-835f-c3688665f281" Jan 14 23:46:56.119070 kubelet[2853]: E0114 23:46:56.119003 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" podUID="0595d5fe-8d8f-4e95-8e85-0c22f59bd781" Jan 14 23:46:57.834927 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:46:57.835066 kernel: audit: type=1130 audit(1768434417.832:812): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-46.224.65.210:22-68.220.241.50:53196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:57.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-46.224.65.210:22-68.220.241.50:53196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:57.833885 systemd[1]: Started sshd@14-46.224.65.210:22-68.220.241.50:53196.service - OpenSSH per-connection server daemon (68.220.241.50:53196). 
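The kubelet pod_workers entries above all report ImagePullBackOff because the ghcr.io/flatcar/calico images at tag v3.30.4 cannot be resolved in the registry. As a hedged illustration (the sample line is abbreviated from the entries above; the real lines carry more escaping), a small parser can pull the failing image reference and pod out of such a message:

    import re

    # Abbreviated kubelet "Error syncing pod" line based on the entries above.
    line = ('pod_workers.go:1324 "Error syncing pod, skipping" '
            'err="Back-off pulling image \\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\"" '
            'pod="calico-system/goldmane-7c778bb748-mbwp8"')

    image = re.search(r'ghcr\.io/[\w./-]+:v[\d.]+', line)
    pod = re.search(r'pod="([^"]+)"', line)
    print(image.group(0), pod.group(1))
    # -> ghcr.io/flatcar/calico/goldmane:v3.30.4 calico-system/goldmane-7c778bb748-mbwp8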
Jan 14 23:46:58.405000 audit[5089]: USER_ACCT pid=5089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:58.407769 sshd[5089]: Accepted publickey for core from 68.220.241.50 port 53196 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:58.410000 audit[5089]: CRED_ACQ pid=5089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:58.412691 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:58.414641 kernel: audit: type=1101 audit(1768434418.405:813): pid=5089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:58.415579 kernel: audit: type=1103 audit(1768434418.410:814): pid=5089 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:58.415873 kernel: audit: type=1006 audit(1768434418.410:815): pid=5089 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 14 23:46:58.410000 audit[5089]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcae1f480 a2=3 a3=0 items=0 ppid=1 pid=5089 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:58.419338 kernel: audit: type=1300 audit(1768434418.410:815): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcae1f480 a2=3 a3=0 items=0 ppid=1 pid=5089 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:58.419754 kernel: audit: type=1327 audit(1768434418.410:815): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:58.410000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:58.423726 systemd-logind[1597]: New session 15 of user core. Jan 14 23:46:58.431677 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 14 23:46:58.436000 audit[5089]: USER_START pid=5089 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:58.441456 kernel: audit: type=1105 audit(1768434418.436:816): pid=5089 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:58.440000 audit[5092]: CRED_ACQ pid=5092 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:58.445508 kernel: audit: type=1103 audit(1768434418.440:817): pid=5092 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:58.821707 sshd[5092]: Connection closed by 68.220.241.50 port 53196 Jan 14 23:46:58.824346 sshd-session[5089]: pam_unix(sshd:session): session closed for user core Jan 14 23:46:58.825000 audit[5089]: USER_END pid=5089 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:58.828000 audit[5089]: CRED_DISP pid=5089 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:58.832842 kernel: audit: type=1106 audit(1768434418.825:818): pid=5089 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:58.832973 kernel: audit: type=1104 audit(1768434418.828:819): pid=5089 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:58.833605 systemd[1]: sshd@14-46.224.65.210:22-68.220.241.50:53196.service: Deactivated successfully. Jan 14 23:46:58.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-46.224.65.210:22-68.220.241.50:53196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:58.836569 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 23:46:58.838097 systemd-logind[1597]: Session 15 logged out. Waiting for processes to exit. Jan 14 23:46:58.842489 systemd-logind[1597]: Removed session 15. 
Jan 14 23:46:58.933887 systemd[1]: Started sshd@15-46.224.65.210:22-68.220.241.50:53210.service - OpenSSH per-connection server daemon (68.220.241.50:53210). Jan 14 23:46:58.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-46.224.65.210:22-68.220.241.50:53210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:59.495000 audit[5104]: USER_ACCT pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:59.498606 sshd[5104]: Accepted publickey for core from 68.220.241.50 port 53210 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:59.499000 audit[5104]: CRED_ACQ pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:59.500000 audit[5104]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff34886d0 a2=3 a3=0 items=0 ppid=1 pid=5104 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:59.500000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:59.502146 sshd-session[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:59.510629 systemd-logind[1597]: New session 16 of user core. Jan 14 23:46:59.517847 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 14 23:46:59.521000 audit[5104]: USER_START pid=5104 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:59.527000 audit[5107]: CRED_ACQ pid=5107 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:00.075484 sshd[5107]: Connection closed by 68.220.241.50 port 53210 Jan 14 23:47:00.077643 sshd-session[5104]: pam_unix(sshd:session): session closed for user core Jan 14 23:47:00.080000 audit[5104]: USER_END pid=5104 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:00.081000 audit[5104]: CRED_DISP pid=5104 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:00.086073 systemd-logind[1597]: Session 16 logged out. Waiting for processes to exit. 
Jan 14 23:47:00.089344 systemd[1]: sshd@15-46.224.65.210:22-68.220.241.50:53210.service: Deactivated successfully. Jan 14 23:47:00.088000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-46.224.65.210:22-68.220.241.50:53210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:00.094013 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 23:47:00.097935 systemd-logind[1597]: Removed session 16. Jan 14 23:47:00.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-46.224.65.210:22-68.220.241.50:53214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:00.186712 systemd[1]: Started sshd@16-46.224.65.210:22-68.220.241.50:53214.service - OpenSSH per-connection server daemon (68.220.241.50:53214). Jan 14 23:47:00.742000 audit[5117]: USER_ACCT pid=5117 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:00.744508 sshd[5117]: Accepted publickey for core from 68.220.241.50 port 53214 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:47:00.744000 audit[5117]: CRED_ACQ pid=5117 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:00.744000 audit[5117]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdfbe90c0 a2=3 a3=0 items=0 ppid=1 pid=5117 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:00.744000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:47:00.745678 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:47:00.755068 systemd-logind[1597]: New session 17 of user core. Jan 14 23:47:00.759029 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 14 23:47:00.764000 audit[5117]: USER_START pid=5117 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:00.767000 audit[5122]: CRED_ACQ pid=5122 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:01.120386 kubelet[2853]: E0114 23:47:01.120318 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f68b674d-szjw2" podUID="37539e9d-a75a-4f02-a310-797e90f63b91" Jan 14 23:47:01.823000 audit[5133]: NETFILTER_CFG table=filter:142 family=2 entries=26 op=nft_register_rule pid=5133 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:47:01.823000 audit[5133]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffdb015e80 a2=0 a3=1 items=0 ppid=2990 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:01.823000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:47:01.828000 audit[5133]: NETFILTER_CFG table=nat:143 family=2 entries=20 op=nft_register_rule pid=5133 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:47:01.828000 audit[5133]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdb015e80 a2=0 a3=1 items=0 ppid=2990 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:01.828000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:47:01.932072 sshd[5122]: Connection closed by 68.220.241.50 port 53214 Jan 14 23:47:01.933931 sshd-session[5117]: pam_unix(sshd:session): session closed for user core Jan 14 23:47:01.937000 audit[5117]: USER_END pid=5117 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:01.938000 audit[5117]: CRED_DISP pid=5117 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:01.943537 systemd[1]: sshd@16-46.224.65.210:22-68.220.241.50:53214.service: Deactivated successfully. Jan 14 23:47:01.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-46.224.65.210:22-68.220.241.50:53214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:01.953629 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 23:47:01.955419 systemd-logind[1597]: Session 17 logged out. Waiting for processes to exit. Jan 14 23:47:01.959226 systemd-logind[1597]: Removed session 17. Jan 14 23:47:02.046964 systemd[1]: Started sshd@17-46.224.65.210:22-68.220.241.50:53220.service - OpenSSH per-connection server daemon (68.220.241.50:53220). Jan 14 23:47:02.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-46.224.65.210:22-68.220.241.50:53220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:02.123673 kubelet[2853]: E0114 23:47:02.123448 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:47:02.603000 audit[5138]: USER_ACCT pid=5138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:02.604598 sshd[5138]: Accepted publickey for core from 68.220.241.50 port 53220 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:47:02.606000 audit[5138]: CRED_ACQ pid=5138 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:02.606000 audit[5138]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff4654ff0 a2=3 a3=0 items=0 ppid=1 pid=5138 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:02.606000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:47:02.607372 sshd-session[5138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:47:02.615743 systemd-logind[1597]: New 
session 18 of user core. Jan 14 23:47:02.619703 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 14 23:47:02.626000 audit[5138]: USER_START pid=5138 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:02.628000 audit[5141]: CRED_ACQ pid=5141 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:02.889997 kernel: kauditd_printk_skb: 37 callbacks suppressed Jan 14 23:47:02.890166 kernel: audit: type=1325 audit(1768434422.887:847): table=filter:144 family=2 entries=38 op=nft_register_rule pid=5148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:47:02.887000 audit[5148]: NETFILTER_CFG table=filter:144 family=2 entries=38 op=nft_register_rule pid=5148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:47:02.894053 kernel: audit: type=1300 audit(1768434422.887:847): arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffddc80600 a2=0 a3=1 items=0 ppid=2990 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:02.887000 audit[5148]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffddc80600 a2=0 a3=1 items=0 ppid=2990 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:02.887000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:47:02.896757 kernel: audit: type=1327 audit(1768434422.887:847): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:47:02.899551 kernel: audit: type=1325 audit(1768434422.895:848): table=nat:145 family=2 entries=20 op=nft_register_rule pid=5148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:47:02.895000 audit[5148]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:47:02.895000 audit[5148]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffddc80600 a2=0 a3=1 items=0 ppid=2990 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:02.895000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:47:02.906359 kernel: audit: type=1300 audit(1768434422.895:848): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffddc80600 a2=0 a3=1 items=0 ppid=2990 pid=5148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:02.906475 kernel: audit: type=1327 audit(1768434422.895:848): 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:47:03.195180 sshd[5141]: Connection closed by 68.220.241.50 port 53220 Jan 14 23:47:03.198659 sshd-session[5138]: pam_unix(sshd:session): session closed for user core Jan 14 23:47:03.200000 audit[5138]: USER_END pid=5138 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:03.201000 audit[5138]: CRED_DISP pid=5138 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:03.208500 systemd-logind[1597]: Session 18 logged out. Waiting for processes to exit. Jan 14 23:47:03.211100 kernel: audit: type=1106 audit(1768434423.200:849): pid=5138 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:03.211214 kernel: audit: type=1104 audit(1768434423.201:850): pid=5138 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:03.211844 systemd[1]: sshd@17-46.224.65.210:22-68.220.241.50:53220.service: Deactivated successfully. Jan 14 23:47:03.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-46.224.65.210:22-68.220.241.50:53220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:03.217423 kernel: audit: type=1131 audit(1768434423.211:851): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-46.224.65.210:22-68.220.241.50:53220 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:03.217817 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 23:47:03.222875 systemd-logind[1597]: Removed session 18. Jan 14 23:47:03.317854 systemd[1]: Started sshd@18-46.224.65.210:22-68.220.241.50:58506.service - OpenSSH per-connection server daemon (68.220.241.50:58506). Jan 14 23:47:03.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-46.224.65.210:22-68.220.241.50:58506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:03.321515 kernel: audit: type=1130 audit(1768434423.317:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-46.224.65.210:22-68.220.241.50:58506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:47:03.896000 audit[5153]: USER_ACCT pid=5153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:03.897156 sshd[5153]: Accepted publickey for core from 68.220.241.50 port 58506 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:47:03.899000 audit[5153]: CRED_ACQ pid=5153 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:03.899000 audit[5153]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffa311b60 a2=3 a3=0 items=0 ppid=1 pid=5153 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:03.899000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:47:03.900069 sshd-session[5153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:47:03.909267 systemd-logind[1597]: New session 19 of user core. Jan 14 23:47:03.915317 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 14 23:47:03.922000 audit[5153]: USER_START pid=5153 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:03.927000 audit[5156]: CRED_ACQ pid=5156 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:04.122110 kubelet[2853]: E0114 23:47:04.122037 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" podUID="1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a" Jan 14 23:47:04.348518 sshd[5156]: Connection closed by 68.220.241.50 port 58506 Jan 14 23:47:04.348289 sshd-session[5153]: pam_unix(sshd:session): session closed for user core Jan 14 23:47:04.352000 audit[5153]: USER_END pid=5153 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:04.352000 audit[5153]: CRED_DISP pid=5153 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 
23:47:04.356722 systemd-logind[1597]: Session 19 logged out. Waiting for processes to exit. Jan 14 23:47:04.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-46.224.65.210:22-68.220.241.50:58506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:04.356873 systemd[1]: sshd@18-46.224.65.210:22-68.220.241.50:58506.service: Deactivated successfully. Jan 14 23:47:04.359671 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 23:47:04.363990 systemd-logind[1597]: Removed session 19. Jan 14 23:47:06.941000 audit[5168]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:47:06.941000 audit[5168]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffda1a50f0 a2=0 a3=1 items=0 ppid=2990 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:06.941000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:47:06.945000 audit[5168]: NETFILTER_CFG table=nat:147 family=2 entries=104 op=nft_register_chain pid=5168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:47:06.945000 audit[5168]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffda1a50f0 a2=0 a3=1 items=0 ppid=2990 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:06.945000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:47:07.120428 kubelet[2853]: E0114 23:47:07.120016 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mbwp8" podUID="375f636a-b14c-4107-87ee-0c0815e9a9c0" Jan 14 23:47:09.121996 kubelet[2853]: E0114 23:47:09.121054 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" podUID="7c662993-1a64-424b-835f-c3688665f281" Jan 14 23:47:09.456221 kernel: kauditd_printk_skb: 16 callbacks suppressed Jan 14 23:47:09.456359 kernel: audit: type=1130 audit(1768434429.453:863): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-46.224.65.210:22-68.220.241.50:58510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:47:09.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-46.224.65.210:22-68.220.241.50:58510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:09.453733 systemd[1]: Started sshd@19-46.224.65.210:22-68.220.241.50:58510.service - OpenSSH per-connection server daemon (68.220.241.50:58510). Jan 14 23:47:10.010000 audit[5195]: USER_ACCT pid=5195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:10.016015 sshd[5195]: Accepted publickey for core from 68.220.241.50 port 58510 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:47:10.017485 kernel: audit: type=1101 audit(1768434430.010:864): pid=5195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:10.018000 audit[5195]: CRED_ACQ pid=5195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:10.020857 sshd-session[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:47:10.023264 kernel: audit: type=1103 audit(1768434430.018:865): pid=5195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:10.023331 kernel: audit: type=1006 audit(1768434430.018:866): pid=5195 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 14 23:47:10.018000 audit[5195]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdc30c980 a2=3 a3=0 items=0 ppid=1 pid=5195 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:10.026455 kernel: audit: type=1300 audit(1768434430.018:866): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdc30c980 a2=3 a3=0 items=0 ppid=1 pid=5195 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:10.026597 kernel: audit: type=1327 audit(1768434430.018:866): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:47:10.018000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:47:10.033467 systemd-logind[1597]: New session 20 of user core. Jan 14 23:47:10.038761 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 14 23:47:10.042000 audit[5195]: USER_START pid=5195 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:10.048454 kernel: audit: type=1105 audit(1768434430.042:867): pid=5195 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:10.048000 audit[5198]: CRED_ACQ pid=5198 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:10.051475 kernel: audit: type=1103 audit(1768434430.048:868): pid=5198 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:10.450286 sshd[5198]: Connection closed by 68.220.241.50 port 58510 Jan 14 23:47:10.454112 sshd-session[5195]: pam_unix(sshd:session): session closed for user core Jan 14 23:47:10.457000 audit[5195]: USER_END pid=5195 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:10.462107 systemd[1]: sshd@19-46.224.65.210:22-68.220.241.50:58510.service: Deactivated successfully. Jan 14 23:47:10.465560 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 23:47:10.469001 kernel: audit: type=1106 audit(1768434430.457:869): pid=5195 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:10.469204 kernel: audit: type=1104 audit(1768434430.457:870): pid=5195 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:10.457000 audit[5195]: CRED_DISP pid=5195 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:10.472473 systemd-logind[1597]: Session 20 logged out. Waiting for processes to exit. Jan 14 23:47:10.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-46.224.65.210:22-68.220.241.50:58510 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:10.477107 systemd-logind[1597]: Removed session 20. 
Jan 14 23:47:11.121434 kubelet[2853]: E0114 23:47:11.121125 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" podUID="0595d5fe-8d8f-4e95-8e85-0c22f59bd781" Jan 14 23:47:13.123678 kubelet[2853]: E0114 23:47:13.122812 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f68b674d-szjw2" podUID="37539e9d-a75a-4f02-a310-797e90f63b91" Jan 14 23:47:15.567713 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:47:15.567838 kernel: audit: type=1130 audit(1768434435.564:872): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-46.224.65.210:22-68.220.241.50:34154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:15.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-46.224.65.210:22-68.220.241.50:34154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:15.565070 systemd[1]: Started sshd@20-46.224.65.210:22-68.220.241.50:34154.service - OpenSSH per-connection server daemon (68.220.241.50:34154). 
Jan 14 23:47:16.121000 audit[5210]: USER_ACCT pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:16.124895 sshd[5210]: Accepted publickey for core from 68.220.241.50 port 34154 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:47:16.128299 kernel: audit: type=1101 audit(1768434436.121:873): pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:16.128421 kernel: audit: type=1103 audit(1768434436.125:874): pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:16.125000 audit[5210]: CRED_ACQ pid=5210 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:16.129020 sshd-session[5210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:47:16.132587 kernel: audit: type=1006 audit(1768434436.128:875): pid=5210 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 14 23:47:16.128000 audit[5210]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd26889a0 a2=3 a3=0 items=0 ppid=1 pid=5210 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:16.136966 kernel: audit: type=1300 audit(1768434436.128:875): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd26889a0 a2=3 a3=0 items=0 ppid=1 pid=5210 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:16.137100 kernel: audit: type=1327 audit(1768434436.128:875): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:47:16.128000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:47:16.144998 systemd-logind[1597]: New session 21 of user core. Jan 14 23:47:16.146709 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 14 23:47:16.153000 audit[5210]: USER_START pid=5210 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:16.158453 kernel: audit: type=1105 audit(1768434436.153:876): pid=5210 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:16.159000 audit[5213]: CRED_ACQ pid=5213 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:16.163440 kernel: audit: type=1103 audit(1768434436.159:877): pid=5213 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:16.529626 sshd[5213]: Connection closed by 68.220.241.50 port 34154 Jan 14 23:47:16.530767 sshd-session[5210]: pam_unix(sshd:session): session closed for user core Jan 14 23:47:16.532000 audit[5210]: USER_END pid=5210 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:16.532000 audit[5210]: CRED_DISP pid=5210 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:16.541318 kernel: audit: type=1106 audit(1768434436.532:878): pid=5210 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:16.541444 kernel: audit: type=1104 audit(1768434436.532:879): pid=5210 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:16.541714 systemd[1]: sshd@20-46.224.65.210:22-68.220.241.50:34154.service: Deactivated successfully. Jan 14 23:47:16.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-46.224.65.210:22-68.220.241.50:34154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:16.544890 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 23:47:16.549232 systemd-logind[1597]: Session 21 logged out. Waiting for processes to exit. Jan 14 23:47:16.552293 systemd-logind[1597]: Removed session 21. 
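Between 23:46:52 and 23:47:16 the log records SSH sessions 14 through 21 for user core, each accepted from 68.220.241.50 and closed again within a second or two. A small sketch (illustrative, assuming plain-text log lines like the ones above) tallies accepted sessions per user and source address:

    import re
    from collections import Counter

    accepted = re.compile(r'Accepted publickey for (\S+) from (\S+) port (\d+)')

    def sessions_per_source(lines):
        """Count sshd "Accepted publickey" entries per (user, source address)."""
        counts = Counter()
        for line in lines:
            m = accepted.search(line)
            if m:
                user, addr, _port = m.groups()
                counts[(user, addr)] += 1
        return counts

    # Fed the entries in this section, this would report {('core', '68.220.241.50'): 8}.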
Jan 14 23:47:17.124122 kubelet[2853]: E0114 23:47:17.123966 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:47:19.121769 kubelet[2853]: E0114 23:47:19.121724 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" podUID="1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a" Jan 14 23:47:20.118124 kubelet[2853]: E0114 23:47:20.118067 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mbwp8" podUID="375f636a-b14c-4107-87ee-0c0815e9a9c0" Jan 14 23:47:23.120443 kubelet[2853]: E0114 23:47:23.119738 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" podUID="0595d5fe-8d8f-4e95-8e85-0c22f59bd781" Jan 14 23:47:23.123042 kubelet[2853]: E0114 23:47:23.122602 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" podUID="7c662993-1a64-424b-835f-c3688665f281" Jan 14 23:47:28.117613 containerd[1618]: time="2026-01-14T23:47:28.117537328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:47:28.460778 containerd[1618]: time="2026-01-14T23:47:28.459962450Z" level=info 
msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:47:28.461783 containerd[1618]: time="2026-01-14T23:47:28.461576256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:47:28.463222 kubelet[2853]: E0114 23:47:28.462813 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:47:28.463222 kubelet[2853]: E0114 23:47:28.462887 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:47:28.465004 containerd[1618]: time="2026-01-14T23:47:28.461665256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:47:28.465927 kubelet[2853]: E0114 23:47:28.463931 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-74f68b674d-szjw2_calico-system(37539e9d-a75a-4f02-a310-797e90f63b91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 23:47:28.466042 containerd[1618]: time="2026-01-14T23:47:28.465494629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 23:47:28.798179 containerd[1618]: time="2026-01-14T23:47:28.797700957Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:47:28.799617 containerd[1618]: time="2026-01-14T23:47:28.799563643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:47:28.799704 containerd[1618]: time="2026-01-14T23:47:28.799666683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:47:28.799910 kubelet[2853]: E0114 23:47:28.799862 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:47:28.799980 kubelet[2853]: E0114 23:47:28.799913 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:47:28.800078 kubelet[2853]: E0114 23:47:28.800027 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod 
whisker-74f68b674d-szjw2_calico-system(37539e9d-a75a-4f02-a310-797e90f63b91): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:47:28.800157 kubelet[2853]: E0114 23:47:28.800132 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-74f68b674d-szjw2" podUID="37539e9d-a75a-4f02-a310-797e90f63b91" Jan 14 23:47:30.772291 systemd[1]: cri-containerd-a3f9cf35734cff7af169edc33bccb1f6366f40358e329d3b5126f474c8898789.scope: Deactivated successfully. Jan 14 23:47:30.773688 systemd[1]: cri-containerd-a3f9cf35734cff7af169edc33bccb1f6366f40358e329d3b5126f474c8898789.scope: Consumed 36.589s CPU time, 106.1M memory peak. Jan 14 23:47:30.775000 audit: BPF prog-id=146 op=UNLOAD Jan 14 23:47:30.776650 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:47:30.776713 kernel: audit: type=1334 audit(1768434450.775:881): prog-id=146 op=UNLOAD Jan 14 23:47:30.776000 audit: BPF prog-id=150 op=UNLOAD Jan 14 23:47:30.778332 containerd[1618]: time="2026-01-14T23:47:30.777372725Z" level=info msg="received container exit event container_id:\"a3f9cf35734cff7af169edc33bccb1f6366f40358e329d3b5126f474c8898789\" id:\"a3f9cf35734cff7af169edc33bccb1f6366f40358e329d3b5126f474c8898789\" pid:3179 exit_status:1 exited_at:{seconds:1768434450 nanos:776896723}" Jan 14 23:47:30.778856 kernel: audit: type=1334 audit(1768434450.776:882): prog-id=150 op=UNLOAD Jan 14 23:47:30.804653 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3f9cf35734cff7af169edc33bccb1f6366f40358e329d3b5126f474c8898789-rootfs.mount: Deactivated successfully. Jan 14 23:47:31.023504 kubelet[2853]: I0114 23:47:31.023076 2853 scope.go:117] "RemoveContainer" containerID="a3f9cf35734cff7af169edc33bccb1f6366f40358e329d3b5126f474c8898789" Jan 14 23:47:31.028296 containerd[1618]: time="2026-01-14T23:47:31.028064605Z" level=info msg="CreateContainer within sandbox \"09b61bdb354e72be7ef867b6b29c4aba94ba63388f60b94be7d48d4d02ac0996\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 14 23:47:31.049711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2482206159.mount: Deactivated successfully. 
Jan 14 23:47:31.050932 containerd[1618]: time="2026-01-14T23:47:31.050706416Z" level=info msg="Container b16dd489b47957b38f06ad2934d2201e616bb4a6f81877b3725f44fd16e35ca6: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:47:31.059043 containerd[1618]: time="2026-01-14T23:47:31.058990369Z" level=info msg="CreateContainer within sandbox \"09b61bdb354e72be7ef867b6b29c4aba94ba63388f60b94be7d48d4d02ac0996\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b16dd489b47957b38f06ad2934d2201e616bb4a6f81877b3725f44fd16e35ca6\"" Jan 14 23:47:31.060033 containerd[1618]: time="2026-01-14T23:47:31.059966533Z" level=info msg="StartContainer for \"b16dd489b47957b38f06ad2934d2201e616bb4a6f81877b3725f44fd16e35ca6\"" Jan 14 23:47:31.062168 containerd[1618]: time="2026-01-14T23:47:31.062117702Z" level=info msg="connecting to shim b16dd489b47957b38f06ad2934d2201e616bb4a6f81877b3725f44fd16e35ca6" address="unix:///run/containerd/s/5b43a7691ca9170cb4cbef392dfb23cb3c990333593b22a7da13cf42eee60baa" protocol=ttrpc version=3 Jan 14 23:47:31.091719 systemd[1]: Started cri-containerd-b16dd489b47957b38f06ad2934d2201e616bb4a6f81877b3725f44fd16e35ca6.scope - libcontainer container b16dd489b47957b38f06ad2934d2201e616bb4a6f81877b3725f44fd16e35ca6. Jan 14 23:47:31.105171 kubelet[2853]: E0114 23:47:31.105094 2853 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56062->10.0.0.2:2379: read: connection timed out" Jan 14 23:47:31.112000 audit: BPF prog-id=256 op=LOAD Jan 14 23:47:31.114433 kernel: audit: type=1334 audit(1768434451.112:883): prog-id=256 op=LOAD Jan 14 23:47:31.114000 audit: BPF prog-id=257 op=LOAD Jan 14 23:47:31.117991 kernel: audit: type=1334 audit(1768434451.114:884): prog-id=257 op=LOAD Jan 14 23:47:31.118106 kernel: audit: type=1300 audit(1768434451.114:884): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2955 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:31.114000 audit[5247]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2955 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:31.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231366464343839623437393537623338663036616432393334643232 Jan 14 23:47:31.121692 kernel: audit: type=1327 audit(1768434451.114:884): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231366464343839623437393537623338663036616432393334643232 Jan 14 23:47:31.114000 audit: BPF prog-id=257 op=UNLOAD Jan 14 23:47:31.114000 audit[5247]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2955 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:31.125577 kernel: audit: type=1334 
audit(1768434451.114:885): prog-id=257 op=UNLOAD Jan 14 23:47:31.125774 kernel: audit: type=1300 audit(1768434451.114:885): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2955 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:31.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231366464343839623437393537623338663036616432393334643232 Jan 14 23:47:31.128250 kernel: audit: type=1327 audit(1768434451.114:885): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231366464343839623437393537623338663036616432393334643232 Jan 14 23:47:31.114000 audit: BPF prog-id=258 op=LOAD Jan 14 23:47:31.114000 audit[5247]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2955 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:31.129494 kernel: audit: type=1334 audit(1768434451.114:886): prog-id=258 op=LOAD Jan 14 23:47:31.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231366464343839623437393537623338663036616432393334643232 Jan 14 23:47:31.115000 audit: BPF prog-id=259 op=LOAD Jan 14 23:47:31.115000 audit[5247]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2955 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:31.115000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231366464343839623437393537623338663036616432393334643232 Jan 14 23:47:31.117000 audit: BPF prog-id=259 op=UNLOAD Jan 14 23:47:31.117000 audit[5247]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2955 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:31.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231366464343839623437393537623338663036616432393334643232 Jan 14 23:47:31.118000 audit: BPF prog-id=258 op=UNLOAD Jan 14 23:47:31.118000 audit[5247]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2955 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
23:47:31.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231366464343839623437393537623338663036616432393334643232 Jan 14 23:47:31.118000 audit: BPF prog-id=260 op=LOAD Jan 14 23:47:31.118000 audit[5247]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2955 pid=5247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:31.118000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231366464343839623437393537623338663036616432393334643232 Jan 14 23:47:31.150151 containerd[1618]: time="2026-01-14T23:47:31.150058294Z" level=info msg="StartContainer for \"b16dd489b47957b38f06ad2934d2201e616bb4a6f81877b3725f44fd16e35ca6\" returns successfully" Jan 14 23:47:32.119545 kubelet[2853]: E0114 23:47:32.119248 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-8sg5s" podUID="1b0b15bb-ef3d-4cc7-a85f-a12ae2d1363a" Jan 14 23:47:32.120245 containerd[1618]: time="2026-01-14T23:47:32.118982642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 23:47:32.258962 systemd[1]: cri-containerd-d3144b1085d574571e343d28a40689066b9b34ec393d02cc404a80e715aa0633.scope: Deactivated successfully. Jan 14 23:47:32.259501 systemd[1]: cri-containerd-d3144b1085d574571e343d28a40689066b9b34ec393d02cc404a80e715aa0633.scope: Consumed 4.371s CPU time, 64.3M memory peak, 1.5M read from disk. Jan 14 23:47:32.260000 audit: BPF prog-id=261 op=LOAD Jan 14 23:47:32.260000 audit: BPF prog-id=84 op=UNLOAD Jan 14 23:47:32.262000 audit: BPF prog-id=103 op=UNLOAD Jan 14 23:47:32.262000 audit: BPF prog-id=107 op=UNLOAD Jan 14 23:47:32.263933 containerd[1618]: time="2026-01-14T23:47:32.263824571Z" level=info msg="received container exit event container_id:\"d3144b1085d574571e343d28a40689066b9b34ec393d02cc404a80e715aa0633\" id:\"d3144b1085d574571e343d28a40689066b9b34ec393d02cc404a80e715aa0633\" pid:2688 exit_status:1 exited_at:{seconds:1768434452 nanos:263482370}" Jan 14 23:47:32.300779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3144b1085d574571e343d28a40689066b9b34ec393d02cc404a80e715aa0633-rootfs.mount: Deactivated successfully. 
Jan 14 23:47:32.466616 containerd[1618]: time="2026-01-14T23:47:32.466050222Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:47:32.467947 containerd[1618]: time="2026-01-14T23:47:32.467866270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:47:32.468233 containerd[1618]: time="2026-01-14T23:47:32.467911310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:47:32.468796 kubelet[2853]: E0114 23:47:32.468385 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:47:32.468796 kubelet[2853]: E0114 23:47:32.468502 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:47:32.468796 kubelet[2853]: E0114 23:47:32.468603 2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7s4g4_calico-system(35e31491-f658-475f-aa1a-411d37af2884): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 23:47:32.470189 containerd[1618]: time="2026-01-14T23:47:32.470153719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:47:32.801324 containerd[1618]: time="2026-01-14T23:47:32.800592790Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:47:32.802602 containerd[1618]: time="2026-01-14T23:47:32.802454038Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:47:32.802805 containerd[1618]: time="2026-01-14T23:47:32.802525918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:47:32.802974 kubelet[2853]: E0114 23:47:32.802893 2853 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:47:32.803075 kubelet[2853]: E0114 23:47:32.802974 2853 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:47:32.803623 kubelet[2853]: E0114 23:47:32.803095 
2853 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7s4g4_calico-system(35e31491-f658-475f-aa1a-411d37af2884): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:47:32.803623 kubelet[2853]: E0114 23:47:32.803200 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7s4g4" podUID="35e31491-f658-475f-aa1a-411d37af2884" Jan 14 23:47:33.038954 kubelet[2853]: I0114 23:47:33.038873 2853 scope.go:117] "RemoveContainer" containerID="d3144b1085d574571e343d28a40689066b9b34ec393d02cc404a80e715aa0633" Jan 14 23:47:33.042312 containerd[1618]: time="2026-01-14T23:47:33.042252495Z" level=info msg="CreateContainer within sandbox \"e3074a8842a3de1fc6568606a835c8fc8ad5b8fe6738c744fb83652109ffe3ec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 14 23:47:33.053688 containerd[1618]: time="2026-01-14T23:47:33.053631425Z" level=info msg="Container 34d9e3f7cc43241a679a2828ada1c1bb5729e6971f23412a2ef0a11858e6a670: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:47:33.059342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1876120974.mount: Deactivated successfully. Jan 14 23:47:33.069998 containerd[1618]: time="2026-01-14T23:47:33.069889416Z" level=info msg="CreateContainer within sandbox \"e3074a8842a3de1fc6568606a835c8fc8ad5b8fe6738c744fb83652109ffe3ec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"34d9e3f7cc43241a679a2828ada1c1bb5729e6971f23412a2ef0a11858e6a670\"" Jan 14 23:47:33.071425 containerd[1618]: time="2026-01-14T23:47:33.071353583Z" level=info msg="StartContainer for \"34d9e3f7cc43241a679a2828ada1c1bb5729e6971f23412a2ef0a11858e6a670\"" Jan 14 23:47:33.074124 containerd[1618]: time="2026-01-14T23:47:33.073961754Z" level=info msg="connecting to shim 34d9e3f7cc43241a679a2828ada1c1bb5729e6971f23412a2ef0a11858e6a670" address="unix:///run/containerd/s/c47fad003f49b983c143f8098f28cab5a51fb8c16883dc74a8c8adc24c6215f2" protocol=ttrpc version=3 Jan 14 23:47:33.106679 systemd[1]: Started cri-containerd-34d9e3f7cc43241a679a2828ada1c1bb5729e6971f23412a2ef0a11858e6a670.scope - libcontainer container 34d9e3f7cc43241a679a2828ada1c1bb5729e6971f23412a2ef0a11858e6a670. 
Jan 14 23:47:33.131000 audit: BPF prog-id=262 op=LOAD Jan 14 23:47:33.132000 audit: BPF prog-id=263 op=LOAD Jan 14 23:47:33.132000 audit[5290]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=2537 pid=5290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:33.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334643965336637636334333234316136373961323832386164613163 Jan 14 23:47:33.132000 audit: BPF prog-id=263 op=UNLOAD Jan 14 23:47:33.132000 audit[5290]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=5290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:33.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334643965336637636334333234316136373961323832386164613163 Jan 14 23:47:33.132000 audit: BPF prog-id=264 op=LOAD Jan 14 23:47:33.132000 audit[5290]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=2537 pid=5290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:33.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334643965336637636334333234316136373961323832386164613163 Jan 14 23:47:33.132000 audit: BPF prog-id=265 op=LOAD Jan 14 23:47:33.132000 audit[5290]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=2537 pid=5290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:33.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334643965336637636334333234316136373961323832386164613163 Jan 14 23:47:33.132000 audit: BPF prog-id=265 op=UNLOAD Jan 14 23:47:33.132000 audit[5290]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=5290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:33.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334643965336637636334333234316136373961323832386164613163 Jan 14 23:47:33.132000 audit: BPF prog-id=264 op=UNLOAD Jan 14 23:47:33.132000 audit[5290]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=5290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:33.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334643965336637636334333234316136373961323832386164613163 Jan 14 23:47:33.132000 audit: BPF prog-id=266 op=LOAD Jan 14 23:47:33.132000 audit[5290]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=2537 pid=5290 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:33.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334643965336637636334333234316136373961323832386164613163 Jan 14 23:47:33.165654 containerd[1618]: time="2026-01-14T23:47:33.165585918Z" level=info msg="StartContainer for \"34d9e3f7cc43241a679a2828ada1c1bb5729e6971f23412a2ef0a11858e6a670\" returns successfully" Jan 14 23:47:34.118545 kubelet[2853]: E0114 23:47:34.118212 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mbwp8" podUID="375f636a-b14c-4107-87ee-0c0815e9a9c0" Jan 14 23:47:34.120073 kubelet[2853]: E0114 23:47:34.119880 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84c479b4c5-75mhg" podUID="7c662993-1a64-424b-835f-c3688665f281" Jan 14 23:47:35.839192 systemd[1]: cri-containerd-7566c3824af1a4db46258ba4854251486a8fd9daf1f3e5d02a2a825d4811b15b.scope: Deactivated successfully. Jan 14 23:47:35.840412 systemd[1]: cri-containerd-7566c3824af1a4db46258ba4854251486a8fd9daf1f3e5d02a2a825d4811b15b.scope: Consumed 3.870s CPU time, 23.4M memory peak, 2.5M read from disk. 
Jan 14 23:47:35.844086 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 14 23:47:35.844171 kernel: audit: type=1334 audit(1768434455.841:903): prog-id=267 op=LOAD Jan 14 23:47:35.841000 audit: BPF prog-id=267 op=LOAD Jan 14 23:47:35.845909 kernel: audit: type=1334 audit(1768434455.843:904): prog-id=108 op=UNLOAD Jan 14 23:47:35.845995 kernel: audit: type=1334 audit(1768434455.843:905): prog-id=112 op=UNLOAD Jan 14 23:47:35.843000 audit: BPF prog-id=108 op=UNLOAD Jan 14 23:47:35.843000 audit: BPF prog-id=112 op=UNLOAD Jan 14 23:47:35.843000 audit: BPF prog-id=93 op=UNLOAD Jan 14 23:47:35.847565 kernel: audit: type=1334 audit(1768434455.843:906): prog-id=93 op=UNLOAD Jan 14 23:47:35.848774 containerd[1618]: time="2026-01-14T23:47:35.848661326Z" level=info msg="received container exit event container_id:\"7566c3824af1a4db46258ba4854251486a8fd9daf1f3e5d02a2a825d4811b15b\" id:\"7566c3824af1a4db46258ba4854251486a8fd9daf1f3e5d02a2a825d4811b15b\" pid:2706 exit_status:1 exited_at:{seconds:1768434455 nanos:848138243}" Jan 14 23:47:35.887057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7566c3824af1a4db46258ba4854251486a8fd9daf1f3e5d02a2a825d4811b15b-rootfs.mount: Deactivated successfully. Jan 14 23:47:36.063514 kubelet[2853]: I0114 23:47:36.063480 2853 scope.go:117] "RemoveContainer" containerID="7566c3824af1a4db46258ba4854251486a8fd9daf1f3e5d02a2a825d4811b15b" Jan 14 23:47:36.066936 containerd[1618]: time="2026-01-14T23:47:36.066877942Z" level=info msg="CreateContainer within sandbox \"a738122a7f341e546e43965b9d9c77a0e4e63e24c2ec15fe8e108ab45bfc4933\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 14 23:47:36.079702 containerd[1618]: time="2026-01-14T23:47:36.078967642Z" level=info msg="Container 503fa151f56876e97a293d444d8ce01a1fbf3b6e6d79ec65c8ec81e5450d4c36: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:47:36.094016 containerd[1618]: time="2026-01-14T23:47:36.093891116Z" level=info msg="CreateContainer within sandbox \"a738122a7f341e546e43965b9d9c77a0e4e63e24c2ec15fe8e108ab45bfc4933\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"503fa151f56876e97a293d444d8ce01a1fbf3b6e6d79ec65c8ec81e5450d4c36\"" Jan 14 23:47:36.094874 containerd[1618]: time="2026-01-14T23:47:36.094843401Z" level=info msg="StartContainer for \"503fa151f56876e97a293d444d8ce01a1fbf3b6e6d79ec65c8ec81e5450d4c36\"" Jan 14 23:47:36.096504 containerd[1618]: time="2026-01-14T23:47:36.096367969Z" level=info msg="connecting to shim 503fa151f56876e97a293d444d8ce01a1fbf3b6e6d79ec65c8ec81e5450d4c36" address="unix:///run/containerd/s/51438c94343faa25432423961033a541073ef33d9e92994c67d6ececddba351a" protocol=ttrpc version=3 Jan 14 23:47:36.118044 kubelet[2853]: E0114 23:47:36.117992 2853 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7bfc4b59c4-h9rl4" podUID="0595d5fe-8d8f-4e95-8e85-0c22f59bd781" Jan 14 23:47:36.130634 systemd[1]: Started cri-containerd-503fa151f56876e97a293d444d8ce01a1fbf3b6e6d79ec65c8ec81e5450d4c36.scope - libcontainer container 503fa151f56876e97a293d444d8ce01a1fbf3b6e6d79ec65c8ec81e5450d4c36. 
Jan 14 23:47:36.151000 audit: BPF prog-id=268 op=LOAD Jan 14 23:47:36.156387 kernel: audit: type=1334 audit(1768434456.151:907): prog-id=268 op=LOAD Jan 14 23:47:36.156515 kernel: audit: type=1334 audit(1768434456.152:908): prog-id=269 op=LOAD Jan 14 23:47:36.156536 kernel: audit: type=1300 audit(1768434456.152:908): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2557 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:36.152000 audit: BPF prog-id=269 op=LOAD Jan 14 23:47:36.152000 audit[5334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2557 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:36.152000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530336661313531663536383736653937613239336434343464386365 Jan 14 23:47:36.160509 kernel: audit: type=1327 audit(1768434456.152:908): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530336661313531663536383736653937613239336434343464386365 Jan 14 23:47:36.162596 kernel: audit: type=1334 audit(1768434456.153:909): prog-id=269 op=UNLOAD Jan 14 23:47:36.153000 audit: BPF prog-id=269 op=UNLOAD Jan 14 23:47:36.153000 audit[5334]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:36.165430 kernel: audit: type=1300 audit(1768434456.153:909): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:36.153000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530336661313531663536383736653937613239336434343464386365 Jan 14 23:47:36.156000 audit: BPF prog-id=270 op=LOAD Jan 14 23:47:36.156000 audit[5334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2557 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:36.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530336661313531663536383736653937613239336434343464386365 Jan 14 23:47:36.156000 audit: BPF prog-id=271 op=LOAD Jan 14 23:47:36.156000 audit[5334]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2557 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:36.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530336661313531663536383736653937613239336434343464386365 Jan 14 23:47:36.156000 audit: BPF prog-id=271 op=UNLOAD Jan 14 23:47:36.156000 audit[5334]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:36.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530336661313531663536383736653937613239336434343464386365 Jan 14 23:47:36.156000 audit: BPF prog-id=270 op=UNLOAD Jan 14 23:47:36.156000 audit[5334]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2557 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:36.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530336661313531663536383736653937613239336434343464386365 Jan 14 23:47:36.156000 audit: BPF prog-id=272 op=LOAD Jan 14 23:47:36.156000 audit[5334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2557 pid=5334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:36.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530336661313531663536383736653937613239336434343464386365 Jan 14 23:47:36.203761 containerd[1618]: time="2026-01-14T23:47:36.203684982Z" level=info msg="StartContainer for \"503fa151f56876e97a293d444d8ce01a1fbf3b6e6d79ec65c8ec81e5450d4c36\" returns successfully"