Jan 14 23:42:40.411224 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 14 23:42:40.411249 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Wed Jan 14 22:02:18 -00 2026 Jan 14 23:42:40.411260 kernel: KASLR enabled Jan 14 23:42:40.411266 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Jan 14 23:42:40.411271 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Jan 14 23:42:40.411277 kernel: random: crng init done Jan 14 23:42:40.411285 kernel: secureboot: Secure boot disabled Jan 14 23:42:40.411291 kernel: ACPI: Early table checksum verification disabled Jan 14 23:42:40.411297 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Jan 14 23:42:40.411305 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Jan 14 23:42:40.411311 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 23:42:40.411317 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 23:42:40.411323 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 23:42:40.411329 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 23:42:40.411338 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 23:42:40.411344 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 23:42:40.411351 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 23:42:40.411357 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 23:42:40.411363 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 14 23:42:40.411370 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Jan 14 23:42:40.411376 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jan 14 23:42:40.411383 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 14 23:42:40.411389 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Jan 14 23:42:40.411397 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff] Jan 14 23:42:40.411403 kernel: Zone ranges: Jan 14 23:42:40.411409 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 14 23:42:40.411416 kernel: DMA32 empty Jan 14 23:42:40.411422 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Jan 14 23:42:40.411428 kernel: Device empty Jan 14 23:42:40.411435 kernel: Movable zone start for each node Jan 14 23:42:40.411441 kernel: Early memory node ranges Jan 14 23:42:40.411448 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Jan 14 23:42:40.411454 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Jan 14 23:42:40.411461 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Jan 14 23:42:40.411467 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Jan 14 23:42:40.411475 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Jan 14 23:42:40.411481 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Jan 14 23:42:40.411487 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Jan 14 23:42:40.411494 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Jan 14 23:42:40.411500 kernel: node 0: [mem 
0x0000000139fe0000-0x0000000139ffffff] Jan 14 23:42:40.411509 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Jan 14 23:42:40.411518 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jan 14 23:42:40.411525 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1 Jan 14 23:42:40.411532 kernel: psci: probing for conduit method from ACPI. Jan 14 23:42:40.411539 kernel: psci: PSCIv1.1 detected in firmware. Jan 14 23:42:40.411545 kernel: psci: Using standard PSCI v0.2 function IDs Jan 14 23:42:40.411552 kernel: psci: Trusted OS migration not required Jan 14 23:42:40.411559 kernel: psci: SMC Calling Convention v1.1 Jan 14 23:42:40.411568 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 14 23:42:40.411578 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 14 23:42:40.412298 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 14 23:42:40.412308 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 14 23:42:40.412315 kernel: Detected PIPT I-cache on CPU0 Jan 14 23:42:40.412323 kernel: CPU features: detected: GIC system register CPU interface Jan 14 23:42:40.412329 kernel: CPU features: detected: Spectre-v4 Jan 14 23:42:40.412336 kernel: CPU features: detected: Spectre-BHB Jan 14 23:42:40.412343 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 14 23:42:40.412350 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 14 23:42:40.412357 kernel: CPU features: detected: ARM erratum 1418040 Jan 14 23:42:40.412365 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 14 23:42:40.412377 kernel: alternatives: applying boot alternatives Jan 14 23:42:40.412385 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e4a6d042213df6c386c00b2ef561482ef59cf24ca6770345ce520c577e366e5a Jan 14 23:42:40.412393 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 23:42:40.412400 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 14 23:42:40.412407 kernel: Fallback order for Node 0: 0 Jan 14 23:42:40.412414 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000 Jan 14 23:42:40.412420 kernel: Policy zone: Normal Jan 14 23:42:40.412427 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 23:42:40.412434 kernel: software IO TLB: area num 2. Jan 14 23:42:40.412441 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB) Jan 14 23:42:40.412449 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 23:42:40.412456 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 23:42:40.412464 kernel: rcu: RCU event tracing is enabled. Jan 14 23:42:40.412471 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 23:42:40.412478 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 23:42:40.412485 kernel: Tracing variant of Tasks RCU enabled. Jan 14 23:42:40.412492 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 23:42:40.412499 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 23:42:40.412506 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 14 23:42:40.412513 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 23:42:40.412519 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 14 23:42:40.412528 kernel: GICv3: 256 SPIs implemented Jan 14 23:42:40.412534 kernel: GICv3: 0 Extended SPIs implemented Jan 14 23:42:40.412541 kernel: Root IRQ handler: gic_handle_irq Jan 14 23:42:40.412548 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 14 23:42:40.412555 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jan 14 23:42:40.412562 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 14 23:42:40.412569 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 14 23:42:40.412576 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1) Jan 14 23:42:40.412594 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1) Jan 14 23:42:40.413961 kernel: GICv3: using LPI property table @0x0000000100120000 Jan 14 23:42:40.413972 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000 Jan 14 23:42:40.413984 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 14 23:42:40.413991 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 14 23:42:40.413998 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 14 23:42:40.414005 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 14 23:42:40.414019 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 14 23:42:40.414028 kernel: Console: colour dummy device 80x25 Jan 14 23:42:40.414035 kernel: ACPI: Core revision 20240827 Jan 14 23:42:40.414048 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 14 23:42:40.414056 kernel: pid_max: default: 32768 minimum: 301 Jan 14 23:42:40.414065 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 14 23:42:40.414073 kernel: landlock: Up and running. Jan 14 23:42:40.414080 kernel: SELinux: Initializing. Jan 14 23:42:40.414087 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 14 23:42:40.414095 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 14 23:42:40.414102 kernel: rcu: Hierarchical SRCU implementation. Jan 14 23:42:40.414110 kernel: rcu: Max phase no-delay instances is 400. Jan 14 23:42:40.414117 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 14 23:42:40.414126 kernel: Remapping and enabling EFI services. Jan 14 23:42:40.414133 kernel: smp: Bringing up secondary CPUs ... Jan 14 23:42:40.414141 kernel: Detected PIPT I-cache on CPU1 Jan 14 23:42:40.414148 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 14 23:42:40.414156 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000 Jan 14 23:42:40.414163 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 14 23:42:40.414170 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 14 23:42:40.414179 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 23:42:40.414186 kernel: SMP: Total of 2 processors activated. 
Jan 14 23:42:40.414198 kernel: CPU: All CPU(s) started at EL1 Jan 14 23:42:40.414208 kernel: CPU features: detected: 32-bit EL0 Support Jan 14 23:42:40.414215 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 14 23:42:40.414223 kernel: CPU features: detected: Common not Private translations Jan 14 23:42:40.414231 kernel: CPU features: detected: CRC32 instructions Jan 14 23:42:40.414239 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 14 23:42:40.414248 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 14 23:42:40.414256 kernel: CPU features: detected: LSE atomic instructions Jan 14 23:42:40.414263 kernel: CPU features: detected: Privileged Access Never Jan 14 23:42:40.414271 kernel: CPU features: detected: RAS Extension Support Jan 14 23:42:40.414279 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 14 23:42:40.414288 kernel: alternatives: applying system-wide alternatives Jan 14 23:42:40.414296 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jan 14 23:42:40.414304 kernel: Memory: 3885988K/4096000K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12416K init, 1038K bss, 188532K reserved, 16384K cma-reserved) Jan 14 23:42:40.414313 kernel: devtmpfs: initialized Jan 14 23:42:40.414320 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 23:42:40.414328 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 23:42:40.414336 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 14 23:42:40.414344 kernel: 0 pages in range for non-PLT usage Jan 14 23:42:40.414353 kernel: 515184 pages in range for PLT usage Jan 14 23:42:40.414360 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 23:42:40.414368 kernel: SMBIOS 3.0.0 present. Jan 14 23:42:40.414375 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Jan 14 23:42:40.414383 kernel: DMI: Memory slots populated: 1/1 Jan 14 23:42:40.414391 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 23:42:40.414399 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 14 23:42:40.414408 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 14 23:42:40.414416 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 14 23:42:40.414424 kernel: audit: initializing netlink subsys (disabled) Jan 14 23:42:40.414432 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1 Jan 14 23:42:40.414439 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 23:42:40.414447 kernel: cpuidle: using governor menu Jan 14 23:42:40.414455 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 14 23:42:40.414465 kernel: ASID allocator initialised with 32768 entries Jan 14 23:42:40.414472 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 23:42:40.414480 kernel: Serial: AMBA PL011 UART driver Jan 14 23:42:40.414488 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 23:42:40.414495 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 23:42:40.414503 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 14 23:42:40.414511 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 14 23:42:40.414518 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 23:42:40.414528 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 23:42:40.414535 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 14 23:42:40.414543 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 14 23:42:40.414550 kernel: ACPI: Added _OSI(Module Device) Jan 14 23:42:40.414558 kernel: ACPI: Added _OSI(Processor Device) Jan 14 23:42:40.414566 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 23:42:40.414573 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 23:42:40.414591 kernel: ACPI: Interpreter enabled Jan 14 23:42:40.414599 kernel: ACPI: Using GIC for interrupt routing Jan 14 23:42:40.414617 kernel: ACPI: MCFG table detected, 1 entries Jan 14 23:42:40.414625 kernel: ACPI: CPU0 has been hot-added Jan 14 23:42:40.414633 kernel: ACPI: CPU1 has been hot-added Jan 14 23:42:40.414640 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 14 23:42:40.414648 kernel: printk: legacy console [ttyAMA0] enabled Jan 14 23:42:40.414658 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 14 23:42:40.414846 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 14 23:42:40.414935 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 14 23:42:40.415053 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 14 23:42:40.415148 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 14 23:42:40.415231 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 14 23:42:40.415246 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 14 23:42:40.415254 kernel: PCI host bridge to bus 0000:00 Jan 14 23:42:40.415348 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 14 23:42:40.415431 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 14 23:42:40.415515 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 14 23:42:40.417625 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 14 23:42:40.417781 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Jan 14 23:42:40.417876 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint Jan 14 23:42:40.417974 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff] Jan 14 23:42:40.418072 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref] Jan 14 23:42:40.418166 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 23:42:40.418250 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff] Jan 14 23:42:40.418331 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 14 
23:42:40.418410 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Jan 14 23:42:40.418489 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref] Jan 14 23:42:40.418579 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 23:42:40.418690 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff] Jan 14 23:42:40.418779 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 14 23:42:40.418859 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff] Jan 14 23:42:40.418949 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 23:42:40.419070 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff] Jan 14 23:42:40.419156 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 14 23:42:40.419236 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff] Jan 14 23:42:40.419319 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref] Jan 14 23:42:40.419406 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 23:42:40.419487 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff] Jan 14 23:42:40.419565 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 14 23:42:40.420092 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff] Jan 14 23:42:40.420188 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref] Jan 14 23:42:40.420287 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 23:42:40.420369 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff] Jan 14 23:42:40.420447 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 14 23:42:40.420525 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 14 23:42:40.420629 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref] Jan 14 23:42:40.421839 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 23:42:40.421976 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff] Jan 14 23:42:40.422904 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 14 23:42:40.423066 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff] Jan 14 23:42:40.423179 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref] Jan 14 23:42:40.423288 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 23:42:40.423394 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff] Jan 14 23:42:40.423494 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 14 23:42:40.423615 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff] Jan 14 23:42:40.423722 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref] Jan 14 23:42:40.423826 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 23:42:40.424374 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff] Jan 14 23:42:40.424478 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 14 23:42:40.426737 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff] Jan 14 23:42:40.426869 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 14 23:42:40.426958 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff] Jan 14 23:42:40.427056 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 14 23:42:40.427149 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff] Jan 14 23:42:40.427237 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 
0x070002 conventional PCI endpoint Jan 14 23:42:40.427321 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007] Jan 14 23:42:40.427421 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 14 23:42:40.427508 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff] Jan 14 23:42:40.427617 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Jan 14 23:42:40.427712 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 14 23:42:40.427805 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Jan 14 23:42:40.427890 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit] Jan 14 23:42:40.427984 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint Jan 14 23:42:40.428110 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff] Jan 14 23:42:40.428200 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref] Jan 14 23:42:40.428289 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 14 23:42:40.428374 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref] Jan 14 23:42:40.428463 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 14 23:42:40.428545 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff] Jan 14 23:42:40.429166 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref] Jan 14 23:42:40.429296 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint Jan 14 23:42:40.429382 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff] Jan 14 23:42:40.429466 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref] Jan 14 23:42:40.429555 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 14 23:42:40.430370 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff] Jan 14 23:42:40.430473 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref] Jan 14 23:42:40.430556 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 14 23:42:40.430664 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jan 14 23:42:40.430750 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jan 14 23:42:40.430830 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jan 14 23:42:40.430915 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jan 14 23:42:40.430999 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jan 14 23:42:40.431098 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jan 14 23:42:40.431186 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 14 23:42:40.431267 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jan 14 23:42:40.431346 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jan 14 23:42:40.431430 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 14 23:42:40.431516 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jan 14 23:42:40.433087 kernel: pci 0000:00:02.3: bridge window [mem 
0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jan 14 23:42:40.433244 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 14 23:42:40.433330 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jan 14 23:42:40.433411 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Jan 14 23:42:40.433504 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 14 23:42:40.433606 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jan 14 23:42:40.433690 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jan 14 23:42:40.433773 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 14 23:42:40.433854 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Jan 14 23:42:40.433933 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Jan 14 23:42:40.434031 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 14 23:42:40.434116 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jan 14 23:42:40.434196 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jan 14 23:42:40.434279 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 14 23:42:40.434358 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jan 14 23:42:40.434437 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Jan 14 23:42:40.434528 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned Jan 14 23:42:40.434697 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned Jan 14 23:42:40.434795 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned Jan 14 23:42:40.434876 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Jan 14 23:42:40.434958 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned Jan 14 23:42:40.435083 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Jan 14 23:42:40.435179 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned Jan 14 23:42:40.435259 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Jan 14 23:42:40.435354 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned Jan 14 23:42:40.435431 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Jan 14 23:42:40.435513 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Jan 14 23:42:40.435617 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Jan 14 23:42:40.435707 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Jan 14 23:42:40.435787 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Jan 14 
23:42:40.435868 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Jan 14 23:42:40.435947 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Jan 14 23:42:40.436038 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned Jan 14 23:42:40.436121 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Jan 14 23:42:40.436207 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned Jan 14 23:42:40.436291 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned Jan 14 23:42:40.436372 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned Jan 14 23:42:40.436451 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned Jan 14 23:42:40.436532 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned Jan 14 23:42:40.436632 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Jan 14 23:42:40.436715 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned Jan 14 23:42:40.436799 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Jan 14 23:42:40.436880 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned Jan 14 23:42:40.436958 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Jan 14 23:42:40.437051 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned Jan 14 23:42:40.437131 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Jan 14 23:42:40.437210 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned Jan 14 23:42:40.437288 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Jan 14 23:42:40.437368 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned Jan 14 23:42:40.437448 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Jan 14 23:42:40.437526 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned Jan 14 23:42:40.437662 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Jan 14 23:42:40.437749 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned Jan 14 23:42:40.437828 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned Jan 14 23:42:40.437914 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned Jan 14 23:42:40.438001 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Jan 14 23:42:40.438121 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jan 14 23:42:40.438211 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Jan 14 23:42:40.438292 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 14 23:42:40.438372 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 14 23:42:40.438451 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Jan 14 23:42:40.438531 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 14 23:42:40.438640 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Jan 14 23:42:40.438727 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 14 23:42:40.438807 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 14 23:42:40.438886 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Jan 14 23:42:40.438974 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jan 14 23:42:40.439078 kernel: pci 0000:03:00.0: BAR 4 [mem 
0x8000400000-0x8000403fff 64bit pref]: assigned Jan 14 23:42:40.439164 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Jan 14 23:42:40.439249 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 14 23:42:40.439332 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 14 23:42:40.439411 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Jan 14 23:42:40.439489 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 14 23:42:40.439574 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Jan 14 23:42:40.439674 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 14 23:42:40.439762 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 14 23:42:40.439843 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Jan 14 23:42:40.439922 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 14 23:42:40.440006 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned Jan 14 23:42:40.440143 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Jan 14 23:42:40.440231 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 14 23:42:40.441116 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 14 23:42:40.441211 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 14 23:42:40.443519 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 14 23:42:40.443666 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Jan 14 23:42:40.443755 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Jan 14 23:42:40.443846 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 14 23:42:40.443939 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 14 23:42:40.444059 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 14 23:42:40.444150 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 14 23:42:40.444238 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned Jan 14 23:42:40.444320 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned Jan 14 23:42:40.444404 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned Jan 14 23:42:40.444486 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 14 23:42:40.444568 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 14 23:42:40.444687 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 14 23:42:40.444776 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 14 23:42:40.444859 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 14 23:42:40.444942 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 14 23:42:40.445035 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 14 23:42:40.445120 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 14 23:42:40.445204 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 14 23:42:40.445283 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Jan 14 23:42:40.445361 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 14 23:42:40.445440 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 14 23:42:40.445520 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 14 23:42:40.445604 kernel: pci_bus 0000:00: 
resource 5 [io 0x0000-0xffff window] Jan 14 23:42:40.445683 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 14 23:42:40.445769 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 14 23:42:40.445843 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 14 23:42:40.445917 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 14 23:42:40.445998 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 14 23:42:40.446089 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 14 23:42:40.446167 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 14 23:42:40.446255 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 14 23:42:40.446330 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 14 23:42:40.446405 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 14 23:42:40.446486 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 14 23:42:40.446567 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 14 23:42:40.448730 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 14 23:42:40.448832 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 14 23:42:40.448910 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 14 23:42:40.448984 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 14 23:42:40.449115 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 14 23:42:40.449198 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 14 23:42:40.449279 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 14 23:42:40.449365 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 14 23:42:40.449440 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 14 23:42:40.449514 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 14 23:42:40.449617 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 14 23:42:40.449696 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 14 23:42:40.449773 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 14 23:42:40.449860 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 14 23:42:40.449935 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 14 23:42:40.450010 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 14 23:42:40.450033 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 14 23:42:40.450041 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 14 23:42:40.450050 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 14 23:42:40.450058 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 14 23:42:40.450066 kernel: iommu: Default domain type: Translated Jan 14 23:42:40.450074 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 14 23:42:40.450083 kernel: efivars: Registered efivars operations Jan 14 23:42:40.450092 kernel: vgaarb: loaded Jan 14 23:42:40.450102 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 14 23:42:40.450110 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 23:42:40.450118 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 23:42:40.450127 kernel: pnp: PnP ACPI init Jan 14 23:42:40.450228 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 14 
23:42:40.450242 kernel: pnp: PnP ACPI: found 1 devices Jan 14 23:42:40.450251 kernel: NET: Registered PF_INET protocol family Jan 14 23:42:40.450260 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 14 23:42:40.450268 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 14 23:42:40.450277 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 23:42:40.450285 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 14 23:42:40.450293 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 14 23:42:40.450303 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 14 23:42:40.450311 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 14 23:42:40.450319 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 14 23:42:40.450327 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 23:42:40.450424 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 14 23:42:40.450436 kernel: PCI: CLS 0 bytes, default 64 Jan 14 23:42:40.450445 kernel: kvm [1]: HYP mode not available Jan 14 23:42:40.450454 kernel: Initialise system trusted keyrings Jan 14 23:42:40.450464 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 14 23:42:40.450472 kernel: Key type asymmetric registered Jan 14 23:42:40.450480 kernel: Asymmetric key parser 'x509' registered Jan 14 23:42:40.450488 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 14 23:42:40.450497 kernel: io scheduler mq-deadline registered Jan 14 23:42:40.450505 kernel: io scheduler kyber registered Jan 14 23:42:40.450513 kernel: io scheduler bfq registered Jan 14 23:42:40.450523 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 14 23:42:40.450763 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 14 23:42:40.450853 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 14 23:42:40.450934 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:42:40.451056 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 14 23:42:40.451150 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jan 14 23:42:40.451237 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:42:40.451320 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 14 23:42:40.451400 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 14 23:42:40.451479 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:42:40.451561 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 14 23:42:40.451663 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 14 23:42:40.451745 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:42:40.451832 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 14 23:42:40.451916 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 14 23:42:40.451995 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:42:40.452094 kernel: pcieport 
0000:00:02.5: PME: Signaling with IRQ 55 Jan 14 23:42:40.452182 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 14 23:42:40.452301 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:42:40.453543 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 14 23:42:40.453674 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 14 23:42:40.453760 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:42:40.453845 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 14 23:42:40.453933 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 14 23:42:40.454060 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:42:40.454082 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 14 23:42:40.454176 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 14 23:42:40.454261 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 14 23:42:40.454342 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:42:40.454353 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 14 23:42:40.454362 kernel: ACPI: button: Power Button [PWRB] Jan 14 23:42:40.454370 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 14 23:42:40.454463 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 14 23:42:40.454552 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 14 23:42:40.454563 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 23:42:40.454572 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 14 23:42:40.454684 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 14 23:42:40.454696 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 14 23:42:40.454704 kernel: thunder_xcv, ver 1.0 Jan 14 23:42:40.454716 kernel: thunder_bgx, ver 1.0 Jan 14 23:42:40.454724 kernel: nicpf, ver 1.0 Jan 14 23:42:40.454732 kernel: nicvf, ver 1.0 Jan 14 23:42:40.454831 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 14 23:42:40.454912 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-14T23:42:39 UTC (1768434159) Jan 14 23:42:40.454924 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 23:42:40.454934 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 14 23:42:40.454942 kernel: watchdog: NMI not fully supported Jan 14 23:42:40.454950 kernel: watchdog: Hard watchdog permanently disabled Jan 14 23:42:40.454959 kernel: NET: Registered PF_INET6 protocol family Jan 14 23:42:40.454967 kernel: Segment Routing with IPv6 Jan 14 23:42:40.454975 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 23:42:40.454983 kernel: NET: Registered PF_PACKET protocol family Jan 14 23:42:40.454992 kernel: Key type dns_resolver registered Jan 14 23:42:40.455000 kernel: registered taskstats version 1 Jan 14 23:42:40.455008 kernel: Loading compiled-in X.509 certificates Jan 14 23:42:40.455028 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: a690a20944211e11dad41e677dd7158a4ddc3c87' Jan 14 23:42:40.455036 kernel: Demotion targets for Node 0: null Jan 14 23:42:40.455044 kernel: Key type .fscrypt 
registered Jan 14 23:42:40.455052 kernel: Key type fscrypt-provisioning registered Jan 14 23:42:40.455062 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 23:42:40.455071 kernel: ima: Allocated hash algorithm: sha1 Jan 14 23:42:40.455079 kernel: ima: No architecture policies found Jan 14 23:42:40.455088 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 14 23:42:40.455096 kernel: clk: Disabling unused clocks Jan 14 23:42:40.455104 kernel: PM: genpd: Disabling unused power domains Jan 14 23:42:40.455113 kernel: Freeing unused kernel memory: 12416K Jan 14 23:42:40.455121 kernel: Run /init as init process Jan 14 23:42:40.455130 kernel: with arguments: Jan 14 23:42:40.455139 kernel: /init Jan 14 23:42:40.455147 kernel: with environment: Jan 14 23:42:40.455155 kernel: HOME=/ Jan 14 23:42:40.455164 kernel: TERM=linux Jan 14 23:42:40.455172 kernel: ACPI: bus type USB registered Jan 14 23:42:40.455180 kernel: usbcore: registered new interface driver usbfs Jan 14 23:42:40.455190 kernel: usbcore: registered new interface driver hub Jan 14 23:42:40.455199 kernel: usbcore: registered new device driver usb Jan 14 23:42:40.455301 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 14 23:42:40.455387 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 14 23:42:40.455469 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 14 23:42:40.455554 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 14 23:42:40.457747 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 14 23:42:40.457858 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 14 23:42:40.457977 kernel: hub 1-0:1.0: USB hub found Jan 14 23:42:40.458103 kernel: hub 1-0:1.0: 4 ports detected Jan 14 23:42:40.458211 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 14 23:42:40.458310 kernel: hub 2-0:1.0: USB hub found Jan 14 23:42:40.458406 kernel: hub 2-0:1.0: 4 ports detected Jan 14 23:42:40.458416 kernel: SCSI subsystem initialized Jan 14 23:42:40.458517 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Jan 14 23:42:40.458631 kernel: scsi host0: Virtio SCSI HBA Jan 14 23:42:40.458736 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 14 23:42:40.458842 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 14 23:42:40.458931 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 14 23:42:40.459058 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 14 23:42:40.459159 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 14 23:42:40.459249 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 14 23:42:40.459348 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 14 23:42:40.459359 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 14 23:42:40.459367 kernel: GPT:25804799 != 80003071 Jan 14 23:42:40.459377 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 14 23:42:40.459385 kernel: GPT:25804799 != 80003071 Jan 14 23:42:40.459393 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 14 23:42:40.459402 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 23:42:40.459490 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 14 23:42:40.459579 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 14 23:42:40.460619 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 14 23:42:40.460634 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 23:42:40.460727 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 14 23:42:40.460739 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 23:42:40.460747 kernel: device-mapper: uevent: version 1.0.3 Jan 14 23:42:40.460762 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 23:42:40.460771 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 14 23:42:40.460779 kernel: raid6: neonx8 gen() 15673 MB/s Jan 14 23:42:40.460787 kernel: raid6: neonx4 gen() 13659 MB/s Jan 14 23:42:40.460794 kernel: raid6: neonx2 gen() 12959 MB/s Jan 14 23:42:40.460803 kernel: raid6: neonx1 gen() 10306 MB/s Jan 14 23:42:40.460810 kernel: raid6: int64x8 gen() 6754 MB/s Jan 14 23:42:40.460820 kernel: raid6: int64x4 gen() 5616 MB/s Jan 14 23:42:40.460829 kernel: raid6: int64x2 gen() 6060 MB/s Jan 14 23:42:40.460941 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 14 23:42:40.460954 kernel: raid6: int64x1 gen() 5020 MB/s Jan 14 23:42:40.460963 kernel: raid6: using algorithm neonx8 gen() 15673 MB/s Jan 14 23:42:40.460971 kernel: raid6: .... xor() 11926 MB/s, rmw enabled Jan 14 23:42:40.460981 kernel: raid6: using neon recovery algorithm Jan 14 23:42:40.460989 kernel: xor: measuring software checksum speed Jan 14 23:42:40.460998 kernel: 8regs : 21579 MB/sec Jan 14 23:42:40.461006 kernel: 32regs : 21681 MB/sec Jan 14 23:42:40.461061 kernel: arm64_neon : 25793 MB/sec Jan 14 23:42:40.461070 kernel: xor: using function: arm64_neon (25793 MB/sec) Jan 14 23:42:40.461078 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 23:42:40.461088 kernel: BTRFS: device fsid 78d59ed4-d19c-4fcc-8998-5f0c19b42daf devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (213) Jan 14 23:42:40.461100 kernel: BTRFS info (device dm-0): first mount of filesystem 78d59ed4-d19c-4fcc-8998-5f0c19b42daf Jan 14 23:42:40.461110 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 14 23:42:40.461119 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 14 23:42:40.461127 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 23:42:40.461136 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 23:42:40.461144 kernel: loop: module loaded Jan 14 23:42:40.461152 kernel: loop0: detected capacity change from 0 to 91488 Jan 14 23:42:40.461164 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 23:42:40.461284 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 14 23:42:40.461298 systemd[1]: Successfully made /usr/ read-only. Jan 14 23:42:40.461309 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 23:42:40.461318 systemd[1]: Detected virtualization kvm. 
Jan 14 23:42:40.461329 systemd[1]: Detected architecture arm64. Jan 14 23:42:40.461338 systemd[1]: Running in initrd. Jan 14 23:42:40.461347 systemd[1]: No hostname configured, using default hostname. Jan 14 23:42:40.461355 systemd[1]: Hostname set to . Jan 14 23:42:40.461364 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 23:42:40.461372 systemd[1]: Queued start job for default target initrd.target. Jan 14 23:42:40.461383 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 23:42:40.461392 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 23:42:40.461400 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 23:42:40.461410 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 23:42:40.461419 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 23:42:40.461428 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 23:42:40.461439 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 23:42:40.461448 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 23:42:40.461458 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 23:42:40.461466 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 23:42:40.461474 systemd[1]: Reached target paths.target - Path Units. Jan 14 23:42:40.461483 systemd[1]: Reached target slices.target - Slice Units. Jan 14 23:42:40.461491 systemd[1]: Reached target swap.target - Swaps. Jan 14 23:42:40.461501 systemd[1]: Reached target timers.target - Timer Units. Jan 14 23:42:40.461509 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 23:42:40.461518 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 23:42:40.461526 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 23:42:40.461535 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 23:42:40.461544 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 14 23:42:40.461554 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 23:42:40.461563 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 23:42:40.461571 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 23:42:40.461595 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 23:42:40.461606 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 23:42:40.461617 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 23:42:40.461627 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 23:42:40.461639 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 23:42:40.461648 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 23:42:40.461656 systemd[1]: Starting systemd-fsck-usr.service... 
Jan 14 23:42:40.461666 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 23:42:40.461675 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 23:42:40.461686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 23:42:40.461695 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 23:42:40.461704 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 23:42:40.461739 systemd-journald[350]: Collecting audit messages is enabled. Jan 14 23:42:40.461762 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 23:42:40.461771 kernel: audit: type=1130 audit(1768434160.405:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.461780 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 23:42:40.461789 kernel: audit: type=1130 audit(1768434160.408:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.461799 kernel: Bridge firewalling registered Jan 14 23:42:40.461808 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 23:42:40.461816 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 23:42:40.461825 kernel: audit: type=1130 audit(1768434160.421:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.461834 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 23:42:40.461843 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 23:42:40.461852 kernel: audit: type=1130 audit(1768434160.453:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.461863 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 23:42:40.461872 systemd-journald[350]: Journal started Jan 14 23:42:40.461891 systemd-journald[350]: Runtime Journal (/run/log/journal/39acadcf220c455d9ce52f14cdaf9739) is 8M, max 76.5M, 68.5M free. Jan 14 23:42:40.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:42:40.416112 systemd-modules-load[351]: Inserted module 'br_netfilter' Jan 14 23:42:40.465699 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 23:42:40.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.469310 kernel: audit: type=1130 audit(1768434160.465:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.468943 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:42:40.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.472851 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 23:42:40.476148 kernel: audit: type=1130 audit(1768434160.469:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.476181 kernel: audit: type=1130 audit(1768434160.472:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.480093 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 23:42:40.481000 audit: BPF prog-id=6 op=LOAD Jan 14 23:42:40.486613 kernel: audit: type=1334 audit(1768434160.481:9): prog-id=6 op=LOAD Jan 14 23:42:40.485303 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 23:42:40.489864 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 23:42:40.493710 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 23:42:40.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.499260 kernel: audit: type=1130 audit(1768434160.493:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.510367 systemd-tmpfiles[377]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 23:42:40.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.510739 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 23:42:40.513800 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 23:42:40.523036 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 14 23:42:40.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.542745 dracut-cmdline[391]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=e4a6d042213df6c386c00b2ef561482ef59cf24ca6770345ce520c577e366e5a Jan 14 23:42:40.560137 systemd-resolved[376]: Positive Trust Anchors: Jan 14 23:42:40.560157 systemd-resolved[376]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 23:42:40.560160 systemd-resolved[376]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 23:42:40.560191 systemd-resolved[376]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 23:42:40.587459 systemd-resolved[376]: Defaulting to hostname 'linux'. Jan 14 23:42:40.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.588357 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 23:42:40.589191 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 23:42:40.657646 kernel: Loading iSCSI transport class v2.0-870. Jan 14 23:42:40.666913 kernel: iscsi: registered transport (tcp) Jan 14 23:42:40.684607 kernel: iscsi: registered transport (qla4xxx) Jan 14 23:42:40.684675 kernel: QLogic iSCSI HBA Driver Jan 14 23:42:40.712843 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 23:42:40.733790 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 23:42:40.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.736798 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 23:42:40.791658 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 23:42:40.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.795717 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 23:42:40.796997 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 23:42:40.841028 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 14 23:42:40.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.842000 audit: BPF prog-id=7 op=LOAD Jan 14 23:42:40.842000 audit: BPF prog-id=8 op=LOAD Jan 14 23:42:40.844118 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 23:42:40.878305 systemd-udevd[625]: Using default interface naming scheme 'v257'. Jan 14 23:42:40.887268 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 23:42:40.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.890859 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 23:42:40.923545 dracut-pre-trigger[677]: rd.md=0: removing MD RAID activation Jan 14 23:42:40.947962 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 23:42:40.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.949000 audit: BPF prog-id=9 op=LOAD Jan 14 23:42:40.951729 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 23:42:40.965194 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 23:42:40.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:40.969700 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 23:42:40.999800 systemd-networkd[760]: lo: Link UP Jan 14 23:42:40.999808 systemd-networkd[760]: lo: Gained carrier Jan 14 23:42:41.001502 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 23:42:41.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.002355 systemd[1]: Reached target network.target - Network. Jan 14 23:42:41.039969 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 23:42:41.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.041956 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 23:42:41.185609 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 14 23:42:41.190605 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 14 23:42:41.192597 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 14 23:42:41.203814 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Jan 14 23:42:41.219407 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 14 23:42:41.242465 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 14 23:42:41.247607 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 14 23:42:41.251843 kernel: usbcore: registered new interface driver usbhid Jan 14 23:42:41.251892 kernel: usbhid: USB HID core driver Jan 14 23:42:41.254969 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 14 23:42:41.256400 systemd-networkd[760]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:42:41.256404 systemd-networkd[760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 23:42:41.259883 systemd-networkd[760]: eth0: Link UP Jan 14 23:42:41.260089 systemd-networkd[760]: eth0: Gained carrier Jan 14 23:42:41.260104 systemd-networkd[760]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:42:41.262261 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 23:42:41.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.264910 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 23:42:41.265046 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:42:41.268561 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 23:42:41.274821 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 23:42:41.279270 systemd-networkd[760]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:42:41.279274 systemd-networkd[760]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 23:42:41.279633 systemd-networkd[760]: eth1: Link UP Jan 14 23:42:41.282784 systemd-networkd[760]: eth1: Gained carrier Jan 14 23:42:41.282799 systemd-networkd[760]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:42:41.289752 disk-uuid[815]: Primary Header is updated. Jan 14 23:42:41.289752 disk-uuid[815]: Secondary Entries is updated. Jan 14 23:42:41.289752 disk-uuid[815]: Secondary Header is updated. Jan 14 23:42:41.316463 systemd-networkd[760]: eth0: DHCPv4 address 49.13.216.16/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 14 23:42:41.327274 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:42:41.328347 systemd-networkd[760]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 14 23:42:41.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.368766 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jan 14 23:42:41.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:41.370047 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 23:42:41.371357 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 23:42:41.372900 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 23:42:41.376169 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 23:42:41.404978 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 23:42:41.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:42.320021 disk-uuid[817]: Warning: The kernel is still using the old partition table. Jan 14 23:42:42.320021 disk-uuid[817]: The new table will be used at the next reboot or after you Jan 14 23:42:42.320021 disk-uuid[817]: run partprobe(8) or kpartx(8) Jan 14 23:42:42.320021 disk-uuid[817]: The operation has completed successfully. Jan 14 23:42:42.331082 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 23:42:42.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:42.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:42.331283 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 23:42:42.334317 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 23:42:42.386705 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (844) Jan 14 23:42:42.386786 kernel: BTRFS info (device sda6): first mount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941 Jan 14 23:42:42.387923 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 23:42:42.391984 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 14 23:42:42.392060 kernel: BTRFS info (device sda6): turning on async discard Jan 14 23:42:42.392080 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 23:42:42.400628 kernel: BTRFS info (device sda6): last unmount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941 Jan 14 23:42:42.400987 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 23:42:42.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:42.403971 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 23:42:42.533186 ignition[863]: Ignition 2.22.0 Jan 14 23:42:42.533204 ignition[863]: Stage: fetch-offline Jan 14 23:42:42.533249 ignition[863]: no configs at "/usr/lib/ignition/base.d" Jan 14 23:42:42.533259 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 23:42:42.538026 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 14 23:42:42.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:42.533422 ignition[863]: parsed url from cmdline: "" Jan 14 23:42:42.533426 ignition[863]: no config URL provided Jan 14 23:42:42.533431 ignition[863]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 23:42:42.533439 ignition[863]: no config at "/usr/lib/ignition/user.ign" Jan 14 23:42:42.544492 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 14 23:42:42.533443 ignition[863]: failed to fetch config: resource requires networking Jan 14 23:42:42.533728 ignition[863]: Ignition finished successfully Jan 14 23:42:42.577644 ignition[872]: Ignition 2.22.0 Jan 14 23:42:42.578403 ignition[872]: Stage: fetch Jan 14 23:42:42.578601 ignition[872]: no configs at "/usr/lib/ignition/base.d" Jan 14 23:42:42.578611 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 23:42:42.578709 ignition[872]: parsed url from cmdline: "" Jan 14 23:42:42.578713 ignition[872]: no config URL provided Jan 14 23:42:42.578717 ignition[872]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 23:42:42.578723 ignition[872]: no config at "/usr/lib/ignition/user.ign" Jan 14 23:42:42.578757 ignition[872]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 14 23:42:42.585557 ignition[872]: GET result: OK Jan 14 23:42:42.586135 ignition[872]: parsing config with SHA512: 55e2d626d3770c3d31ffbd1dc16ac651573edc52b042b8059ac436c0f154615b82c3d628fbc0865841b0143e8a6264d27f578a01f13b0eefccb503998c74e977 Jan 14 23:42:42.590159 unknown[872]: fetched base config from "system" Jan 14 23:42:42.590168 unknown[872]: fetched base config from "system" Jan 14 23:42:42.590173 unknown[872]: fetched user config from "hetzner" Jan 14 23:42:42.592719 ignition[872]: fetch: fetch complete Jan 14 23:42:42.592729 ignition[872]: fetch: fetch passed Jan 14 23:42:42.592835 ignition[872]: Ignition finished successfully Jan 14 23:42:42.596551 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 23:42:42.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:42.601037 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 23:42:42.638813 ignition[878]: Ignition 2.22.0 Jan 14 23:42:42.638828 ignition[878]: Stage: kargs Jan 14 23:42:42.638986 ignition[878]: no configs at "/usr/lib/ignition/base.d" Jan 14 23:42:42.639007 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 23:42:42.644794 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 23:42:42.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:42.639826 ignition[878]: kargs: kargs passed Jan 14 23:42:42.639880 ignition[878]: Ignition finished successfully Jan 14 23:42:42.648391 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 14 23:42:42.689773 systemd-networkd[760]: eth0: Gained IPv6LL Jan 14 23:42:42.695475 ignition[885]: Ignition 2.22.0 Jan 14 23:42:42.696155 ignition[885]: Stage: disks Jan 14 23:42:42.696745 ignition[885]: no configs at "/usr/lib/ignition/base.d" Jan 14 23:42:42.697286 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 23:42:42.698257 ignition[885]: disks: disks passed Jan 14 23:42:42.698313 ignition[885]: Ignition finished successfully Jan 14 23:42:42.701261 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 23:42:42.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:42.702313 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 23:42:42.703331 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 23:42:42.704505 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 23:42:42.705761 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 23:42:42.706747 systemd[1]: Reached target basic.target - Basic System. Jan 14 23:42:42.708821 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 23:42:42.752207 systemd-fsck[893]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 14 23:42:42.758073 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 23:42:42.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:42.760719 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 23:42:42.842657 kernel: EXT4-fs (sda9): mounted filesystem 05dab3f9-40c2-46d9-a2a2-3da8ed7c4451 r/w with ordered data mode. Quota mode: none. Jan 14 23:42:42.844443 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 23:42:42.845182 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 23:42:42.847963 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 23:42:42.849929 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 23:42:42.860445 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 14 23:42:42.863799 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 23:42:42.863909 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 23:42:42.869625 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (901) Jan 14 23:42:42.869839 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 23:42:42.872687 kernel: BTRFS info (device sda6): first mount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941 Jan 14 23:42:42.872711 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 23:42:42.876680 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 14 23:42:42.882321 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 14 23:42:42.882394 kernel: BTRFS info (device sda6): turning on async discard Jan 14 23:42:42.882421 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 23:42:42.882685 systemd-networkd[760]: eth1: Gained IPv6LL Jan 14 23:42:42.888955 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 23:42:42.936480 coreos-metadata[903]: Jan 14 23:42:42.935 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 14 23:42:42.938437 coreos-metadata[903]: Jan 14 23:42:42.937 INFO Fetch successful Jan 14 23:42:42.941729 coreos-metadata[903]: Jan 14 23:42:42.940 INFO wrote hostname ci-4515-1-0-n-abf6d467b1 to /sysroot/etc/hostname Jan 14 23:42:42.943096 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 23:42:42.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:42.947194 initrd-setup-root[929]: cut: /sysroot/etc/passwd: No such file or directory Jan 14 23:42:42.951658 initrd-setup-root[936]: cut: /sysroot/etc/group: No such file or directory Jan 14 23:42:42.956550 initrd-setup-root[943]: cut: /sysroot/etc/shadow: No such file or directory Jan 14 23:42:42.961337 initrd-setup-root[950]: cut: /sysroot/etc/gshadow: No such file or directory Jan 14 23:42:43.060676 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 23:42:43.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:43.064065 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 23:42:43.066968 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 23:42:43.084521 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 23:42:43.086755 kernel: BTRFS info (device sda6): last unmount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941 Jan 14 23:42:43.108628 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 23:42:43.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:43.125012 ignition[1019]: INFO : Ignition 2.22.0 Jan 14 23:42:43.125012 ignition[1019]: INFO : Stage: mount Jan 14 23:42:43.126255 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 23:42:43.126255 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 23:42:43.128406 ignition[1019]: INFO : mount: mount passed Jan 14 23:42:43.128406 ignition[1019]: INFO : Ignition finished successfully Jan 14 23:42:43.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:43.128690 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 23:42:43.130388 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 23:42:43.846061 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 14 23:42:43.878228 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1030) Jan 14 23:42:43.878306 kernel: BTRFS info (device sda6): first mount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941 Jan 14 23:42:43.878332 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 23:42:43.882634 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 14 23:42:43.882702 kernel: BTRFS info (device sda6): turning on async discard Jan 14 23:42:43.882721 kernel: BTRFS info (device sda6): enabling free space tree Jan 14 23:42:43.885201 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 23:42:43.918339 ignition[1047]: INFO : Ignition 2.22.0 Jan 14 23:42:43.920317 ignition[1047]: INFO : Stage: files Jan 14 23:42:43.920317 ignition[1047]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 23:42:43.920317 ignition[1047]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 23:42:43.920317 ignition[1047]: DEBUG : files: compiled without relabeling support, skipping Jan 14 23:42:43.923262 ignition[1047]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 23:42:43.923262 ignition[1047]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 23:42:43.929742 ignition[1047]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 23:42:43.931091 ignition[1047]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 23:42:43.933024 ignition[1047]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 23:42:43.931717 unknown[1047]: wrote ssh authorized keys file for user: core Jan 14 23:42:43.936944 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 14 23:42:43.936944 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 14 23:42:44.005696 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 23:42:44.086495 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 14 23:42:44.088820 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 23:42:44.088820 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 23:42:44.088820 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 23:42:44.088820 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 23:42:44.088820 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 23:42:44.088820 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 23:42:44.088820 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 23:42:44.088820 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing 
file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 23:42:44.099879 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 23:42:44.099879 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 23:42:44.099879 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 14 23:42:44.099879 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 14 23:42:44.099879 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 14 23:42:44.099879 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 14 23:42:44.512801 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 23:42:46.333512 ignition[1047]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 14 23:42:46.333512 ignition[1047]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 23:42:46.339358 ignition[1047]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 23:42:46.345116 ignition[1047]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 23:42:46.345116 ignition[1047]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 23:42:46.345116 ignition[1047]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 14 23:42:46.353678 ignition[1047]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 14 23:42:46.353678 ignition[1047]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 14 23:42:46.353678 ignition[1047]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 14 23:42:46.353678 ignition[1047]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 14 23:42:46.353678 ignition[1047]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 23:42:46.353678 ignition[1047]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 23:42:46.353678 ignition[1047]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 23:42:46.353678 ignition[1047]: INFO : files: files passed Jan 14 23:42:46.353678 ignition[1047]: INFO : Ignition finished successfully Jan 14 23:42:46.364406 kernel: kauditd_printk_skb: 30 callbacks suppressed Jan 14 23:42:46.364432 kernel: audit: type=1130 audit(1768434166.353:41): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.353135 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 23:42:46.357866 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 23:42:46.362837 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 23:42:46.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.378153 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 23:42:46.383393 kernel: audit: type=1130 audit(1768434166.378:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.383422 kernel: audit: type=1131 audit(1768434166.378:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.378264 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 23:42:46.388328 initrd-setup-root-after-ignition[1079]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 23:42:46.388328 initrd-setup-root-after-ignition[1079]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 23:42:46.391371 initrd-setup-root-after-ignition[1083]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 23:42:46.393767 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 23:42:46.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.395770 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 23:42:46.399310 kernel: audit: type=1130 audit(1768434166.395:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.399751 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 23:42:46.453741 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 23:42:46.455634 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 23:42:46.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.459016 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jan 14 23:42:46.461682 kernel: audit: type=1130 audit(1768434166.455:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.461711 kernel: audit: type=1131 audit(1768434166.458:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.459708 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 23:42:46.462424 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 23:42:46.463425 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 23:42:46.507159 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 23:42:46.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.513662 kernel: audit: type=1130 audit(1768434166.508:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.513809 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 23:42:46.539558 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 23:42:46.540663 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 23:42:46.542133 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 23:42:46.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.542954 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 23:42:46.543629 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 23:42:46.549595 kernel: audit: type=1131 audit(1768434166.543:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.543772 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 23:42:46.547072 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 23:42:46.547810 systemd[1]: Stopped target basic.target - Basic System. Jan 14 23:42:46.548919 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 23:42:46.550346 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 23:42:46.551540 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 23:42:46.552670 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 23:42:46.553747 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 23:42:46.554961 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 14 23:42:46.556313 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 23:42:46.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.557456 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 23:42:46.558551 systemd[1]: Stopped target swap.target - Swaps. Jan 14 23:42:46.564410 kernel: audit: type=1131 audit(1768434166.560:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.559532 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 23:42:46.559698 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 23:42:46.562791 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 23:42:46.564026 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 23:42:46.565103 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 23:42:46.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.566612 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 23:42:46.571431 kernel: audit: type=1131 audit(1768434166.568:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.567391 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 23:42:46.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.567522 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 23:42:46.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.570556 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 23:42:46.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.570708 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 23:42:46.572363 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 23:42:46.572477 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 23:42:46.573397 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 23:42:46.573508 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 23:42:46.575376 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 23:42:46.579948 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 14 23:42:46.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.582133 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 23:42:46.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.582273 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 23:42:46.583625 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 23:42:46.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.583733 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 23:42:46.584772 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 23:42:46.584872 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 23:42:46.597084 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 23:42:46.599380 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 23:42:46.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.611843 ignition[1103]: INFO : Ignition 2.22.0 Jan 14 23:42:46.613018 ignition[1103]: INFO : Stage: umount Jan 14 23:42:46.613018 ignition[1103]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 23:42:46.613018 ignition[1103]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 14 23:42:46.613473 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 23:42:46.617611 ignition[1103]: INFO : umount: umount passed Jan 14 23:42:46.617611 ignition[1103]: INFO : Ignition finished successfully Jan 14 23:42:46.618205 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 23:42:46.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.618326 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 23:42:46.620217 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 23:42:46.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.620330 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 23:42:46.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.621400 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jan 14 23:42:46.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.621445 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 23:42:46.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.622217 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 23:42:46.622268 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 23:42:46.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.623335 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 23:42:46.623380 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 23:42:46.624357 systemd[1]: Stopped target network.target - Network. Jan 14 23:42:46.625257 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 23:42:46.625312 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 23:42:46.626314 systemd[1]: Stopped target paths.target - Path Units. Jan 14 23:42:46.627157 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 23:42:46.630712 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 23:42:46.632281 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 23:42:46.633897 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 23:42:46.635230 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 23:42:46.635297 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 23:42:46.636525 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 23:42:46.636558 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 23:42:46.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.637424 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 23:42:46.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.637448 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 14 23:42:46.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.638405 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 23:42:46.638462 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 23:42:46.639365 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 23:42:46.639408 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 23:42:46.640311 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 23:42:46.640356 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jan 14 23:42:46.641402 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 23:42:46.642431 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 23:42:46.649321 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 23:42:46.649468 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 23:42:46.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.654000 audit: BPF prog-id=6 op=UNLOAD Jan 14 23:42:46.657361 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 23:42:46.657505 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 23:42:46.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.661000 audit: BPF prog-id=9 op=UNLOAD Jan 14 23:42:46.663059 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 14 23:42:46.664511 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 23:42:46.665360 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 23:42:46.667930 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 23:42:46.669133 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 23:42:46.669756 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 23:42:46.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.671501 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 23:42:46.672154 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 23:42:46.673386 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 23:42:46.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.674125 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 23:42:46.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.675663 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 23:42:46.690508 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 23:42:46.690864 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 23:42:46.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.695094 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 23:42:46.695210 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 14 23:42:46.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.696108 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 14 23:42:46.696141 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 23:42:46.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.697232 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 23:42:46.697283 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 23:42:46.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.698558 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 23:42:46.698659 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 23:42:46.701186 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 23:42:46.701236 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 23:42:46.710461 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 23:42:46.712040 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 14 23:42:46.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.712137 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 23:42:46.714331 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 23:42:46.714390 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 23:42:46.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.718231 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 23:42:46.718284 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 23:42:46.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.720740 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 23:42:46.721021 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 23:42:46.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.722646 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 23:42:46.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 23:42:46.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.722725 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:42:46.724880 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 23:42:46.725034 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 23:42:46.733603 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 23:42:46.733780 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 23:42:46.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:46.736400 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 23:42:46.738558 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 23:42:46.760148 systemd[1]: Switching root. Jan 14 23:42:46.806866 systemd-journald[350]: Journal stopped Jan 14 23:42:47.813679 systemd-journald[350]: Received SIGTERM from PID 1 (systemd). Jan 14 23:42:47.813744 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 23:42:47.813757 kernel: SELinux: policy capability open_perms=1 Jan 14 23:42:47.813767 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 23:42:47.813777 kernel: SELinux: policy capability always_check_network=0 Jan 14 23:42:47.813790 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 23:42:47.813803 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 23:42:47.813816 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 23:42:47.813828 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 23:42:47.813841 kernel: SELinux: policy capability userspace_initial_context=0 Jan 14 23:42:47.813851 systemd[1]: Successfully loaded SELinux policy in 64.843ms. Jan 14 23:42:47.813869 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.263ms. Jan 14 23:42:47.813880 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 23:42:47.813892 systemd[1]: Detected virtualization kvm. Jan 14 23:42:47.813903 systemd[1]: Detected architecture arm64. Jan 14 23:42:47.813913 systemd[1]: Detected first boot. Jan 14 23:42:47.813927 systemd[1]: Hostname set to . Jan 14 23:42:47.813938 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 23:42:47.813948 zram_generator::config[1146]: No configuration found. Jan 14 23:42:47.813980 kernel: NET: Registered PF_VSOCK protocol family Jan 14 23:42:47.813992 systemd[1]: Populated /etc with preset unit settings. Jan 14 23:42:47.814003 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 23:42:47.814014 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Jan 14 23:42:47.814024 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 23:42:47.814035 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 23:42:47.814048 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 23:42:47.814062 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 23:42:47.814077 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 23:42:47.814092 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 23:42:47.814108 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 23:42:47.814120 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 23:42:47.814131 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 23:42:47.814145 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 23:42:47.814159 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 23:42:47.814173 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 23:42:47.814186 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 23:42:47.814197 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 14 23:42:47.814208 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 23:42:47.814218 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 14 23:42:47.814233 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 23:42:47.814245 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 23:42:47.814256 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 23:42:47.814266 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 23:42:47.814279 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 23:42:47.814290 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 23:42:47.814302 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 23:42:47.814313 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 23:42:47.814324 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 14 23:42:47.814335 systemd[1]: Reached target slices.target - Slice Units. Jan 14 23:42:47.814346 systemd[1]: Reached target swap.target - Swaps. Jan 14 23:42:47.814357 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 23:42:47.814368 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 23:42:47.814380 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 14 23:42:47.814392 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 23:42:47.814403 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 14 23:42:47.814413 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 23:42:47.814424 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. 
Jan 14 23:42:47.814435 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 14 23:42:47.814446 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 23:42:47.814458 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 23:42:47.814472 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 23:42:47.814483 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 23:42:47.814495 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 23:42:47.814505 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 23:42:47.814516 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 23:42:47.814527 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 23:42:47.814537 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 23:42:47.814550 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 23:42:47.814561 systemd[1]: Reached target machines.target - Containers. Jan 14 23:42:47.814571 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 23:42:47.819695 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 23:42:47.819736 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 23:42:47.819748 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 23:42:47.819765 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 23:42:47.819777 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 23:42:47.819787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 23:42:47.819799 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 14 23:42:47.819809 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 23:42:47.819821 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 23:42:47.819832 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 23:42:47.819844 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 23:42:47.819855 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 23:42:47.819866 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 23:42:47.819881 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 23:42:47.819892 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 23:42:47.819904 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 23:42:47.819915 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 23:42:47.819931 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 23:42:47.819942 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... 
Jan 14 23:42:47.819953 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 23:42:47.820005 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 23:42:47.820019 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 23:42:47.820031 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 23:42:47.820042 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 23:42:47.820053 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 23:42:47.820066 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 23:42:47.820077 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 23:42:47.820088 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 23:42:47.820099 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 23:42:47.820110 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 23:42:47.820120 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 23:42:47.820132 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 23:42:47.820143 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 23:42:47.820154 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 23:42:47.820164 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 23:42:47.820175 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 23:42:47.820186 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 23:42:47.820197 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 23:42:47.820209 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 23:42:47.820220 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 14 23:42:47.820231 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 23:42:47.820242 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 14 23:42:47.820253 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 23:42:47.820264 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 23:42:47.820275 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 14 23:42:47.820287 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 23:42:47.820299 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 23:42:47.820342 systemd-journald[1218]: Collecting audit messages is enabled. Jan 14 23:42:47.820372 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 23:42:47.820386 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 23:42:47.820400 kernel: fuse: init (API version 7.41) Jan 14 23:42:47.820416 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 14 23:42:47.820434 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 23:42:47.820449 systemd-journald[1218]: Journal started Jan 14 23:42:47.820474 systemd-journald[1218]: Runtime Journal (/run/log/journal/39acadcf220c455d9ce52f14cdaf9739) is 8M, max 76.5M, 68.5M free. Jan 14 23:42:47.823995 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 23:42:47.578000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 14 23:42:47.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.692000 audit: BPF prog-id=14 op=UNLOAD Jan 14 23:42:47.692000 audit: BPF prog-id=13 op=UNLOAD Jan 14 23:42:47.694000 audit: BPF prog-id=15 op=LOAD Jan 14 23:42:47.694000 audit: BPF prog-id=16 op=LOAD Jan 14 23:42:47.694000 audit: BPF prog-id=17 op=LOAD Jan 14 23:42:47.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:42:47.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.806000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 14 23:42:47.806000 audit[1218]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffcb3dc8a0 a2=4000 a3=0 items=0 ppid=1 pid=1218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:42:47.806000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 14 23:42:47.514616 systemd[1]: Queued start job for default target multi-user.target. Jan 14 23:42:47.526899 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 14 23:42:47.527377 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 23:42:47.835298 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 23:42:47.837603 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 23:42:47.841860 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 23:42:47.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.841764 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 23:42:47.843154 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 23:42:47.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 23:42:47.848120 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 23:42:47.857774 kernel: ACPI: bus type drm_connector registered Jan 14 23:42:47.862849 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 23:42:47.865834 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 23:42:47.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.872122 kernel: loop1: detected capacity change from 0 to 109872 Jan 14 23:42:47.876742 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 23:42:47.880833 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 23:42:47.889833 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 14 23:42:47.909654 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 23:42:47.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.913840 systemd-journald[1218]: Time spent on flushing to /var/log/journal/39acadcf220c455d9ce52f14cdaf9739 is 39.067ms for 1302 entries. Jan 14 23:42:47.913840 systemd-journald[1218]: System Journal (/var/log/journal/39acadcf220c455d9ce52f14cdaf9739) is 8M, max 588.1M, 580.1M free. Jan 14 23:42:47.968848 systemd-journald[1218]: Received client request to flush runtime journal. Jan 14 23:42:47.968914 kernel: loop2: detected capacity change from 0 to 100192 Jan 14 23:42:47.968938 kernel: loop3: detected capacity change from 0 to 207008 Jan 14 23:42:47.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.917352 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jan 14 23:42:47.917362 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jan 14 23:42:47.929223 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 23:42:47.938548 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 23:42:47.970484 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 23:42:47.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.978144 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 23:42:47.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:42:47.981409 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 14 23:42:47.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.994579 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 23:42:47.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:47.995000 audit: BPF prog-id=18 op=LOAD Jan 14 23:42:47.996000 audit: BPF prog-id=19 op=LOAD Jan 14 23:42:47.996000 audit: BPF prog-id=20 op=LOAD Jan 14 23:42:48.000354 kernel: loop4: detected capacity change from 0 to 8 Jan 14 23:42:47.999000 audit: BPF prog-id=21 op=LOAD Jan 14 23:42:47.998845 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 14 23:42:48.001758 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 23:42:48.006414 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 23:42:48.015000 audit: BPF prog-id=22 op=LOAD Jan 14 23:42:48.015000 audit: BPF prog-id=23 op=LOAD Jan 14 23:42:48.015000 audit: BPF prog-id=24 op=LOAD Jan 14 23:42:48.019527 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 14 23:42:48.024739 kernel: loop5: detected capacity change from 0 to 109872 Jan 14 23:42:48.023000 audit: BPF prog-id=25 op=LOAD Jan 14 23:42:48.023000 audit: BPF prog-id=26 op=LOAD Jan 14 23:42:48.023000 audit: BPF prog-id=27 op=LOAD Jan 14 23:42:48.025024 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 14 23:42:48.039983 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. Jan 14 23:42:48.040282 systemd-tmpfiles[1290]: ACLs are not supported, ignoring. Jan 14 23:42:48.051122 kernel: loop6: detected capacity change from 0 to 100192 Jan 14 23:42:48.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.049776 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 23:42:48.060607 kernel: loop7: detected capacity change from 0 to 207008 Jan 14 23:42:48.077329 systemd-nsresourced[1291]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 14 23:42:48.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.081105 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 14 23:42:48.083691 kernel: loop1: detected capacity change from 0 to 8 Jan 14 23:42:48.085994 (sd-merge)[1292]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-hetzner.raw'. Jan 14 23:42:48.091035 (sd-merge)[1292]: Merged extensions into '/usr'. Jan 14 23:42:48.105778 systemd[1]: Reload requested from client PID 1246 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 23:42:48.105795 systemd[1]: Reloading... 
Jan 14 23:42:48.214609 zram_generator::config[1340]: No configuration found. Jan 14 23:42:48.272374 systemd-oomd[1286]: No swap; memory pressure usage will be degraded Jan 14 23:42:48.295474 systemd-resolved[1288]: Positive Trust Anchors: Jan 14 23:42:48.295926 systemd-resolved[1288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 23:42:48.296035 systemd-resolved[1288]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 23:42:48.296070 systemd-resolved[1288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 23:42:48.310920 systemd-resolved[1288]: Using system hostname 'ci-4515-1-0-n-abf6d467b1'. Jan 14 23:42:48.440571 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 23:42:48.440879 systemd[1]: Reloading finished in 334 ms. Jan 14 23:42:48.455577 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 23:42:48.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.457894 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 23:42:48.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.459231 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 23:42:48.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.460389 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 23:42:48.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.464342 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 23:42:48.477000 audit: BPF prog-id=28 op=LOAD Jan 14 23:42:48.477000 audit: BPF prog-id=18 op=UNLOAD Jan 14 23:42:48.477000 audit: BPF prog-id=29 op=LOAD Jan 14 23:42:48.477000 audit: BPF prog-id=30 op=LOAD Jan 14 23:42:48.477000 audit: BPF prog-id=19 op=UNLOAD Jan 14 23:42:48.477000 audit: BPF prog-id=20 op=UNLOAD Jan 14 23:42:48.478000 audit: BPF prog-id=31 op=LOAD Jan 14 23:42:48.473778 systemd[1]: Starting ensure-sysext.service... Jan 14 23:42:48.476761 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 14 23:42:48.479000 audit: BPF prog-id=25 op=UNLOAD Jan 14 23:42:48.480000 audit: BPF prog-id=32 op=LOAD Jan 14 23:42:48.480000 audit: BPF prog-id=33 op=LOAD Jan 14 23:42:48.480000 audit: BPF prog-id=26 op=UNLOAD Jan 14 23:42:48.480000 audit: BPF prog-id=27 op=UNLOAD Jan 14 23:42:48.482000 audit: BPF prog-id=34 op=LOAD Jan 14 23:42:48.482000 audit: BPF prog-id=21 op=UNLOAD Jan 14 23:42:48.482000 audit: BPF prog-id=35 op=LOAD Jan 14 23:42:48.482000 audit: BPF prog-id=22 op=UNLOAD Jan 14 23:42:48.483000 audit: BPF prog-id=36 op=LOAD Jan 14 23:42:48.483000 audit: BPF prog-id=37 op=LOAD Jan 14 23:42:48.483000 audit: BPF prog-id=23 op=UNLOAD Jan 14 23:42:48.484000 audit: BPF prog-id=24 op=UNLOAD Jan 14 23:42:48.484000 audit: BPF prog-id=38 op=LOAD Jan 14 23:42:48.488000 audit: BPF prog-id=15 op=UNLOAD Jan 14 23:42:48.488000 audit: BPF prog-id=39 op=LOAD Jan 14 23:42:48.488000 audit: BPF prog-id=40 op=LOAD Jan 14 23:42:48.488000 audit: BPF prog-id=16 op=UNLOAD Jan 14 23:42:48.488000 audit: BPF prog-id=17 op=UNLOAD Jan 14 23:42:48.505069 systemd[1]: Reload requested from client PID 1373 ('systemctl') (unit ensure-sysext.service)... Jan 14 23:42:48.505087 systemd[1]: Reloading... Jan 14 23:42:48.506808 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 23:42:48.506840 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 23:42:48.507089 systemd-tmpfiles[1374]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 14 23:42:48.508082 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Jan 14 23:42:48.508137 systemd-tmpfiles[1374]: ACLs are not supported, ignoring. Jan 14 23:42:48.519181 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 23:42:48.519195 systemd-tmpfiles[1374]: Skipping /boot Jan 14 23:42:48.527245 systemd-tmpfiles[1374]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 23:42:48.527260 systemd-tmpfiles[1374]: Skipping /boot Jan 14 23:42:48.585661 zram_generator::config[1409]: No configuration found. Jan 14 23:42:48.758722 systemd[1]: Reloading finished in 253 ms. Jan 14 23:42:48.774161 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 23:42:48.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:42:48.776000 audit: BPF prog-id=41 op=LOAD Jan 14 23:42:48.776000 audit: BPF prog-id=28 op=UNLOAD Jan 14 23:42:48.776000 audit: BPF prog-id=42 op=LOAD Jan 14 23:42:48.776000 audit: BPF prog-id=43 op=LOAD Jan 14 23:42:48.776000 audit: BPF prog-id=29 op=UNLOAD Jan 14 23:42:48.776000 audit: BPF prog-id=30 op=UNLOAD Jan 14 23:42:48.777000 audit: BPF prog-id=44 op=LOAD Jan 14 23:42:48.777000 audit: BPF prog-id=38 op=UNLOAD Jan 14 23:42:48.777000 audit: BPF prog-id=45 op=LOAD Jan 14 23:42:48.777000 audit: BPF prog-id=46 op=LOAD Jan 14 23:42:48.777000 audit: BPF prog-id=39 op=UNLOAD Jan 14 23:42:48.778000 audit: BPF prog-id=40 op=UNLOAD Jan 14 23:42:48.778000 audit: BPF prog-id=47 op=LOAD Jan 14 23:42:48.778000 audit: BPF prog-id=35 op=UNLOAD Jan 14 23:42:48.778000 audit: BPF prog-id=48 op=LOAD Jan 14 23:42:48.778000 audit: BPF prog-id=49 op=LOAD Jan 14 23:42:48.778000 audit: BPF prog-id=36 op=UNLOAD Jan 14 23:42:48.778000 audit: BPF prog-id=37 op=UNLOAD Jan 14 23:42:48.779000 audit: BPF prog-id=50 op=LOAD Jan 14 23:42:48.779000 audit: BPF prog-id=34 op=UNLOAD Jan 14 23:42:48.780000 audit: BPF prog-id=51 op=LOAD Jan 14 23:42:48.780000 audit: BPF prog-id=31 op=UNLOAD Jan 14 23:42:48.780000 audit: BPF prog-id=52 op=LOAD Jan 14 23:42:48.780000 audit: BPF prog-id=53 op=LOAD Jan 14 23:42:48.780000 audit: BPF prog-id=32 op=UNLOAD Jan 14 23:42:48.780000 audit: BPF prog-id=33 op=UNLOAD Jan 14 23:42:48.782572 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 23:42:48.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.794739 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 23:42:48.796540 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 23:42:48.800814 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 23:42:48.803769 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 23:42:48.810694 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 23:42:48.815866 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 23:42:48.816000 audit: BPF prog-id=8 op=UNLOAD Jan 14 23:42:48.816000 audit: BPF prog-id=7 op=UNLOAD Jan 14 23:42:48.816000 audit: BPF prog-id=54 op=LOAD Jan 14 23:42:48.818000 audit: BPF prog-id=55 op=LOAD Jan 14 23:42:48.820088 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 23:42:48.824285 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 23:42:48.828043 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 23:42:48.829921 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 23:42:48.843106 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 23:42:48.847820 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 23:42:48.854267 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 23:42:48.859190 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 14 23:42:48.860014 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 23:42:48.860233 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 23:42:48.860331 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 23:42:48.862000 audit[1455]: SYSTEM_BOOT pid=1455 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.867001 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 23:42:48.867233 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 23:42:48.867415 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 23:42:48.867561 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 23:42:48.872400 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 23:42:48.881234 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 23:42:48.882894 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 23:42:48.883112 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 23:42:48.883207 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 23:42:48.884773 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 23:42:48.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.896823 systemd[1]: Finished ensure-sysext.service. Jan 14 23:42:48.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.898869 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 23:42:48.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.905109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 14 23:42:48.905675 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 23:42:48.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.915248 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 23:42:48.917000 audit: BPF prog-id=56 op=LOAD Jan 14 23:42:48.923495 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 14 23:42:48.926638 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 23:42:48.926892 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 23:42:48.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.928138 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 23:42:48.929278 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 23:42:48.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.938180 systemd-udevd[1452]: Using default interface naming scheme 'v257'. Jan 14 23:42:48.940517 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 23:42:48.941424 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 23:42:48.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:42:48.945014 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 14 23:42:48.979000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 23:42:48.979000 audit[1488]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffccc99cc0 a2=420 a3=0 items=0 ppid=1447 pid=1488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:42:48.979000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 23:42:48.980503 augenrules[1488]: No rules Jan 14 23:42:48.981712 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 23:42:48.985875 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 23:42:48.987224 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 23:42:48.987715 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 23:42:49.001541 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 23:42:49.007050 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 23:42:49.077105 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 14 23:42:49.079756 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 23:42:49.102191 systemd-networkd[1498]: lo: Link UP Jan 14 23:42:49.102201 systemd-networkd[1498]: lo: Gained carrier Jan 14 23:42:49.107188 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 23:42:49.108201 systemd[1]: Reached target network.target - Network. Jan 14 23:42:49.111463 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 23:42:49.114842 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 14 23:42:49.118414 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 14 23:42:49.154241 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 23:42:49.244834 systemd-networkd[1498]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:42:49.245011 systemd-networkd[1498]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 23:42:49.247157 systemd-networkd[1498]: eth0: Link UP Jan 14 23:42:49.248869 systemd-networkd[1498]: eth0: Gained carrier Jan 14 23:42:49.249257 systemd-networkd[1498]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:42:49.261354 systemd-networkd[1498]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:42:49.261579 systemd-networkd[1498]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 14 23:42:49.263916 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 23:42:49.264301 systemd-networkd[1498]: eth1: Link UP Jan 14 23:42:49.265878 systemd-networkd[1498]: eth1: Gained carrier Jan 14 23:42:49.266144 systemd-networkd[1498]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:42:49.298841 systemd-networkd[1498]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 14 23:42:49.300118 systemd-timesyncd[1472]: Network configuration changed, trying to establish connection. Jan 14 23:42:49.320685 systemd-networkd[1498]: eth0: DHCPv4 address 49.13.216.16/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 14 23:42:49.321445 systemd-timesyncd[1472]: Network configuration changed, trying to establish connection. Jan 14 23:42:49.384937 ldconfig[1449]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 23:42:49.391085 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 14 23:42:49.391139 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 14 23:42:49.391171 kernel: [drm] features: -context_init Jan 14 23:42:49.391973 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 23:42:49.396413 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 23:42:49.402564 kernel: [drm] number of scanouts: 1 Jan 14 23:42:49.402683 kernel: [drm] number of cap sets: 0 Jan 14 23:42:49.402698 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 14 23:42:49.414353 kernel: Console: switching to colour frame buffer device 160x50 Jan 14 23:42:49.422259 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 14 23:42:49.424644 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 23:42:49.432613 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 14 23:42:49.433253 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 14 23:42:49.435701 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 23:42:49.436975 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 23:42:49.439504 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 23:42:49.443891 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 23:42:49.444615 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 23:42:49.444718 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 23:42:49.447858 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 23:42:49.448524 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 14 23:42:49.448564 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 23:42:49.489215 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 23:42:49.489676 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 23:42:49.491117 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 23:42:49.491335 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 23:42:49.492313 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 23:42:49.495655 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 23:42:49.495925 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 23:42:49.497520 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 23:42:49.498565 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 23:42:49.501744 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 23:42:49.502614 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 23:42:49.503300 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 23:42:49.504732 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 23:42:49.505506 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 14 23:42:49.507024 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 23:42:49.507686 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 23:42:49.507716 systemd[1]: Reached target paths.target - Path Units. Jan 14 23:42:49.508197 systemd[1]: Reached target timers.target - Timer Units. Jan 14 23:42:49.511065 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 23:42:49.513853 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 23:42:49.516297 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 23:42:49.517866 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 23:42:49.518560 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 23:42:49.523675 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 23:42:49.524705 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 23:42:49.525619 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 23:42:49.526443 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 23:42:49.527697 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 23:42:49.530317 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 23:42:49.531266 systemd[1]: Reached target basic.target - Basic System. 
Jan 14 23:42:49.532772 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 23:42:49.532803 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 23:42:49.534072 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 23:42:49.537563 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 14 23:42:49.540628 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 23:42:49.545895 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 23:42:49.552708 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 23:42:49.556675 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 23:42:49.557293 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 23:42:49.579834 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 23:42:49.582663 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 23:42:49.586913 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 14 23:42:49.596282 jq[1575]: false Jan 14 23:42:49.596787 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 23:42:49.601248 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 23:42:49.605362 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 23:42:49.607685 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 23:42:49.608236 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 23:42:49.611024 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 23:42:49.617494 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 23:42:49.627787 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 23:42:49.630169 extend-filesystems[1576]: Found /dev/sda6 Jan 14 23:42:49.628807 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 14 23:42:49.629654 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 23:42:49.648155 extend-filesystems[1576]: Found /dev/sda9 Jan 14 23:42:49.665619 extend-filesystems[1576]: Checking size of /dev/sda9 Jan 14 23:42:49.667473 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 23:42:49.668228 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 14 23:42:49.670764 jq[1589]: true Jan 14 23:42:49.692763 coreos-metadata[1572]: Jan 14 23:42:49.691 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 14 23:42:49.695483 tar[1594]: linux-arm64/LICENSE Jan 14 23:42:49.695483 tar[1594]: linux-arm64/helm Jan 14 23:42:49.718627 coreos-metadata[1572]: Jan 14 23:42:49.711 INFO Fetch successful Jan 14 23:42:49.718627 coreos-metadata[1572]: Jan 14 23:42:49.712 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 14 23:42:49.718627 coreos-metadata[1572]: Jan 14 23:42:49.712 INFO Fetch successful Jan 14 23:42:49.730393 extend-filesystems[1576]: Resized partition /dev/sda9 Jan 14 23:42:49.731575 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 23:42:49.742170 dbus-daemon[1573]: [system] SELinux support is enabled Jan 14 23:42:49.742858 extend-filesystems[1623]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 23:42:49.746381 jq[1611]: true Jan 14 23:42:49.731894 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 23:42:49.742505 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 23:42:49.746048 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 14 23:42:49.746251 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 23:42:49.747535 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 23:42:49.747655 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 23:42:49.760750 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 8410107 blocks Jan 14 23:42:49.760825 update_engine[1588]: I20260114 23:42:49.757480 1588 main.cc:92] Flatcar Update Engine starting Jan 14 23:42:49.769259 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 23:42:49.777126 systemd[1]: Started update-engine.service - Update Engine. Jan 14 23:42:49.780687 update_engine[1588]: I20260114 23:42:49.779760 1588 update_check_scheduler.cc:74] Next update check in 9m1s Jan 14 23:42:49.792907 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 23:42:49.819031 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 23:42:49.819769 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:42:49.840161 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 23:42:49.881127 kernel: EXT4-fs (sda9): resized filesystem to 8410107 Jan 14 23:42:49.881423 bash[1652]: Updated "/home/core/.ssh/authorized_keys" Jan 14 23:42:49.886160 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 23:42:49.893681 systemd[1]: Starting sshkeys.service... Jan 14 23:42:49.900255 extend-filesystems[1623]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 14 23:42:49.900255 extend-filesystems[1623]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 14 23:42:49.900255 extend-filesystems[1623]: The filesystem on /dev/sda9 is now 8410107 (4k) blocks long. 
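The resize above grows /dev/sda9 from 1617920 to 8410107 blocks of 4 KiB during an on-line resize2fs run. Plain arithmetic on the figures from the log, with no other assumptions:

BLOCK_SIZE = 4096          # "(4k) blocks" in the resize2fs output
OLD_BLOCKS = 1_617_920     # from the EXT4-fs "resizing filesystem" kernel line
NEW_BLOCKS = 8_410_107     # from "resized filesystem to 8410107"

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB")  # ~6.17 GiB
print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")  # ~32.08 GiB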
Jan 14 23:42:49.908193 extend-filesystems[1576]: Resized filesystem in /dev/sda9 Jan 14 23:42:49.904338 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 23:42:49.905143 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 23:42:49.956197 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 23:42:49.957216 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 23:42:49.973100 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 14 23:42:49.978110 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 14 23:42:49.999696 containerd[1607]: time="2026-01-14T23:42:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 23:42:49.999696 containerd[1607]: time="2026-01-14T23:42:49.998988400Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 23:42:50.076621 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:42:50.083170 containerd[1607]: time="2026-01-14T23:42:50.083033000Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.28µs" Jan 14 23:42:50.083170 containerd[1607]: time="2026-01-14T23:42:50.083073280Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 23:42:50.083170 containerd[1607]: time="2026-01-14T23:42:50.083123280Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 23:42:50.083170 containerd[1607]: time="2026-01-14T23:42:50.083135200Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 23:42:50.083325 containerd[1607]: time="2026-01-14T23:42:50.083285200Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 23:42:50.083325 containerd[1607]: time="2026-01-14T23:42:50.083302040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 23:42:50.083370 containerd[1607]: time="2026-01-14T23:42:50.083358800Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 23:42:50.083388 containerd[1607]: time="2026-01-14T23:42:50.083371520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 23:42:50.083684 containerd[1607]: time="2026-01-14T23:42:50.083656200Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 23:42:50.083684 containerd[1607]: time="2026-01-14T23:42:50.083678320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 23:42:50.083742 containerd[1607]: time="2026-01-14T23:42:50.083690760Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 23:42:50.083742 containerd[1607]: time="2026-01-14T23:42:50.083701320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 23:42:50.083874 containerd[1607]: time="2026-01-14T23:42:50.083850920Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 23:42:50.083874 containerd[1607]: time="2026-01-14T23:42:50.083869520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 23:42:50.083997 containerd[1607]: time="2026-01-14T23:42:50.083936040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 14 23:42:50.084205 containerd[1607]: time="2026-01-14T23:42:50.084179640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 23:42:50.084244 containerd[1607]: time="2026-01-14T23:42:50.084216160Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 23:42:50.084244 containerd[1607]: time="2026-01-14T23:42:50.084226360Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 23:42:50.084333 containerd[1607]: time="2026-01-14T23:42:50.084255000Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 23:42:50.084564 containerd[1607]: time="2026-01-14T23:42:50.084541920Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 23:42:50.093016 containerd[1607]: time="2026-01-14T23:42:50.092674920Z" level=info msg="metadata content store policy set" policy=shared Jan 14 23:42:50.110070 containerd[1607]: time="2026-01-14T23:42:50.106429720Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 23:42:50.110070 containerd[1607]: time="2026-01-14T23:42:50.106502800Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 23:42:50.110070 containerd[1607]: time="2026-01-14T23:42:50.106622560Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 23:42:50.110070 containerd[1607]: time="2026-01-14T23:42:50.106638960Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 23:42:50.110070 containerd[1607]: time="2026-01-14T23:42:50.106653520Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 23:42:50.110070 containerd[1607]: time="2026-01-14T23:42:50.106665400Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 23:42:50.110070 containerd[1607]: time="2026-01-14T23:42:50.106676960Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 23:42:50.110070 containerd[1607]: time="2026-01-14T23:42:50.106687280Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service 
type=io.containerd.service.v1 Jan 14 23:42:50.110070 containerd[1607]: time="2026-01-14T23:42:50.106698920Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 23:42:50.110070 containerd[1607]: time="2026-01-14T23:42:50.106712520Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 23:42:50.110070 containerd[1607]: time="2026-01-14T23:42:50.106723640Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 23:42:50.110070 containerd[1607]: time="2026-01-14T23:42:50.106739920Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 23:42:50.110070 containerd[1607]: time="2026-01-14T23:42:50.106755080Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 23:42:50.110070 containerd[1607]: time="2026-01-14T23:42:50.106772280Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 23:42:50.110389 coreos-metadata[1669]: Jan 14 23:42:50.110 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 14 23:42:50.110634 containerd[1607]: time="2026-01-14T23:42:50.106919600Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 23:42:50.110634 containerd[1607]: time="2026-01-14T23:42:50.106956920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 23:42:50.110634 containerd[1607]: time="2026-01-14T23:42:50.106974680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 23:42:50.110634 containerd[1607]: time="2026-01-14T23:42:50.106985080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 23:42:50.110634 containerd[1607]: time="2026-01-14T23:42:50.107002560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 23:42:50.110634 containerd[1607]: time="2026-01-14T23:42:50.107012800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 23:42:50.110634 containerd[1607]: time="2026-01-14T23:42:50.107024400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 23:42:50.110634 containerd[1607]: time="2026-01-14T23:42:50.107040560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 23:42:50.110634 containerd[1607]: time="2026-01-14T23:42:50.107058280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 23:42:50.110634 containerd[1607]: time="2026-01-14T23:42:50.107070600Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 23:42:50.110634 containerd[1607]: time="2026-01-14T23:42:50.107081040Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 23:42:50.110634 containerd[1607]: time="2026-01-14T23:42:50.107108440Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 23:42:50.110634 containerd[1607]: time="2026-01-14T23:42:50.107147440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for 
snapshotter \"overlayfs\"" Jan 14 23:42:50.110634 containerd[1607]: time="2026-01-14T23:42:50.107161440Z" level=info msg="Start snapshots syncer" Jan 14 23:42:50.110634 containerd[1607]: time="2026-01-14T23:42:50.107178840Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 23:42:50.110882 containerd[1607]: time="2026-01-14T23:42:50.107539760Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 23:42:50.112043 containerd[1607]: time="2026-01-14T23:42:50.111641640Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 23:42:50.112899 containerd[1607]: time="2026-01-14T23:42:50.112343840Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 23:42:50.112899 containerd[1607]: time="2026-01-14T23:42:50.112522480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 23:42:50.112899 containerd[1607]: time="2026-01-14T23:42:50.112555160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 23:42:50.112899 containerd[1607]: time="2026-01-14T23:42:50.112566800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 23:42:50.116606 containerd[1607]: time="2026-01-14T23:42:50.112577200Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 23:42:50.116606 containerd[1607]: time="2026-01-14T23:42:50.113657280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 23:42:50.116606 containerd[1607]: time="2026-01-14T23:42:50.113703360Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 23:42:50.116606 containerd[1607]: time="2026-01-14T23:42:50.113719960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 23:42:50.116606 containerd[1607]: time="2026-01-14T23:42:50.113731800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 23:42:50.116606 containerd[1607]: time="2026-01-14T23:42:50.113744840Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 23:42:50.116606 containerd[1607]: time="2026-01-14T23:42:50.113804000Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 23:42:50.116606 containerd[1607]: time="2026-01-14T23:42:50.113822880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 23:42:50.116606 containerd[1607]: time="2026-01-14T23:42:50.113834040Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 23:42:50.116606 containerd[1607]: time="2026-01-14T23:42:50.113855280Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 23:42:50.116606 containerd[1607]: time="2026-01-14T23:42:50.113864640Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 23:42:50.116606 containerd[1607]: time="2026-01-14T23:42:50.113888640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 23:42:50.116606 containerd[1607]: time="2026-01-14T23:42:50.113902840Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 23:42:50.119247 coreos-metadata[1669]: Jan 14 23:42:50.114 INFO Fetch successful Jan 14 23:42:50.117752 unknown[1669]: wrote ssh authorized keys file for user: core Jan 14 23:42:50.119487 containerd[1607]: time="2026-01-14T23:42:50.116776560Z" level=info msg="runtime interface created" Jan 14 23:42:50.119487 containerd[1607]: time="2026-01-14T23:42:50.116800640Z" level=info msg="created NRI interface" Jan 14 23:42:50.119487 containerd[1607]: time="2026-01-14T23:42:50.116816760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 23:42:50.119487 containerd[1607]: time="2026-01-14T23:42:50.118499120Z" level=info msg="Connect containerd service" Jan 14 23:42:50.122597 containerd[1607]: time="2026-01-14T23:42:50.119609040Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 23:42:50.126613 containerd[1607]: time="2026-01-14T23:42:50.126031520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 23:42:50.135284 systemd-logind[1587]: New seat seat0. 
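The containerd CRI plugin above reports "no network config found in /etc/cni/net.d: cni plugin not initialized"; that is expected this early in boot, since nothing has installed a CNI configuration yet. A small sketch of the same check, assuming the usual .conf/.conflist/.json naming; the directory comes from the cri config in the log, the rest is illustrative rather than containerd's own logic.

from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")  # confDir from the cri plugin config above

def cni_configs(conf_dir: Path = CNI_CONF_DIR) -> list[Path]:
    """List CNI network configs; an empty list matches the 'not initialized' warning."""
    if not conf_dir.is_dir():
        return []
    return sorted(p for p in conf_dir.iterdir()
                  if p.suffix in {".conf", ".conflist", ".json"})

found = cni_configs()
print(found if found else "no network config found in /etc/cni/net.d")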
Jan 14 23:42:50.155324 systemd-logind[1587]: Watching system buttons on /dev/input/event0 (Power Button) Jan 14 23:42:50.155348 systemd-logind[1587]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 14 23:42:50.155646 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 23:42:50.199483 update-ssh-keys[1678]: Updated "/home/core/.ssh/authorized_keys" Jan 14 23:42:50.202651 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 14 23:42:50.210155 systemd[1]: Finished sshkeys.service. Jan 14 23:42:50.313559 locksmithd[1630]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 23:42:50.338622 containerd[1607]: time="2026-01-14T23:42:50.338501400Z" level=info msg="Start subscribing containerd event" Jan 14 23:42:50.339611 containerd[1607]: time="2026-01-14T23:42:50.338826640Z" level=info msg="Start recovering state" Jan 14 23:42:50.339611 containerd[1607]: time="2026-01-14T23:42:50.338936800Z" level=info msg="Start event monitor" Jan 14 23:42:50.339611 containerd[1607]: time="2026-01-14T23:42:50.338971800Z" level=info msg="Start cni network conf syncer for default" Jan 14 23:42:50.339611 containerd[1607]: time="2026-01-14T23:42:50.338980320Z" level=info msg="Start streaming server" Jan 14 23:42:50.339611 containerd[1607]: time="2026-01-14T23:42:50.338991040Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 23:42:50.339611 containerd[1607]: time="2026-01-14T23:42:50.338998200Z" level=info msg="runtime interface starting up..." Jan 14 23:42:50.339611 containerd[1607]: time="2026-01-14T23:42:50.339003920Z" level=info msg="starting plugins..." Jan 14 23:42:50.339611 containerd[1607]: time="2026-01-14T23:42:50.339018680Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 23:42:50.341082 containerd[1607]: time="2026-01-14T23:42:50.341055160Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 23:42:50.341234 containerd[1607]: time="2026-01-14T23:42:50.341212080Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 23:42:50.342732 containerd[1607]: time="2026-01-14T23:42:50.341552840Z" level=info msg="containerd successfully booted in 0.346580s" Jan 14 23:42:50.341779 systemd[1]: Started containerd.service - containerd container runtime. Jan 14 23:42:50.452654 tar[1594]: linux-arm64/README.md Jan 14 23:42:50.473647 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 23:42:50.881833 systemd-networkd[1498]: eth1: Gained IPv6LL Jan 14 23:42:50.882875 systemd-timesyncd[1472]: Network configuration changed, trying to establish connection. Jan 14 23:42:50.888876 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 23:42:50.890956 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 23:42:50.895799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:42:50.897915 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 23:42:50.938774 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 23:42:51.010697 systemd-networkd[1498]: eth0: Gained IPv6LL Jan 14 23:42:51.011674 systemd-timesyncd[1472]: Network configuration changed, trying to establish connection. 
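containerd reports serving on /run/containerd/containerd.sock and the companion .ttrpc socket before boot continues. A tiny sketch that verifies those paths really are unix sockets before a client connects; the paths come from the log, the check itself is only illustrative.

import os
import stat

SOCKETS = (
    "/run/containerd/containerd.sock",        # gRPC address from the log
    "/run/containerd/containerd.sock.ttrpc",  # ttrpc address from the log
)

for path in SOCKETS:
    try:
        mode = os.stat(path).st_mode
    except FileNotFoundError:
        print(f"{path}: missing (containerd not running yet?)")
        continue
    print(f"{path}: {'unix socket' if stat.S_ISSOCK(mode) else 'not a socket'}")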
Jan 14 23:42:51.248896 sshd_keygen[1617]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 23:42:51.279705 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 23:42:51.284965 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 23:42:51.304231 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 23:42:51.304515 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 23:42:51.308208 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 23:42:51.327944 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 23:42:51.330720 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 23:42:51.334078 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 14 23:42:51.335825 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 23:42:51.678405 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:42:51.680615 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 23:42:51.682340 systemd[1]: Startup finished in 1.770s (kernel) + 6.833s (initrd) + 4.771s (userspace) = 13.375s. Jan 14 23:42:51.692059 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:42:52.174225 kubelet[1734]: E0114 23:42:52.174147 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:42:52.178881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:42:52.179463 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 23:42:52.181676 systemd[1]: kubelet.service: Consumed 853ms CPU time, 256.2M memory peak. Jan 14 23:43:02.429731 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 23:43:02.433804 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:43:02.599491 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:43:02.611124 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:43:02.674686 kubelet[1753]: E0114 23:43:02.674639 1753 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:43:02.678972 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:43:02.679195 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 23:43:02.680103 systemd[1]: kubelet.service: Consumed 191ms CPU time, 106.1M memory peak. Jan 14 23:43:12.930418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 23:43:12.933211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:43:13.079855 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
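From here on the kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist, and systemd keeps scheduling restarts roughly every ten seconds; that file is normally written later by kubeadm or an equivalent provisioner. A minimal pre-flight sketch for this specific failure; the path is taken from the error message, the check is illustrative.

from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the kubelet error above

if KUBELET_CONFIG.is_file():
    print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes)")
else:
    # Matches the failure in the log: kubelet cannot load its config file and exits.
    print(f"{KUBELET_CONFIG} missing - kubelet will keep failing until it is written")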
Jan 14 23:43:13.093208 (kubelet)[1768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:43:13.136507 kubelet[1768]: E0114 23:43:13.136438 1768 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:43:13.140121 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:43:13.140415 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 23:43:13.141837 systemd[1]: kubelet.service: Consumed 160ms CPU time, 107.2M memory peak. Jan 14 23:43:19.767782 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 23:43:19.770861 systemd[1]: Started sshd@0-49.13.216.16:22-68.220.241.50:44374.service - OpenSSH per-connection server daemon (68.220.241.50:44374). Jan 14 23:43:20.345660 sshd[1776]: Accepted publickey for core from 68.220.241.50 port 44374 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:43:20.347897 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:43:20.357462 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 23:43:20.358692 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 23:43:20.365423 systemd-logind[1587]: New session 1 of user core. Jan 14 23:43:20.382228 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 23:43:20.387759 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 23:43:20.405520 (systemd)[1781]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 14 23:43:20.409226 systemd-logind[1587]: New session c1 of user core. Jan 14 23:43:20.553954 systemd[1781]: Queued start job for default target default.target. Jan 14 23:43:20.565261 systemd[1781]: Created slice app.slice - User Application Slice. Jan 14 23:43:20.565312 systemd[1781]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 23:43:20.565333 systemd[1781]: Reached target paths.target - Paths. Jan 14 23:43:20.565400 systemd[1781]: Reached target timers.target - Timers. Jan 14 23:43:20.567263 systemd[1781]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 23:43:20.570790 systemd[1781]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 23:43:20.581571 systemd[1781]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 23:43:20.581677 systemd[1781]: Reached target sockets.target - Sockets. Jan 14 23:43:20.583329 systemd[1781]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 23:43:20.583396 systemd[1781]: Reached target basic.target - Basic System. Jan 14 23:43:20.583444 systemd[1781]: Reached target default.target - Main User Target. Jan 14 23:43:20.583468 systemd[1781]: Startup finished in 166ms. Jan 14 23:43:20.584129 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 23:43:20.589157 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 23:43:20.906385 systemd[1]: Started sshd@1-49.13.216.16:22-68.220.241.50:44376.service - OpenSSH per-connection server daemon (68.220.241.50:44376). 
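The sshd entries above and below share one pattern: "Accepted publickey for core from <address> port <port> ssh2: RSA <fingerprint>", followed by a PAM session open. A short sketch that extracts user, address, and port from such lines; the sample is copied from the log (fingerprint shortened), and the regular expression is a best-effort assumption about the format, not OpenSSH's own definition.

import re

ACCEPT_RE = re.compile(
    r"Accepted publickey for (?P<user>\S+) from (?P<addr>\S+) port (?P<port>\d+)"
)

sample = ("sshd[1776]: Accepted publickey for core from 68.220.241.50 "
          "port 44374 ssh2: RSA SHA256:NQ8mNyV6...")

match = ACCEPT_RE.search(sample)
if match:
    print(match.group("user"), match.group("addr"), match.group("port"))
    # core 68.220.241.50 44374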
Jan 14 23:43:21.156650 systemd-timesyncd[1472]: Contacted time server 144.76.76.107:123 (2.flatcar.pool.ntp.org). Jan 14 23:43:21.156767 systemd-timesyncd[1472]: Initial clock synchronization to Wed 2026-01-14 23:43:20.770943 UTC. Jan 14 23:43:21.458647 sshd[1794]: Accepted publickey for core from 68.220.241.50 port 44376 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:43:21.459877 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:43:21.465929 systemd-logind[1587]: New session 2 of user core. Jan 14 23:43:21.479146 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 14 23:43:21.758008 sshd[1797]: Connection closed by 68.220.241.50 port 44376 Jan 14 23:43:21.760847 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Jan 14 23:43:21.766444 systemd[1]: sshd@1-49.13.216.16:22-68.220.241.50:44376.service: Deactivated successfully. Jan 14 23:43:21.769459 systemd[1]: session-2.scope: Deactivated successfully. Jan 14 23:43:21.770856 systemd-logind[1587]: Session 2 logged out. Waiting for processes to exit. Jan 14 23:43:21.773178 systemd-logind[1587]: Removed session 2. Jan 14 23:43:21.878246 systemd[1]: Started sshd@2-49.13.216.16:22-68.220.241.50:44388.service - OpenSSH per-connection server daemon (68.220.241.50:44388). Jan 14 23:43:22.426915 sshd[1803]: Accepted publickey for core from 68.220.241.50 port 44388 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:43:22.428914 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:43:22.436315 systemd-logind[1587]: New session 3 of user core. Jan 14 23:43:22.449951 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 23:43:22.719319 sshd[1806]: Connection closed by 68.220.241.50 port 44388 Jan 14 23:43:22.720389 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Jan 14 23:43:22.727634 systemd[1]: sshd@2-49.13.216.16:22-68.220.241.50:44388.service: Deactivated successfully. Jan 14 23:43:22.731167 systemd[1]: session-3.scope: Deactivated successfully. Jan 14 23:43:22.732458 systemd-logind[1587]: Session 3 logged out. Waiting for processes to exit. Jan 14 23:43:22.733947 systemd-logind[1587]: Removed session 3. Jan 14 23:43:22.825286 systemd[1]: Started sshd@3-49.13.216.16:22-68.220.241.50:44394.service - OpenSSH per-connection server daemon (68.220.241.50:44394). Jan 14 23:43:23.243320 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 23:43:23.246182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:43:23.347284 sshd[1812]: Accepted publickey for core from 68.220.241.50 port 44394 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:43:23.350289 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:43:23.361695 systemd-logind[1587]: New session 4 of user core. Jan 14 23:43:23.367943 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 23:43:23.416219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 23:43:23.427176 (kubelet)[1824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:43:23.475500 kubelet[1824]: E0114 23:43:23.475454 1824 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:43:23.478533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:43:23.478749 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 23:43:23.479423 systemd[1]: kubelet.service: Consumed 173ms CPU time, 105.2M memory peak. Jan 14 23:43:23.636480 sshd[1818]: Connection closed by 68.220.241.50 port 44394 Jan 14 23:43:23.637148 sshd-session[1812]: pam_unix(sshd:session): session closed for user core Jan 14 23:43:23.644501 systemd[1]: sshd@3-49.13.216.16:22-68.220.241.50:44394.service: Deactivated successfully. Jan 14 23:43:23.648045 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 23:43:23.649108 systemd-logind[1587]: Session 4 logged out. Waiting for processes to exit. Jan 14 23:43:23.650987 systemd-logind[1587]: Removed session 4. Jan 14 23:43:23.749447 systemd[1]: Started sshd@4-49.13.216.16:22-68.220.241.50:37690.service - OpenSSH per-connection server daemon (68.220.241.50:37690). Jan 14 23:43:24.264314 sshd[1836]: Accepted publickey for core from 68.220.241.50 port 37690 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:43:24.266706 sshd-session[1836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:43:24.273050 systemd-logind[1587]: New session 5 of user core. Jan 14 23:43:24.279946 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 23:43:24.462635 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 23:43:24.462914 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 23:43:24.479043 sudo[1840]: pam_unix(sudo:session): session closed for user root Jan 14 23:43:24.573610 sshd[1839]: Connection closed by 68.220.241.50 port 37690 Jan 14 23:43:24.572916 sshd-session[1836]: pam_unix(sshd:session): session closed for user core Jan 14 23:43:24.580059 systemd[1]: sshd@4-49.13.216.16:22-68.220.241.50:37690.service: Deactivated successfully. Jan 14 23:43:24.584022 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 23:43:24.585747 systemd-logind[1587]: Session 5 logged out. Waiting for processes to exit. Jan 14 23:43:24.586792 systemd-logind[1587]: Removed session 5. Jan 14 23:43:24.691269 systemd[1]: Started sshd@5-49.13.216.16:22-68.220.241.50:37700.service - OpenSSH per-connection server daemon (68.220.241.50:37700). Jan 14 23:43:25.220677 sshd[1846]: Accepted publickey for core from 68.220.241.50 port 37700 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:43:25.222345 sshd-session[1846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:43:25.228646 systemd-logind[1587]: New session 6 of user core. Jan 14 23:43:25.241969 systemd[1]: Started session-6.scope - Session 6 of User core. 
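The sudo records above and below use the classic layout "user : PWD=... ; USER=... ; COMMAND=...". A small parsing sketch; the sample record is quoted from the log, and the splitting rules are an assumption about that layout rather than sudo's grammar.

def parse_sudo_record(record: str) -> dict:
    """Split a sudo log record of the form 'user : KEY=value ; KEY=value ; ...'."""
    user, _, rest = record.partition(" : ")
    fields = {"user": user.strip()}
    for part in rest.split(" ; "):
        key, _, value = part.partition("=")
        fields[key.strip()] = value.strip()
    return fields

sample = "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"
print(parse_sudo_record(sample))
# {'user': 'core', 'PWD': '/home/core', 'USER': 'root', 'COMMAND': '/usr/sbin/setenforce 1'}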
Jan 14 23:43:25.413677 sudo[1851]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 23:43:25.413939 sudo[1851]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 23:43:25.420605 sudo[1851]: pam_unix(sudo:session): session closed for user root Jan 14 23:43:25.429794 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 23:43:25.430078 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 23:43:25.445464 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 23:43:25.488000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 23:43:25.489795 augenrules[1873]: No rules Jan 14 23:43:25.491136 kernel: kauditd_printk_skb: 179 callbacks suppressed Jan 14 23:43:25.491224 kernel: audit: type=1305 audit(1768434205.488:226): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 23:43:25.488000 audit[1873]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffeb2323e0 a2=420 a3=0 items=0 ppid=1854 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:25.492760 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 23:43:25.493645 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 23:43:25.494607 kernel: audit: type=1300 audit(1768434205.488:226): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffeb2323e0 a2=420 a3=0 items=0 ppid=1854 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:25.496826 kernel: audit: type=1327 audit(1768434205.488:226): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 23:43:25.496891 kernel: audit: type=1130 audit(1768434205.491:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.488000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 23:43:25.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.496685 sudo[1850]: pam_unix(sudo:session): session closed for user root Jan 14 23:43:25.498424 kernel: audit: type=1131 audit(1768434205.491:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:43:25.499804 kernel: audit: type=1106 audit(1768434205.494:229): pid=1850 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.494000 audit[1850]: USER_END pid=1850 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.501387 kernel: audit: type=1104 audit(1768434205.495:230): pid=1850 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.495000 audit[1850]: CRED_DISP pid=1850 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.591614 sshd[1849]: Connection closed by 68.220.241.50 port 37700 Jan 14 23:43:25.590346 sshd-session[1846]: pam_unix(sshd:session): session closed for user core Jan 14 23:43:25.590000 audit[1846]: USER_END pid=1846 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:43:25.591000 audit[1846]: CRED_DISP pid=1846 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:43:25.602326 kernel: audit: type=1106 audit(1768434205.590:231): pid=1846 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:43:25.602446 kernel: audit: type=1104 audit(1768434205.591:232): pid=1846 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:43:25.598033 systemd[1]: sshd@5-49.13.216.16:22-68.220.241.50:37700.service: Deactivated successfully. Jan 14 23:43:25.600378 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 23:43:25.601761 systemd-logind[1587]: Session 6 logged out. Waiting for processes to exit. Jan 14 23:43:25.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-49.13.216.16:22-68.220.241.50:37700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.605063 kernel: audit: type=1131 audit(1768434205.597:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-49.13.216.16:22-68.220.241.50:37700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.604850 systemd-logind[1587]: Removed session 6. 
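The audit PROCTITLE records above and in the netfilter entries below carry the command line hex-encoded, with NUL bytes separating the arguments; the auditctl record, for example, decodes to "/sbin/auditctl -R /etc/audit/audit.rules". A short decoder sketch; the sample value is copied verbatim from the log.

def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE value: hex-encoded bytes, NUL-separated arguments."""
    raw = bytes.fromhex(hex_value)
    return " ".join(part.decode(errors="replace") for part in raw.split(b"\x00") if part)

# proctitle= value from the audit-rules entry above
sample = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
print(decode_proctitle(sample))  # /sbin/auditctl -R /etc/audit/audit.rules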
Jan 14 23:43:25.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-49.13.216.16:22-68.220.241.50:37704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:25.695305 systemd[1]: Started sshd@6-49.13.216.16:22-68.220.241.50:37704.service - OpenSSH per-connection server daemon (68.220.241.50:37704). Jan 14 23:43:26.207000 audit[1882]: USER_ACCT pid=1882 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:43:26.207833 sshd[1882]: Accepted publickey for core from 68.220.241.50 port 37704 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:43:26.208000 audit[1882]: CRED_ACQ pid=1882 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:43:26.208000 audit[1882]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc686020 a2=3 a3=0 items=0 ppid=1 pid=1882 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:26.208000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:43:26.209963 sshd-session[1882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:43:26.216388 systemd-logind[1587]: New session 7 of user core. Jan 14 23:43:26.226923 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 23:43:26.230000 audit[1882]: USER_START pid=1882 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:43:26.231000 audit[1885]: CRED_ACQ pid=1885 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:43:26.401000 audit[1886]: USER_ACCT pid=1886 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:43:26.402786 sudo[1886]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 23:43:26.402000 audit[1886]: CRED_REFR pid=1886 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:43:26.403478 sudo[1886]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 23:43:26.405000 audit[1886]: USER_START pid=1886 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 23:43:26.707881 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 23:43:26.729108 (dockerd)[1903]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 23:43:26.965647 dockerd[1903]: time="2026-01-14T23:43:26.965271293Z" level=info msg="Starting up" Jan 14 23:43:26.969839 dockerd[1903]: time="2026-01-14T23:43:26.969684117Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 23:43:26.983367 dockerd[1903]: time="2026-01-14T23:43:26.983303907Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 23:43:27.001373 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1841627819-merged.mount: Deactivated successfully. Jan 14 23:43:27.012338 systemd[1]: var-lib-docker-metacopy\x2dcheck4230821148-merged.mount: Deactivated successfully. Jan 14 23:43:27.023404 dockerd[1903]: time="2026-01-14T23:43:27.023289735Z" level=info msg="Loading containers: start." Jan 14 23:43:27.041637 kernel: Initializing XFRM netlink socket Jan 14 23:43:27.098000 audit[1953]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.098000 audit[1953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe3cdfa70 a2=0 a3=0 items=0 ppid=1903 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.098000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 23:43:27.100000 audit[1955]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.100000 audit[1955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff1b4b2e0 a2=0 a3=0 items=0 ppid=1903 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.100000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 23:43:27.102000 audit[1957]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.102000 audit[1957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa8a2aa0 a2=0 a3=0 items=0 ppid=1903 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.102000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 23:43:27.104000 audit[1959]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.104000 audit[1959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff08d9790 a2=0 a3=0 items=0 ppid=1903 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.104000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 23:43:27.107000 audit[1961]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.107000 audit[1961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffca5df8c0 a2=0 a3=0 items=0 ppid=1903 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.107000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 23:43:27.109000 audit[1963]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.109000 audit[1963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffce75610 a2=0 a3=0 items=0 ppid=1903 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.109000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 23:43:27.111000 audit[1965]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.111000 audit[1965]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd4f133e0 a2=0 a3=0 items=0 ppid=1903 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.111000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 23:43:27.113000 audit[1967]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.113000 audit[1967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffc4786190 a2=0 a3=0 items=0 ppid=1903 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.113000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 23:43:27.144000 audit[1970]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.144000 audit[1970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffcf788190 a2=0 a3=0 items=0 ppid=1903 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.144000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 23:43:27.147000 audit[1972]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.147000 audit[1972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd81df840 a2=0 a3=0 items=0 ppid=1903 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.147000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 23:43:27.150000 audit[1974]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.150000 audit[1974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffff50eceb0 a2=0 a3=0 items=0 ppid=1903 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.150000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 23:43:27.152000 audit[1976]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.152000 audit[1976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffde03cbb0 a2=0 a3=0 items=0 ppid=1903 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.152000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 23:43:27.154000 audit[1978]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.154000 audit[1978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffec12f880 a2=0 a3=0 items=0 ppid=1903 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.154000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 23:43:27.193000 audit[2008]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:27.193000 audit[2008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff6452360 a2=0 a3=0 items=0 ppid=1903 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.193000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 23:43:27.195000 audit[2010]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:27.195000 audit[2010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff85ddd10 a2=0 a3=0 items=0 ppid=1903 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.195000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 23:43:27.197000 audit[2012]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:27.197000 audit[2012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc8d6e6c0 a2=0 a3=0 items=0 ppid=1903 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.197000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 23:43:27.198000 audit[2014]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:27.198000 audit[2014]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc74726e0 a2=0 a3=0 items=0 ppid=1903 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.198000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 23:43:27.200000 audit[2016]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:27.200000 audit[2016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffe6d6c00 a2=0 a3=0 items=0 ppid=1903 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.200000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 23:43:27.203000 audit[2018]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:27.203000 audit[2018]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc8441370 a2=0 a3=0 items=0 ppid=1903 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.203000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 23:43:27.205000 audit[2020]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2020 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:27.205000 audit[2020]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe5b7c730 a2=0 a3=0 items=0 ppid=1903 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.205000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 23:43:27.207000 audit[2022]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:27.207000 audit[2022]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff8772dd0 a2=0 a3=0 items=0 ppid=1903 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.207000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 23:43:27.210000 audit[2024]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:27.210000 audit[2024]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffcfa0bf60 a2=0 a3=0 items=0 ppid=1903 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.210000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 23:43:27.212000 audit[2026]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:27.212000 audit[2026]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe2d9d310 a2=0 a3=0 items=0 ppid=1903 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.212000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 23:43:27.216000 audit[2028]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:27.216000 audit[2028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffe669eb40 a2=0 a3=0 items=0 ppid=1903 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.216000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 23:43:27.219000 audit[2030]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2030 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:27.219000 audit[2030]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc15c8b90 a2=0 a3=0 items=0 ppid=1903 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.219000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 23:43:27.222000 audit[2032]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2032 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:27.222000 audit[2032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffcddd70a0 a2=0 a3=0 items=0 ppid=1903 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.222000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 23:43:27.229000 audit[2037]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.229000 audit[2037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd15fdba0 a2=0 a3=0 items=0 ppid=1903 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.229000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 23:43:27.233000 audit[2039]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.233000 audit[2039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc5b57ff0 a2=0 a3=0 items=0 ppid=1903 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.233000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 23:43:27.235000 audit[2041]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.235000 audit[2041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffdb0c3cc0 a2=0 a3=0 items=0 ppid=1903 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.235000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 23:43:27.237000 audit[2043]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2043 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:27.237000 audit[2043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc0fcc200 a2=0 a3=0 items=0 ppid=1903 pid=2043 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.237000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 23:43:27.239000 audit[2045]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2045 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:27.239000 audit[2045]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffcd57a170 a2=0 a3=0 items=0 ppid=1903 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.239000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 23:43:27.241000 audit[2047]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2047 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:27.241000 audit[2047]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe83a2a60 a2=0 a3=0 items=0 ppid=1903 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.241000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 23:43:27.268000 audit[2051]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.268000 audit[2051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffd6dbb4d0 a2=0 a3=0 items=0 ppid=1903 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.268000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 23:43:27.270000 audit[2053]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.270000 audit[2053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffff66b1db0 a2=0 a3=0 items=0 ppid=1903 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.270000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 23:43:27.277000 audit[2061]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.277000 audit[2061]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffddab4180 a2=0 a3=0 items=0 ppid=1903 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.277000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 23:43:27.286000 audit[2067]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.286000 audit[2067]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffe4ae88d0 a2=0 a3=0 items=0 ppid=1903 pid=2067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.286000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 23:43:27.289000 audit[2069]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2069 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.289000 audit[2069]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffe13a8e10 a2=0 a3=0 items=0 ppid=1903 pid=2069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.289000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 23:43:27.291000 audit[2071]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.291000 audit[2071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffcb219900 a2=0 a3=0 items=0 ppid=1903 pid=2071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.291000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 23:43:27.294000 audit[2073]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.294000 audit[2073]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffcd24d060 a2=0 a3=0 items=0 ppid=1903 pid=2073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.294000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 23:43:27.296000 audit[2075]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:27.296000 audit[2075]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffdef0120 a2=0 a3=0 items=0 ppid=1903 pid=2075 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:27.296000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 23:43:27.298259 systemd-networkd[1498]: docker0: Link UP Jan 14 23:43:27.302378 dockerd[1903]: time="2026-01-14T23:43:27.302329914Z" level=info msg="Loading containers: done." Jan 14 23:43:27.325651 dockerd[1903]: time="2026-01-14T23:43:27.325538077Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 23:43:27.325930 dockerd[1903]: time="2026-01-14T23:43:27.325689793Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 23:43:27.325990 dockerd[1903]: time="2026-01-14T23:43:27.325956670Z" level=info msg="Initializing buildkit" Jan 14 23:43:27.354570 dockerd[1903]: time="2026-01-14T23:43:27.354522514Z" level=info msg="Completed buildkit initialization" Jan 14 23:43:27.365277 dockerd[1903]: time="2026-01-14T23:43:27.365025380Z" level=info msg="Daemon has completed initialization" Jan 14 23:43:27.365992 dockerd[1903]: time="2026-01-14T23:43:27.365547587Z" level=info msg="API listen on /run/docker.sock" Jan 14 23:43:27.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:27.367537 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 23:43:27.999044 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck205294273-merged.mount: Deactivated successfully. Jan 14 23:43:28.368432 containerd[1607]: time="2026-01-14T23:43:28.368374528Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 14 23:43:29.022646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3060158300.mount: Deactivated successfully. 
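The PROCTITLE fields in the audit records above are hex-encoded command lines: auditd switches to hex whenever the value contains the NUL bytes that separate argv elements, which is why every xtables-nft-multi invocation shows up as a long hex string (the SYSCALL records alongside them, arch=c00000b7 syscall=211, are the sendmsg netlink calls on arm64 that push the resulting ruleset into nftables). A minimal Python sketch for turning those strings back into readable commands; the sample value is copied from one of the records above, everything else is illustrative:

def decode_proctitle(hex_value: str) -> str:
    # auditd hex-encodes proctitle because argv elements are separated by
    # NUL bytes; split on NUL and rejoin with spaces for readability.
    raw = bytes.fromhex(hex_value)
    return " ".join(part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part)

if __name__ == "__main__":
    # Copied from one of the audit records above.
    sample = ("2F7573722F62696E2F69707461626C6573002D2D77616974002D7400"
              "66696C746572002D4E00444F434B45522D425249444745")
    print(decode_proctitle(sample))

Running it on the sample prints /usr/bin/iptables --wait -t filter -N DOCKER-BRIDGE, i.e. Docker creating its filter chains as the daemon starts.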
Jan 14 23:43:30.312015 containerd[1607]: time="2026-01-14T23:43:30.311898792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:30.313866 containerd[1607]: time="2026-01-14T23:43:30.313807067Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=24845792" Jan 14 23:43:30.315977 containerd[1607]: time="2026-01-14T23:43:30.315923046Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:30.319184 containerd[1607]: time="2026-01-14T23:43:30.319131906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:30.320614 containerd[1607]: time="2026-01-14T23:43:30.320229902Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.951810066s" Jan 14 23:43:30.320614 containerd[1607]: time="2026-01-14T23:43:30.320275298Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 14 23:43:30.321183 containerd[1607]: time="2026-01-14T23:43:30.321158470Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 14 23:43:31.747427 containerd[1607]: time="2026-01-14T23:43:31.746596311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:31.748820 containerd[1607]: time="2026-01-14T23:43:31.748793927Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22613932" Jan 14 23:43:31.749662 containerd[1607]: time="2026-01-14T23:43:31.749637961Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:31.753476 containerd[1607]: time="2026-01-14T23:43:31.753444014Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:31.754379 containerd[1607]: time="2026-01-14T23:43:31.754341384Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.43238725s" Jan 14 23:43:31.754469 containerd[1607]: time="2026-01-14T23:43:31.754455703Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 14 
23:43:31.755103 containerd[1607]: time="2026-01-14T23:43:31.755065855Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 14 23:43:33.105611 containerd[1607]: time="2026-01-14T23:43:33.105541875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:33.106969 containerd[1607]: time="2026-01-14T23:43:33.106710844Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17608611" Jan 14 23:43:33.107843 containerd[1607]: time="2026-01-14T23:43:33.107807228Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:33.111045 containerd[1607]: time="2026-01-14T23:43:33.110995430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:33.112413 containerd[1607]: time="2026-01-14T23:43:33.112367482Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.357249593s" Jan 14 23:43:33.112481 containerd[1607]: time="2026-01-14T23:43:33.112423178Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 14 23:43:33.113006 containerd[1607]: time="2026-01-14T23:43:33.112968185Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 14 23:43:33.502797 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 23:43:33.506216 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:43:33.703130 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:43:33.706513 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 14 23:43:33.706640 kernel: audit: type=1130 audit(1768434213.702:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:33.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:33.718104 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:43:33.769970 kubelet[2189]: E0114 23:43:33.769814 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:43:33.773609 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:43:33.773946 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 14 23:43:33.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:43:33.774997 systemd[1]: kubelet.service: Consumed 170ms CPU time, 105.1M memory peak. Jan 14 23:43:33.778610 kernel: audit: type=1131 audit(1768434213.774:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:43:34.082991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1479474842.mount: Deactivated successfully. Jan 14 23:43:34.612020 containerd[1607]: time="2026-01-14T23:43:34.611964331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:34.613331 containerd[1607]: time="2026-01-14T23:43:34.613226166Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=17713718" Jan 14 23:43:34.613658 containerd[1607]: time="2026-01-14T23:43:34.613627196Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:34.615924 containerd[1607]: time="2026-01-14T23:43:34.615855637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:34.617024 containerd[1607]: time="2026-01-14T23:43:34.616960054Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.503960609s" Jan 14 23:43:34.617024 containerd[1607]: time="2026-01-14T23:43:34.616996814Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 14 23:43:34.617716 containerd[1607]: time="2026-01-14T23:43:34.617655162Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 14 23:43:35.256705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1271165100.mount: Deactivated successfully. Jan 14 23:43:35.306881 update_engine[1588]: I20260114 23:43:35.306774 1588 update_attempter.cc:509] Updating boot flags... 
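The kubelet failures interleaved with the image pulls above are all the same condition: /var/lib/kubelet/config.yaml does not exist yet, so every scheduled restart exits immediately with status 1. That file is normally written when the node is initialized or joined with kubeadm, so the failures are expected this early in first boot. A purely illustrative pre-flight sketch; the path is taken from the log, and nothing on the node actually runs this:

from pathlib import Path

# Path copied from the kubelet error above; this check is an illustrative
# sketch, not part of the actual boot sequence.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

def kubelet_ready_to_start() -> bool:
    """Mirror the failing check: kubelet refuses to start without its config file."""
    if not KUBELET_CONFIG.exists():
        print(f"kubelet config not present yet: {KUBELET_CONFIG}")
        return False
    return True

if __name__ == "__main__":
    kubelet_ready_to_start()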
Jan 14 23:43:35.973207 containerd[1607]: time="2026-01-14T23:43:35.973139206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:35.974639 containerd[1607]: time="2026-01-14T23:43:35.974561069Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15956282" Jan 14 23:43:35.976626 containerd[1607]: time="2026-01-14T23:43:35.975566062Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:35.979986 containerd[1607]: time="2026-01-14T23:43:35.979947022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:35.981469 containerd[1607]: time="2026-01-14T23:43:35.981416874Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.36373397s" Jan 14 23:43:35.981469 containerd[1607]: time="2026-01-14T23:43:35.981463990Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 14 23:43:35.982926 containerd[1607]: time="2026-01-14T23:43:35.982873558Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 23:43:36.528662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1946220582.mount: Deactivated successfully. 
Jan 14 23:43:36.535695 containerd[1607]: time="2026-01-14T23:43:36.535632148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 23:43:36.537279 containerd[1607]: time="2026-01-14T23:43:36.537196915Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 23:43:36.539270 containerd[1607]: time="2026-01-14T23:43:36.539202621Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 23:43:36.541468 containerd[1607]: time="2026-01-14T23:43:36.541379207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 23:43:36.542937 containerd[1607]: time="2026-01-14T23:43:36.542634093Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 559.6973ms" Jan 14 23:43:36.542937 containerd[1607]: time="2026-01-14T23:43:36.542766423Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 14 23:43:36.543363 containerd[1607]: time="2026-01-14T23:43:36.543335443Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 14 23:43:37.130884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2028761747.mount: Deactivated successfully. 
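Each "Pulled image" record above pairs the image size containerd reports with the wall-clock pull time, so effective throughput is one division away. A quick sketch using two figures copied from the records above; the logged size is the image size containerd reports rather than the compressed bytes actually transferred, so treat the result as a rough estimate:

# (image size in bytes, pull duration in seconds), copied from the
# containerd records above; throughput is only a rough estimate because
# the logged size is not the compressed transfer size.
pulls = {
    "registry.k8s.io/kube-apiserver:v1.32.11": (26_438_581, 1.951810066),
    "registry.k8s.io/pause:3.10": (267_933, 0.5596973),
}

for image, (size_bytes, seconds) in pulls.items():
    rate_mb_s = size_bytes / seconds / 1_000_000
    print(f"{image}: {size_bytes} bytes in {seconds:.2f}s ~ {rate_mb_s:.1f} MB/s")

The apiserver image works out to roughly 13-14 MB/s, while the tiny pause image is dominated by per-request latency, which is why its apparent rate is far lower.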
Jan 14 23:43:39.293650 containerd[1607]: time="2026-01-14T23:43:39.293325322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:39.296012 containerd[1607]: time="2026-01-14T23:43:39.295954449Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=66060366" Jan 14 23:43:39.296977 containerd[1607]: time="2026-01-14T23:43:39.296927621Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:39.300643 containerd[1607]: time="2026-01-14T23:43:39.300528607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:43:39.302205 containerd[1607]: time="2026-01-14T23:43:39.302164225Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.758787339s" Jan 14 23:43:39.302282 containerd[1607]: time="2026-01-14T23:43:39.302212543Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 14 23:43:43.786282 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 14 23:43:43.790937 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:43:43.937845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:43:43.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:43.943603 kernel: audit: type=1130 audit(1768434223.937:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:43.946055 (kubelet)[2362]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:43:43.989864 kubelet[2362]: E0114 23:43:43.989813 2362 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:43:43.991977 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:43:43.992106 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 23:43:43.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:43:43.994285 systemd[1]: kubelet.service: Consumed 153ms CPU time, 106.6M memory peak. 
Jan 14 23:43:43.994616 kernel: audit: type=1131 audit(1768434223.992:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:43:45.526245 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:43:45.526423 systemd[1]: kubelet.service: Consumed 153ms CPU time, 106.6M memory peak. Jan 14 23:43:45.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:45.532638 kernel: audit: type=1130 audit(1768434225.525:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:45.532712 kernel: audit: type=1131 audit(1768434225.525:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:45.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:45.530853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:43:45.561989 systemd[1]: Reload requested from client PID 2377 ('systemctl') (unit session-7.scope)... Jan 14 23:43:45.562054 systemd[1]: Reloading... Jan 14 23:43:45.675612 zram_generator::config[2427]: No configuration found. Jan 14 23:43:45.887105 systemd[1]: Reloading finished in 324 ms. 
Jan 14 23:43:45.926620 kernel: audit: type=1334 audit(1768434225.921:290): prog-id=61 op=LOAD Jan 14 23:43:45.926719 kernel: audit: type=1334 audit(1768434225.921:291): prog-id=57 op=UNLOAD Jan 14 23:43:45.926742 kernel: audit: type=1334 audit(1768434225.924:292): prog-id=62 op=LOAD Jan 14 23:43:45.921000 audit: BPF prog-id=61 op=LOAD Jan 14 23:43:45.921000 audit: BPF prog-id=57 op=UNLOAD Jan 14 23:43:45.924000 audit: BPF prog-id=62 op=LOAD Jan 14 23:43:45.924000 audit: BPF prog-id=58 op=UNLOAD Jan 14 23:43:45.924000 audit: BPF prog-id=63 op=LOAD Jan 14 23:43:45.927956 kernel: audit: type=1334 audit(1768434225.924:293): prog-id=58 op=UNLOAD Jan 14 23:43:45.927994 kernel: audit: type=1334 audit(1768434225.924:294): prog-id=63 op=LOAD Jan 14 23:43:45.928013 kernel: audit: type=1334 audit(1768434225.924:295): prog-id=64 op=LOAD Jan 14 23:43:45.924000 audit: BPF prog-id=64 op=LOAD Jan 14 23:43:45.924000 audit: BPF prog-id=59 op=UNLOAD Jan 14 23:43:45.924000 audit: BPF prog-id=60 op=UNLOAD Jan 14 23:43:45.925000 audit: BPF prog-id=65 op=LOAD Jan 14 23:43:45.925000 audit: BPF prog-id=56 op=UNLOAD Jan 14 23:43:45.929000 audit: BPF prog-id=66 op=LOAD Jan 14 23:43:45.929000 audit: BPF prog-id=50 op=UNLOAD Jan 14 23:43:45.930000 audit: BPF prog-id=67 op=LOAD Jan 14 23:43:45.930000 audit: BPF prog-id=68 op=LOAD Jan 14 23:43:45.930000 audit: BPF prog-id=54 op=UNLOAD Jan 14 23:43:45.930000 audit: BPF prog-id=55 op=UNLOAD Jan 14 23:43:45.932000 audit: BPF prog-id=69 op=LOAD Jan 14 23:43:45.932000 audit: BPF prog-id=44 op=UNLOAD Jan 14 23:43:45.932000 audit: BPF prog-id=70 op=LOAD Jan 14 23:43:45.933000 audit: BPF prog-id=71 op=LOAD Jan 14 23:43:45.933000 audit: BPF prog-id=45 op=UNLOAD Jan 14 23:43:45.933000 audit: BPF prog-id=46 op=UNLOAD Jan 14 23:43:45.934000 audit: BPF prog-id=72 op=LOAD Jan 14 23:43:45.934000 audit: BPF prog-id=47 op=UNLOAD Jan 14 23:43:45.934000 audit: BPF prog-id=73 op=LOAD Jan 14 23:43:45.934000 audit: BPF prog-id=74 op=LOAD Jan 14 23:43:45.934000 audit: BPF prog-id=48 op=UNLOAD Jan 14 23:43:45.934000 audit: BPF prog-id=49 op=UNLOAD Jan 14 23:43:45.935000 audit: BPF prog-id=75 op=LOAD Jan 14 23:43:45.942000 audit: BPF prog-id=51 op=UNLOAD Jan 14 23:43:45.942000 audit: BPF prog-id=76 op=LOAD Jan 14 23:43:45.942000 audit: BPF prog-id=77 op=LOAD Jan 14 23:43:45.942000 audit: BPF prog-id=52 op=UNLOAD Jan 14 23:43:45.942000 audit: BPF prog-id=53 op=UNLOAD Jan 14 23:43:45.944000 audit: BPF prog-id=78 op=LOAD Jan 14 23:43:45.944000 audit: BPF prog-id=41 op=UNLOAD Jan 14 23:43:45.944000 audit: BPF prog-id=79 op=LOAD Jan 14 23:43:45.944000 audit: BPF prog-id=80 op=LOAD Jan 14 23:43:45.944000 audit: BPF prog-id=42 op=UNLOAD Jan 14 23:43:45.944000 audit: BPF prog-id=43 op=UNLOAD Jan 14 23:43:45.958825 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 23:43:45.958916 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 23:43:45.960670 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:43:45.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:43:45.960738 systemd[1]: kubelet.service: Consumed 108ms CPU time, 95.3M memory peak. Jan 14 23:43:45.962541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:43:46.119109 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 23:43:46.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:46.129954 (kubelet)[2472]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 23:43:46.183494 kubelet[2472]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 23:43:46.183494 kubelet[2472]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 23:43:46.183494 kubelet[2472]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 23:43:46.183494 kubelet[2472]: I0114 23:43:46.182709 2472 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 23:43:46.845673 kubelet[2472]: I0114 23:43:46.845594 2472 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 23:43:46.845673 kubelet[2472]: I0114 23:43:46.845638 2472 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 23:43:46.846060 kubelet[2472]: I0114 23:43:46.846024 2472 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 23:43:46.877249 kubelet[2472]: E0114 23:43:46.877202 2472 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://49.13.216.16:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 49.13.216.16:6443: connect: connection refused" logger="UnhandledError" Jan 14 23:43:46.879340 kubelet[2472]: I0114 23:43:46.878967 2472 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 23:43:46.887043 kubelet[2472]: I0114 23:43:46.886980 2472 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 23:43:46.890703 kubelet[2472]: I0114 23:43:46.890656 2472 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 23:43:46.891859 kubelet[2472]: I0114 23:43:46.891793 2472 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 23:43:46.892130 kubelet[2472]: I0114 23:43:46.891851 2472 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515-1-0-n-abf6d467b1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 23:43:46.892389 kubelet[2472]: I0114 23:43:46.892197 2472 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 23:43:46.892389 kubelet[2472]: I0114 23:43:46.892208 2472 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 23:43:46.892454 kubelet[2472]: I0114 23:43:46.892409 2472 state_mem.go:36] "Initialized new in-memory state store" Jan 14 23:43:46.897691 kubelet[2472]: I0114 23:43:46.897343 2472 kubelet.go:446] "Attempting to sync node with API server" Jan 14 23:43:46.897691 kubelet[2472]: I0114 23:43:46.897377 2472 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 23:43:46.897691 kubelet[2472]: I0114 23:43:46.897407 2472 kubelet.go:352] "Adding apiserver pod source" Jan 14 23:43:46.897691 kubelet[2472]: I0114 23:43:46.897423 2472 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 23:43:46.901747 kubelet[2472]: I0114 23:43:46.901712 2472 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 23:43:46.902464 kubelet[2472]: I0114 23:43:46.902429 2472 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 23:43:46.902595 kubelet[2472]: W0114 23:43:46.902567 2472 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
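The nodeConfig blob above carries kubelet's default hard-eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, and inodesFree < 5% on both filesystems. A small sketch that resolves such thresholds into absolute numbers; the threshold strings come from the log, while the 40 GB capacity is a hypothetical value used only for illustration:

# Threshold strings as expressed in the nodeConfig record above; the
# capacity passed in below is hypothetical, for illustration only.
def absolute_bytes(threshold: str, capacity_bytes: int) -> int:
    """Resolve a '10%' or '100Mi' style eviction threshold against a capacity."""
    if threshold.endswith("%"):
        return int(capacity_bytes * float(threshold[:-1]) / 100)
    if threshold.endswith("Mi"):
        return int(threshold[:-2]) * 1024 * 1024
    raise ValueError(f"unhandled threshold format: {threshold}")

print(absolute_bytes("10%", 40 * 1000**3))  # nodefs.available on a 40 GB volume -> 4000000000
print(absolute_bytes("100Mi", 0))           # memory.available -> 104857600 bytes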
Jan 14 23:43:46.903506 kubelet[2472]: I0114 23:43:46.903457 2472 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 23:43:46.903506 kubelet[2472]: I0114 23:43:46.903501 2472 server.go:1287] "Started kubelet" Jan 14 23:43:46.903738 kubelet[2472]: W0114 23:43:46.903685 2472 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://49.13.216.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 49.13.216.16:6443: connect: connection refused Jan 14 23:43:46.903788 kubelet[2472]: E0114 23:43:46.903751 2472 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://49.13.216.16:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 49.13.216.16:6443: connect: connection refused" logger="UnhandledError" Jan 14 23:43:46.903853 kubelet[2472]: W0114 23:43:46.903824 2472 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://49.13.216.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515-1-0-n-abf6d467b1&limit=500&resourceVersion=0": dial tcp 49.13.216.16:6443: connect: connection refused Jan 14 23:43:46.903880 kubelet[2472]: E0114 23:43:46.903857 2472 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://49.13.216.16:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515-1-0-n-abf6d467b1&limit=500&resourceVersion=0\": dial tcp 49.13.216.16:6443: connect: connection refused" logger="UnhandledError" Jan 14 23:43:46.906739 kubelet[2472]: I0114 23:43:46.906700 2472 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 23:43:46.909000 audit[2483]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:46.909000 audit[2483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffde2d0be0 a2=0 a3=0 items=0 ppid=2472 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.909000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 23:43:46.911000 audit[2484]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:46.911000 audit[2484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff56d0480 a2=0 a3=0 items=0 ppid=2472 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.912690 kubelet[2472]: I0114 23:43:46.912449 2472 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 23:43:46.911000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 23:43:46.913625 kubelet[2472]: I0114 23:43:46.913604 2472 server.go:479] "Adding debug handlers to kubelet server" Jan 14 23:43:46.914396 kubelet[2472]: I0114 23:43:46.914354 2472 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 23:43:46.914717 
kubelet[2472]: E0114 23:43:46.914689 2472 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515-1-0-n-abf6d467b1\" not found" Jan 14 23:43:46.915000 audit[2486]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:46.915000 audit[2486]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff64c4c10 a2=0 a3=0 items=0 ppid=2472 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.915000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 23:43:46.916340 kubelet[2472]: I0114 23:43:46.916188 2472 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 23:43:46.916644 kubelet[2472]: I0114 23:43:46.916617 2472 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 23:43:46.917702 kubelet[2472]: I0114 23:43:46.917683 2472 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 23:43:46.917829 kubelet[2472]: I0114 23:43:46.917817 2472 reconciler.go:26] "Reconciler: start to sync state" Jan 14 23:43:46.918000 audit[2488]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:46.918000 audit[2488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe0944480 a2=0 a3=0 items=0 ppid=2472 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.918000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 23:43:46.921434 kubelet[2472]: I0114 23:43:46.921362 2472 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 23:43:46.923332 kubelet[2472]: E0114 23:43:46.922401 2472 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://49.13.216.16:6443/api/v1/namespaces/default/events\": dial tcp 49.13.216.16:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515-1-0-n-abf6d467b1.188abd8f4f39763e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515-1-0-n-abf6d467b1,UID:ci-4515-1-0-n-abf6d467b1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-abf6d467b1,},FirstTimestamp:2026-01-14 23:43:46.90347987 +0000 UTC m=+0.765242417,LastTimestamp:2026-01-14 23:43:46.90347987 +0000 UTC m=+0.765242417,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-abf6d467b1,}" Jan 14 23:43:46.923332 kubelet[2472]: E0114 23:43:46.923285 2472 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://49.13.216.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-abf6d467b1?timeout=10s\": dial tcp 49.13.216.16:6443: connect: connection refused" interval="200ms" Jan 14 23:43:46.923513 kubelet[2472]: I0114 23:43:46.923493 2472 factory.go:221] Registration of the systemd container factory successfully Jan 14 23:43:46.924864 kubelet[2472]: I0114 23:43:46.923622 2472 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 23:43:46.927613 kubelet[2472]: W0114 23:43:46.926109 2472 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.216.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.216.16:6443: connect: connection refused Jan 14 23:43:46.927613 kubelet[2472]: E0114 23:43:46.926163 2472 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://49.13.216.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.216.16:6443: connect: connection refused" logger="UnhandledError" Jan 14 23:43:46.930627 kubelet[2472]: I0114 23:43:46.929789 2472 factory.go:221] Registration of the containerd container factory successfully Jan 14 23:43:46.931000 audit[2491]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:46.931000 audit[2491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff706a0d0 a2=0 a3=0 items=0 ppid=2472 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.931000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 23:43:46.933683 kubelet[2472]: I0114 23:43:46.933646 2472 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 23:43:46.933000 audit[2493]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:46.933000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd70c3bc0 a2=0 a3=0 items=0 ppid=2472 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.933000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 23:43:46.934844 kubelet[2472]: I0114 23:43:46.934826 2472 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 23:43:46.934949 kubelet[2472]: I0114 23:43:46.934936 2472 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 23:43:46.935022 kubelet[2472]: I0114 23:43:46.935013 2472 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 14 23:43:46.935069 kubelet[2472]: I0114 23:43:46.935062 2472 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 23:43:46.935155 kubelet[2472]: E0114 23:43:46.935139 2472 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 23:43:46.935000 audit[2494]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:46.935000 audit[2494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcb5a00e0 a2=0 a3=0 items=0 ppid=2472 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.935000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 23:43:46.936000 audit[2495]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:46.936000 audit[2495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeee719a0 a2=0 a3=0 items=0 ppid=2472 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.936000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 23:43:46.938000 audit[2496]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:43:46.938000 audit[2496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffac98790 a2=0 a3=0 items=0 ppid=2472 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.938000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 23:43:46.939000 audit[2497]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:46.939000 audit[2497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffeed82c70 a2=0 a3=0 items=0 ppid=2472 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.939000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 23:43:46.940000 audit[2498]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:46.940000 audit[2498]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd854a090 a2=0 a3=0 items=0 ppid=2472 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
23:43:46.940000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 23:43:46.941000 audit[2499]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:43:46.941000 audit[2499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe0eb0d20 a2=0 a3=0 items=0 ppid=2472 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:46.941000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 23:43:46.943777 kubelet[2472]: W0114 23:43:46.943718 2472 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://49.13.216.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 49.13.216.16:6443: connect: connection refused Jan 14 23:43:46.943856 kubelet[2472]: E0114 23:43:46.943790 2472 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://49.13.216.16:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 49.13.216.16:6443: connect: connection refused" logger="UnhandledError" Jan 14 23:43:46.953686 kubelet[2472]: E0114 23:43:46.953630 2472 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 23:43:46.966519 kubelet[2472]: I0114 23:43:46.966468 2472 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 23:43:46.966519 kubelet[2472]: I0114 23:43:46.966491 2472 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 23:43:46.966519 kubelet[2472]: I0114 23:43:46.966514 2472 state_mem.go:36] "Initialized new in-memory state store" Jan 14 23:43:46.968659 kubelet[2472]: I0114 23:43:46.968621 2472 policy_none.go:49] "None policy: Start" Jan 14 23:43:46.968659 kubelet[2472]: I0114 23:43:46.968651 2472 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 23:43:46.968659 kubelet[2472]: I0114 23:43:46.968664 2472 state_mem.go:35] "Initializing new in-memory state store" Jan 14 23:43:46.975554 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 23:43:46.992212 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 23:43:46.996668 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
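With the CPU manager on its "none" policy and the memory manager on "None", the kubelet only needs the standard QoS cgroup layout, which is what the "Created slice" lines record: kubepods.slice plus burstable and besteffort child slices, with one per-pod slice appearing underneath a few lines below (kubepods-burstable-pod<uid>.slice). A small sketch of that naming convention under the systemd cgroup driver; the helper is illustrative, not the kubelet's own code:

    # Slice naming as seen in this log (systemd cgroup driver): guaranteed pods sit
    # directly under kubepods.slice, burstable/besteffort pods under a QoS child slice,
    # and the pod slice name repeats the parent prefix. Dashes in a pod UID become
    # underscores; the static-pod UIDs in this log contain none.
    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        qos = qos_class.lower()
        parent = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
        return f"{parent}-pod{pod_uid.replace('-', '_')}.slice"

    print(pod_slice_name("Burstable", "0b06a2fb34ffa97b9dd40565c25d11ab"))
    # prints: kubepods-burstable-pod0b06a2fb34ffa97b9dd40565c25d11ab.slice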
Jan 14 23:43:47.009728 kubelet[2472]: I0114 23:43:47.009398 2472 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 23:43:47.009884 kubelet[2472]: I0114 23:43:47.009817 2472 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 23:43:47.009933 kubelet[2472]: I0114 23:43:47.009840 2472 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 23:43:47.013525 kubelet[2472]: I0114 23:43:47.013025 2472 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 23:43:47.014108 kubelet[2472]: E0114 23:43:47.014084 2472 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 23:43:47.014231 kubelet[2472]: E0114 23:43:47.014216 2472 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515-1-0-n-abf6d467b1\" not found" Jan 14 23:43:47.050080 systemd[1]: Created slice kubepods-burstable-pod0b06a2fb34ffa97b9dd40565c25d11ab.slice - libcontainer container kubepods-burstable-pod0b06a2fb34ffa97b9dd40565c25d11ab.slice. Jan 14 23:43:47.073869 kubelet[2472]: E0114 23:43:47.073802 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-abf6d467b1\" not found" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.078987 systemd[1]: Created slice kubepods-burstable-podfeb4cdd236a024cac8c8a6c3cf81e26a.slice - libcontainer container kubepods-burstable-podfeb4cdd236a024cac8c8a6c3cf81e26a.slice. Jan 14 23:43:47.083596 kubelet[2472]: E0114 23:43:47.083558 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-abf6d467b1\" not found" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.083916 systemd[1]: Created slice kubepods-burstable-pod812228c8ab7c5e39f6ebefdcfa8b1830.slice - libcontainer container kubepods-burstable-pod812228c8ab7c5e39f6ebefdcfa8b1830.slice. 
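At this point every call the kubelet makes to https://49.13.216.16:6443 fails with "connection refused", because the kube-apiserver it is about to launch as a static pod is not running yet; the lease controller's retry interval can be seen doubling through the log (interval="200ms" above, "400ms" and "800ms" below). A minimal probe of the same endpoint, assuming the standard /healthz path and skipping TLS verification purely for illustration; this is not the kubelet's own client code:

    # Wait for the API server that the static pods below will bring up.
    # URL and backoff values mirror what appears in this log.
    import ssl, time, urllib.request

    def wait_for_apiserver(url="https://49.13.216.16:6443/healthz", attempts=10):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE      # illustration only; real clients verify the cluster CA
        delay = 0.2                          # 200 ms, doubled on each failure like the lease retries
        for _ in range(attempts):
            try:
                with urllib.request.urlopen(url, timeout=2, context=ctx) as resp:
                    return resp.status == 200
            except OSError:
                time.sleep(delay)
                delay = min(delay * 2, 7.0)
        return False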
Jan 14 23:43:47.087033 kubelet[2472]: E0114 23:43:47.086991 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-abf6d467b1\" not found" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.112979 kubelet[2472]: I0114 23:43:47.112378 2472 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.113126 kubelet[2472]: E0114 23:43:47.113061 2472 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.216.16:6443/api/v1/nodes\": dial tcp 49.13.216.16:6443: connect: connection refused" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.124329 kubelet[2472]: E0114 23:43:47.124265 2472 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.216.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-abf6d467b1?timeout=10s\": dial tcp 49.13.216.16:6443: connect: connection refused" interval="400ms" Jan 14 23:43:47.219512 kubelet[2472]: I0114 23:43:47.219388 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/812228c8ab7c5e39f6ebefdcfa8b1830-k8s-certs\") pod \"kube-controller-manager-ci-4515-1-0-n-abf6d467b1\" (UID: \"812228c8ab7c5e39f6ebefdcfa8b1830\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.219512 kubelet[2472]: I0114 23:43:47.219493 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/812228c8ab7c5e39f6ebefdcfa8b1830-kubeconfig\") pod \"kube-controller-manager-ci-4515-1-0-n-abf6d467b1\" (UID: \"812228c8ab7c5e39f6ebefdcfa8b1830\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.220094 kubelet[2472]: I0114 23:43:47.219539 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/812228c8ab7c5e39f6ebefdcfa8b1830-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515-1-0-n-abf6d467b1\" (UID: \"812228c8ab7c5e39f6ebefdcfa8b1830\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.220094 kubelet[2472]: I0114 23:43:47.219609 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/feb4cdd236a024cac8c8a6c3cf81e26a-ca-certs\") pod \"kube-apiserver-ci-4515-1-0-n-abf6d467b1\" (UID: \"feb4cdd236a024cac8c8a6c3cf81e26a\") " pod="kube-system/kube-apiserver-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.220094 kubelet[2472]: I0114 23:43:47.219651 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/feb4cdd236a024cac8c8a6c3cf81e26a-k8s-certs\") pod \"kube-apiserver-ci-4515-1-0-n-abf6d467b1\" (UID: \"feb4cdd236a024cac8c8a6c3cf81e26a\") " pod="kube-system/kube-apiserver-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.220094 kubelet[2472]: I0114 23:43:47.219687 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/feb4cdd236a024cac8c8a6c3cf81e26a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515-1-0-n-abf6d467b1\" (UID: \"feb4cdd236a024cac8c8a6c3cf81e26a\") " 
pod="kube-system/kube-apiserver-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.220094 kubelet[2472]: I0114 23:43:47.219724 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/812228c8ab7c5e39f6ebefdcfa8b1830-flexvolume-dir\") pod \"kube-controller-manager-ci-4515-1-0-n-abf6d467b1\" (UID: \"812228c8ab7c5e39f6ebefdcfa8b1830\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.220294 kubelet[2472]: I0114 23:43:47.219760 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b06a2fb34ffa97b9dd40565c25d11ab-kubeconfig\") pod \"kube-scheduler-ci-4515-1-0-n-abf6d467b1\" (UID: \"0b06a2fb34ffa97b9dd40565c25d11ab\") " pod="kube-system/kube-scheduler-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.220294 kubelet[2472]: I0114 23:43:47.219821 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/812228c8ab7c5e39f6ebefdcfa8b1830-ca-certs\") pod \"kube-controller-manager-ci-4515-1-0-n-abf6d467b1\" (UID: \"812228c8ab7c5e39f6ebefdcfa8b1830\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.316735 kubelet[2472]: I0114 23:43:47.315997 2472 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.316735 kubelet[2472]: E0114 23:43:47.316527 2472 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.216.16:6443/api/v1/nodes\": dial tcp 49.13.216.16:6443: connect: connection refused" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.376825 containerd[1607]: time="2026-01-14T23:43:47.376446687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515-1-0-n-abf6d467b1,Uid:0b06a2fb34ffa97b9dd40565c25d11ab,Namespace:kube-system,Attempt:0,}" Jan 14 23:43:47.385929 containerd[1607]: time="2026-01-14T23:43:47.385667409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515-1-0-n-abf6d467b1,Uid:feb4cdd236a024cac8c8a6c3cf81e26a,Namespace:kube-system,Attempt:0,}" Jan 14 23:43:47.389237 containerd[1607]: time="2026-01-14T23:43:47.389200077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515-1-0-n-abf6d467b1,Uid:812228c8ab7c5e39f6ebefdcfa8b1830,Namespace:kube-system,Attempt:0,}" Jan 14 23:43:47.409643 containerd[1607]: time="2026-01-14T23:43:47.409485083Z" level=info msg="connecting to shim a2df0b93c21131996570650cf6faec09749b0b5c4c4e414321b117d66ea5c144" address="unix:///run/containerd/s/2cbf274969b8dad80fb81ac5da2aa9c379efe68e910e7224ef55cfe467873409" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:43:47.435423 containerd[1607]: time="2026-01-14T23:43:47.435336432Z" level=info msg="connecting to shim 524925a3d76661435bffba0e1ee8b88f373ba8384c5503822e51879dc8361c6c" address="unix:///run/containerd/s/8798a9e4cbcc03621928ef5296067b0ab949a73a54ffc1392dd51ea2f79da8cf" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:43:47.443615 containerd[1607]: time="2026-01-14T23:43:47.443191614Z" level=info msg="connecting to shim 68a346027e2efd09f6779aa94681d51500f86b92401e5a405c67f4e1754d7d04" address="unix:///run/containerd/s/1a6009986cf38bfc4108b968607fbf82fbdff7867d8fdb4643e4eeaa2f7d48cd" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:43:47.468932 systemd[1]: Started 
cri-containerd-a2df0b93c21131996570650cf6faec09749b0b5c4c4e414321b117d66ea5c144.scope - libcontainer container a2df0b93c21131996570650cf6faec09749b0b5c4c4e414321b117d66ea5c144. Jan 14 23:43:47.475540 systemd[1]: Started cri-containerd-524925a3d76661435bffba0e1ee8b88f373ba8384c5503822e51879dc8361c6c.scope - libcontainer container 524925a3d76661435bffba0e1ee8b88f373ba8384c5503822e51879dc8361c6c. Jan 14 23:43:47.490390 systemd[1]: Started cri-containerd-68a346027e2efd09f6779aa94681d51500f86b92401e5a405c67f4e1754d7d04.scope - libcontainer container 68a346027e2efd09f6779aa94681d51500f86b92401e5a405c67f4e1754d7d04. Jan 14 23:43:47.494000 audit: BPF prog-id=81 op=LOAD Jan 14 23:43:47.495000 audit: BPF prog-id=82 op=LOAD Jan 14 23:43:47.495000 audit[2534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=2512 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132646630623933633231313331393936353730363530636636666165 Jan 14 23:43:47.495000 audit: BPF prog-id=82 op=UNLOAD Jan 14 23:43:47.495000 audit[2534]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132646630623933633231313331393936353730363530636636666165 Jan 14 23:43:47.495000 audit: BPF prog-id=83 op=LOAD Jan 14 23:43:47.495000 audit[2534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=2512 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132646630623933633231313331393936353730363530636636666165 Jan 14 23:43:47.495000 audit: BPF prog-id=84 op=LOAD Jan 14 23:43:47.495000 audit[2534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=2512 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132646630623933633231313331393936353730363530636636666165 Jan 14 23:43:47.496000 audit: BPF prog-id=84 op=UNLOAD Jan 14 23:43:47.496000 audit[2534]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 
a3=0 items=0 ppid=2512 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132646630623933633231313331393936353730363530636636666165 Jan 14 23:43:47.496000 audit: BPF prog-id=83 op=UNLOAD Jan 14 23:43:47.496000 audit[2534]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132646630623933633231313331393936353730363530636636666165 Jan 14 23:43:47.496000 audit: BPF prog-id=85 op=LOAD Jan 14 23:43:47.496000 audit[2534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=2512 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6132646630623933633231313331393936353730363530636636666165 Jan 14 23:43:47.505000 audit: BPF prog-id=86 op=LOAD Jan 14 23:43:47.506000 audit: BPF prog-id=87 op=LOAD Jan 14 23:43:47.506000 audit[2572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2549 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638613334363032376532656664303966363737396161393436383164 Jan 14 23:43:47.506000 audit: BPF prog-id=87 op=UNLOAD Jan 14 23:43:47.506000 audit[2572]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2549 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.506000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638613334363032376532656664303966363737396161393436383164 Jan 14 23:43:47.508000 audit: BPF prog-id=88 op=LOAD Jan 14 23:43:47.508000 audit[2572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2549 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638613334363032376532656664303966363737396161393436383164 Jan 14 23:43:47.508000 audit: BPF prog-id=89 op=LOAD Jan 14 23:43:47.508000 audit[2572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2549 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638613334363032376532656664303966363737396161393436383164 Jan 14 23:43:47.508000 audit: BPF prog-id=89 op=UNLOAD Jan 14 23:43:47.508000 audit[2572]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2549 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638613334363032376532656664303966363737396161393436383164 Jan 14 23:43:47.508000 audit: BPF prog-id=88 op=UNLOAD Jan 14 23:43:47.508000 audit[2572]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2549 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638613334363032376532656664303966363737396161393436383164 Jan 14 23:43:47.508000 audit: BPF prog-id=90 op=LOAD Jan 14 23:43:47.508000 audit[2572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2549 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.508000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638613334363032376532656664303966363737396161393436383164 Jan 14 23:43:47.509000 audit: BPF prog-id=91 op=LOAD Jan 14 23:43:47.509000 audit: BPF prog-id=92 op=LOAD Jan 14 23:43:47.509000 audit[2569]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2535 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 14 23:43:47.509000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532343932356133643736363631343335626666626130653165653862 Jan 14 23:43:47.510000 audit: BPF prog-id=92 op=UNLOAD Jan 14 23:43:47.510000 audit[2569]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532343932356133643736363631343335626666626130653165653862 Jan 14 23:43:47.510000 audit: BPF prog-id=93 op=LOAD Jan 14 23:43:47.510000 audit[2569]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2535 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532343932356133643736363631343335626666626130653165653862 Jan 14 23:43:47.510000 audit: BPF prog-id=94 op=LOAD Jan 14 23:43:47.510000 audit[2569]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2535 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532343932356133643736363631343335626666626130653165653862 Jan 14 23:43:47.510000 audit: BPF prog-id=94 op=UNLOAD Jan 14 23:43:47.510000 audit[2569]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532343932356133643736363631343335626666626130653165653862 Jan 14 23:43:47.510000 audit: BPF prog-id=93 op=UNLOAD Jan 14 23:43:47.510000 audit[2569]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.510000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532343932356133643736363631343335626666626130653165653862 Jan 14 23:43:47.510000 audit: BPF prog-id=95 op=LOAD Jan 14 23:43:47.510000 audit[2569]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2535 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.510000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532343932356133643736363631343335626666626130653165653862 Jan 14 23:43:47.526633 kubelet[2472]: E0114 23:43:47.526187 2472 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://49.13.216.16:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-abf6d467b1?timeout=10s\": dial tcp 49.13.216.16:6443: connect: connection refused" interval="800ms" Jan 14 23:43:47.552865 containerd[1607]: time="2026-01-14T23:43:47.552680388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515-1-0-n-abf6d467b1,Uid:feb4cdd236a024cac8c8a6c3cf81e26a,Namespace:kube-system,Attempt:0,} returns sandbox id \"524925a3d76661435bffba0e1ee8b88f373ba8384c5503822e51879dc8361c6c\"" Jan 14 23:43:47.558253 containerd[1607]: time="2026-01-14T23:43:47.558210353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515-1-0-n-abf6d467b1,Uid:812228c8ab7c5e39f6ebefdcfa8b1830,Namespace:kube-system,Attempt:0,} returns sandbox id \"68a346027e2efd09f6779aa94681d51500f86b92401e5a405c67f4e1754d7d04\"" Jan 14 23:43:47.560450 containerd[1607]: time="2026-01-14T23:43:47.560391217Z" level=info msg="CreateContainer within sandbox \"524925a3d76661435bffba0e1ee8b88f373ba8384c5503822e51879dc8361c6c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 23:43:47.561131 containerd[1607]: time="2026-01-14T23:43:47.561094931Z" level=info msg="CreateContainer within sandbox \"68a346027e2efd09f6779aa94681d51500f86b92401e5a405c67f4e1754d7d04\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 23:43:47.572430 containerd[1607]: time="2026-01-14T23:43:47.572280847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515-1-0-n-abf6d467b1,Uid:0b06a2fb34ffa97b9dd40565c25d11ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2df0b93c21131996570650cf6faec09749b0b5c4c4e414321b117d66ea5c144\"" Jan 14 23:43:47.575365 containerd[1607]: time="2026-01-14T23:43:47.575308979Z" level=info msg="CreateContainer within sandbox \"a2df0b93c21131996570650cf6faec09749b0b5c4c4e414321b117d66ea5c144\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 23:43:47.577649 containerd[1607]: time="2026-01-14T23:43:47.576657668Z" level=info msg="Container 196821eaca69ad819f898a0d71f8e94824660efe6af136fa62f3bbd44172edbb: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:43:47.579515 containerd[1607]: time="2026-01-14T23:43:47.579454955Z" level=info msg="Container b2a31f4cd7df695e4b071083fccbdaffa502ea740e75b2a000cda5ffe7be67a1: CDI devices from CRI Config.CDIDevices: []" Jan 14 
23:43:47.591420 containerd[1607]: time="2026-01-14T23:43:47.591383079Z" level=info msg="CreateContainer within sandbox \"68a346027e2efd09f6779aa94681d51500f86b92401e5a405c67f4e1754d7d04\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"196821eaca69ad819f898a0d71f8e94824660efe6af136fa62f3bbd44172edbb\"" Jan 14 23:43:47.592571 containerd[1607]: time="2026-01-14T23:43:47.592537901Z" level=info msg="StartContainer for \"196821eaca69ad819f898a0d71f8e94824660efe6af136fa62f3bbd44172edbb\"" Jan 14 23:43:47.593762 containerd[1607]: time="2026-01-14T23:43:47.593715842Z" level=info msg="connecting to shim 196821eaca69ad819f898a0d71f8e94824660efe6af136fa62f3bbd44172edbb" address="unix:///run/containerd/s/1a6009986cf38bfc4108b968607fbf82fbdff7867d8fdb4643e4eeaa2f7d48cd" protocol=ttrpc version=3 Jan 14 23:43:47.598542 containerd[1607]: time="2026-01-14T23:43:47.598381808Z" level=info msg="Container 02969c8f03b32a51581a5f052086f2612928b999be2ede607047b9a899f0bc5e: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:43:47.599260 containerd[1607]: time="2026-01-14T23:43:47.598529635Z" level=info msg="CreateContainer within sandbox \"524925a3d76661435bffba0e1ee8b88f373ba8384c5503822e51879dc8361c6c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b2a31f4cd7df695e4b071083fccbdaffa502ea740e75b2a000cda5ffe7be67a1\"" Jan 14 23:43:47.600550 containerd[1607]: time="2026-01-14T23:43:47.600000635Z" level=info msg="StartContainer for \"b2a31f4cd7df695e4b071083fccbdaffa502ea740e75b2a000cda5ffe7be67a1\"" Jan 14 23:43:47.601834 containerd[1607]: time="2026-01-14T23:43:47.601798195Z" level=info msg="connecting to shim b2a31f4cd7df695e4b071083fccbdaffa502ea740e75b2a000cda5ffe7be67a1" address="unix:///run/containerd/s/8798a9e4cbcc03621928ef5296067b0ab949a73a54ffc1392dd51ea2f79da8cf" protocol=ttrpc version=3 Jan 14 23:43:47.614036 containerd[1607]: time="2026-01-14T23:43:47.613646935Z" level=info msg="CreateContainer within sandbox \"a2df0b93c21131996570650cf6faec09749b0b5c4c4e414321b117d66ea5c144\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"02969c8f03b32a51581a5f052086f2612928b999be2ede607047b9a899f0bc5e\"" Jan 14 23:43:47.614162 containerd[1607]: time="2026-01-14T23:43:47.614137454Z" level=info msg="StartContainer for \"02969c8f03b32a51581a5f052086f2612928b999be2ede607047b9a899f0bc5e\"" Jan 14 23:43:47.615389 containerd[1607]: time="2026-01-14T23:43:47.615354170Z" level=info msg="connecting to shim 02969c8f03b32a51581a5f052086f2612928b999be2ede607047b9a899f0bc5e" address="unix:///run/containerd/s/2cbf274969b8dad80fb81ac5da2aa9c379efe68e910e7224ef55cfe467873409" protocol=ttrpc version=3 Jan 14 23:43:47.615855 systemd[1]: Started cri-containerd-196821eaca69ad819f898a0d71f8e94824660efe6af136fa62f3bbd44172edbb.scope - libcontainer container 196821eaca69ad819f898a0d71f8e94824660efe6af136fa62f3bbd44172edbb. 
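The bursts of "BPF prog-id=N op=LOAD/UNLOAD" records that follow are most likely runc preparing each container's cgroup (on a cgroup v2 host the device allowlist is enforced with small eBPF programs): syscall 280 on AArch64 (arch=c00000b7) is bpf(), and each UNLOAD fires when the returned program fd is closed again via syscall 57, close(); note the close argument a0=0x15/0x17 matching the fd in the preceding LOAD's exit=21/23. A small helper for annotating these SYSCALL records; the syscall table covers only the numbers that appear in this log:

    # Annotate an audit SYSCALL record with the AArch64 syscall name (arch=c00000b7).
    # Only the numbers that occur in this log are mapped; extend as needed.
    AARCH64_SYSCALLS = {57: "close", 211: "sendmsg", 280: "bpf"}

    def annotate(record: str) -> str:
        fields = dict(tok.split("=", 1) for tok in record.split() if "=" in tok)
        nr = int(fields.get("syscall", -1))
        name = AARCH64_SYSCALLS.get(nr, f"syscall {nr}")
        return f'{name}() by {fields.get("comm", "?")} -> exit={fields.get("exit", "?")}'

    print(annotate('arch=c00000b7 syscall=280 success=yes exit=21 comm="runc" exe="/usr/bin/runc"'))
    # prints: bpf() by "runc" -> exit=21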
Jan 14 23:43:47.638000 audit: BPF prog-id=96 op=LOAD Jan 14 23:43:47.640000 audit: BPF prog-id=97 op=LOAD Jan 14 23:43:47.640000 audit[2639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2549 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139363832316561636136396164383139663839386130643731663865 Jan 14 23:43:47.640000 audit: BPF prog-id=97 op=UNLOAD Jan 14 23:43:47.640000 audit[2639]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2549 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139363832316561636136396164383139663839386130643731663865 Jan 14 23:43:47.640000 audit: BPF prog-id=98 op=LOAD Jan 14 23:43:47.640000 audit[2639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2549 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139363832316561636136396164383139663839386130643731663865 Jan 14 23:43:47.640000 audit: BPF prog-id=99 op=LOAD Jan 14 23:43:47.640000 audit[2639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2549 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139363832316561636136396164383139663839386130643731663865 Jan 14 23:43:47.640000 audit: BPF prog-id=99 op=UNLOAD Jan 14 23:43:47.640000 audit[2639]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2549 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139363832316561636136396164383139663839386130643731663865 Jan 14 23:43:47.640000 audit: BPF prog-id=98 op=UNLOAD Jan 14 23:43:47.640000 audit[2639]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2549 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.640000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139363832316561636136396164383139663839386130643731663865 Jan 14 23:43:47.641000 audit: BPF prog-id=100 op=LOAD Jan 14 23:43:47.641000 audit[2639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2549 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.641000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3139363832316561636136396164383139663839386130643731663865 Jan 14 23:43:47.648063 systemd[1]: Started cri-containerd-02969c8f03b32a51581a5f052086f2612928b999be2ede607047b9a899f0bc5e.scope - libcontainer container 02969c8f03b32a51581a5f052086f2612928b999be2ede607047b9a899f0bc5e. Jan 14 23:43:47.650348 systemd[1]: Started cri-containerd-b2a31f4cd7df695e4b071083fccbdaffa502ea740e75b2a000cda5ffe7be67a1.scope - libcontainer container b2a31f4cd7df695e4b071083fccbdaffa502ea740e75b2a000cda5ffe7be67a1. Jan 14 23:43:47.679000 audit: BPF prog-id=101 op=LOAD Jan 14 23:43:47.680000 audit: BPF prog-id=102 op=LOAD Jan 14 23:43:47.680000 audit[2659]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=2512 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032393639633866303362333261353135383161356630353230383666 Jan 14 23:43:47.680000 audit: BPF prog-id=102 op=UNLOAD Jan 14 23:43:47.680000 audit[2659]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.680000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032393639633866303362333261353135383161356630353230383666 Jan 14 23:43:47.680000 audit: BPF prog-id=103 op=LOAD Jan 14 23:43:47.680000 audit[2659]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=2512 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.680000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032393639633866303362333261353135383161356630353230383666 Jan 14 23:43:47.681000 audit: BPF prog-id=104 op=LOAD Jan 14 23:43:47.681000 audit[2659]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=2512 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032393639633866303362333261353135383161356630353230383666 Jan 14 23:43:47.681000 audit: BPF prog-id=104 op=UNLOAD Jan 14 23:43:47.681000 audit[2659]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032393639633866303362333261353135383161356630353230383666 Jan 14 23:43:47.681000 audit: BPF prog-id=103 op=UNLOAD Jan 14 23:43:47.681000 audit[2659]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2512 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032393639633866303362333261353135383161356630353230383666 Jan 14 23:43:47.681000 audit: BPF prog-id=105 op=LOAD Jan 14 23:43:47.681000 audit[2659]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=2512 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.681000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032393639633866303362333261353135383161356630353230383666 Jan 14 23:43:47.684000 audit: BPF prog-id=106 op=LOAD Jan 14 23:43:47.685658 containerd[1607]: time="2026-01-14T23:43:47.685382511Z" level=info msg="StartContainer for \"196821eaca69ad819f898a0d71f8e94824660efe6af136fa62f3bbd44172edbb\" returns successfully" Jan 14 23:43:47.685000 audit: BPF prog-id=107 op=LOAD Jan 14 23:43:47.685000 audit[2651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2535 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232613331663463643764663639356534623037313038336663636264 Jan 14 23:43:47.685000 audit: BPF prog-id=107 op=UNLOAD Jan 14 23:43:47.685000 audit[2651]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232613331663463643764663639356534623037313038336663636264 Jan 14 23:43:47.685000 audit: BPF prog-id=108 op=LOAD Jan 14 23:43:47.685000 audit[2651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2535 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232613331663463643764663639356534623037313038336663636264 Jan 14 23:43:47.686000 audit: BPF prog-id=109 op=LOAD Jan 14 23:43:47.686000 audit[2651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2535 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232613331663463643764663639356534623037313038336663636264 Jan 14 23:43:47.686000 audit: BPF prog-id=109 op=UNLOAD Jan 14 23:43:47.686000 audit[2651]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232613331663463643764663639356534623037313038336663636264 Jan 14 23:43:47.686000 audit: BPF prog-id=108 op=UNLOAD Jan 14 23:43:47.686000 audit[2651]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2535 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.686000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232613331663463643764663639356534623037313038336663636264 Jan 14 23:43:47.686000 audit: BPF prog-id=110 op=LOAD Jan 14 23:43:47.686000 audit[2651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2535 pid=2651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:43:47.686000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232613331663463643764663639356534623037313038336663636264 Jan 14 23:43:47.720951 kubelet[2472]: I0114 23:43:47.720811 2472 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.721474 kubelet[2472]: E0114 23:43:47.721444 2472 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://49.13.216.16:6443/api/v1/nodes\": dial tcp 49.13.216.16:6443: connect: connection refused" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.748028 containerd[1607]: time="2026-01-14T23:43:47.747947040Z" level=info msg="StartContainer for \"b2a31f4cd7df695e4b071083fccbdaffa502ea740e75b2a000cda5ffe7be67a1\" returns successfully" Jan 14 23:43:47.752825 containerd[1607]: time="2026-01-14T23:43:47.752774569Z" level=info msg="StartContainer for \"02969c8f03b32a51581a5f052086f2612928b999be2ede607047b9a899f0bc5e\" returns successfully" Jan 14 23:43:47.821207 kubelet[2472]: W0114 23:43:47.821145 2472 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://49.13.216.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 49.13.216.16:6443: connect: connection refused Jan 14 23:43:47.821320 kubelet[2472]: E0114 23:43:47.821231 2472 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://49.13.216.16:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 49.13.216.16:6443: connect: connection refused" logger="UnhandledError" Jan 14 23:43:47.963076 kubelet[2472]: E0114 23:43:47.962972 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-abf6d467b1\" not found" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.968617 kubelet[2472]: E0114 23:43:47.968251 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-abf6d467b1\" not found" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:47.972232 kubelet[2472]: E0114 23:43:47.972208 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-abf6d467b1\" not found" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:48.525355 kubelet[2472]: I0114 23:43:48.525315 2472 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:48.974404 kubelet[2472]: E0114 23:43:48.974120 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4515-1-0-n-abf6d467b1\" not found" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:48.974649 kubelet[2472]: E0114 23:43:48.974575 2472 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-abf6d467b1\" not found" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:49.798625 kubelet[2472]: E0114 23:43:49.797850 2472 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515-1-0-n-abf6d467b1\" not found" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:49.859131 kubelet[2472]: I0114 23:43:49.859041 2472 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:49.859131 kubelet[2472]: E0114 23:43:49.859091 2472 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-abf6d467b1\": node \"ci-4515-1-0-n-abf6d467b1\" not found" Jan 14 23:43:49.902447 kubelet[2472]: I0114 23:43:49.901965 2472 apiserver.go:52] "Watching apiserver" Jan 14 23:43:49.915436 kubelet[2472]: I0114 23:43:49.915388 2472 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:49.918600 kubelet[2472]: I0114 23:43:49.918534 2472 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 23:43:49.930414 kubelet[2472]: E0114 23:43:49.930347 2472 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515-1-0-n-abf6d467b1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:49.930414 kubelet[2472]: I0114 23:43:49.930400 2472 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:49.933606 kubelet[2472]: E0114 23:43:49.933468 2472 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515-1-0-n-abf6d467b1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:49.933606 kubelet[2472]: I0114 23:43:49.933502 2472 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:49.935898 kubelet[2472]: E0114 23:43:49.935862 2472 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-n-abf6d467b1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:50.256551 kubelet[2472]: I0114 23:43:50.255681 2472 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:50.260389 kubelet[2472]: E0114 23:43:50.260353 2472 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515-1-0-n-abf6d467b1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:51.586212 systemd[1]: Reload requested from client PID 2739 ('systemctl') (unit session-7.scope)... Jan 14 23:43:51.586228 systemd[1]: Reloading... Jan 14 23:43:51.696638 zram_generator::config[2785]: No configuration found. Jan 14 23:43:51.938541 systemd[1]: Reloading finished in 351 ms. Jan 14 23:43:51.965078 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 23:43:51.982153 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 23:43:51.982654 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:43:51.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:51.986364 kernel: kauditd_printk_skb: 204 callbacks suppressed Jan 14 23:43:51.986467 kernel: audit: type=1131 audit(1768434231.982:392): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:51.986411 systemd[1]: kubelet.service: Consumed 1.175s CPU time, 125.9M memory peak. Jan 14 23:43:51.990289 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:43:51.990000 audit: BPF prog-id=111 op=LOAD Jan 14 23:43:51.992618 kernel: audit: type=1334 audit(1768434231.990:393): prog-id=111 op=LOAD Jan 14 23:43:51.990000 audit: BPF prog-id=69 op=UNLOAD Jan 14 23:43:51.990000 audit: BPF prog-id=112 op=LOAD Jan 14 23:43:51.994617 kernel: audit: type=1334 audit(1768434231.990:394): prog-id=69 op=UNLOAD Jan 14 23:43:51.994697 kernel: audit: type=1334 audit(1768434231.990:395): prog-id=112 op=LOAD Jan 14 23:43:51.994719 kernel: audit: type=1334 audit(1768434231.990:396): prog-id=113 op=LOAD Jan 14 23:43:51.990000 audit: BPF prog-id=113 op=LOAD Jan 14 23:43:51.990000 audit: BPF prog-id=70 op=UNLOAD Jan 14 23:43:51.990000 audit: BPF prog-id=71 op=UNLOAD Jan 14 23:43:51.995987 kernel: audit: type=1334 audit(1768434231.990:397): prog-id=70 op=UNLOAD Jan 14 23:43:51.996279 kernel: audit: type=1334 audit(1768434231.990:398): prog-id=71 op=UNLOAD Jan 14 23:43:51.991000 audit: BPF prog-id=114 op=LOAD Jan 14 23:43:51.997794 kernel: audit: type=1334 audit(1768434231.991:399): prog-id=114 op=LOAD Jan 14 23:43:51.997846 kernel: audit: type=1334 audit(1768434231.991:400): prog-id=78 op=UNLOAD Jan 14 23:43:51.991000 audit: BPF prog-id=78 op=UNLOAD Jan 14 23:43:51.992000 audit: BPF prog-id=115 op=LOAD Jan 14 23:43:51.998743 kernel: audit: type=1334 audit(1768434231.992:401): prog-id=115 op=LOAD Jan 14 23:43:51.992000 audit: BPF prog-id=116 op=LOAD Jan 14 23:43:51.992000 audit: BPF prog-id=79 op=UNLOAD Jan 14 23:43:51.992000 audit: BPF prog-id=80 op=UNLOAD Jan 14 23:43:51.994000 audit: BPF prog-id=117 op=LOAD Jan 14 23:43:51.994000 audit: BPF prog-id=61 op=UNLOAD Jan 14 23:43:51.994000 audit: BPF prog-id=118 op=LOAD Jan 14 23:43:51.994000 audit: BPF prog-id=75 op=UNLOAD Jan 14 23:43:51.995000 audit: BPF prog-id=119 op=LOAD Jan 14 23:43:51.995000 audit: BPF prog-id=120 op=LOAD Jan 14 23:43:51.995000 audit: BPF prog-id=76 op=UNLOAD Jan 14 23:43:51.995000 audit: BPF prog-id=77 op=UNLOAD Jan 14 23:43:51.995000 audit: BPF prog-id=121 op=LOAD Jan 14 23:43:52.000000 audit: BPF prog-id=66 op=UNLOAD Jan 14 23:43:52.001000 audit: BPF prog-id=122 op=LOAD Jan 14 23:43:52.001000 audit: BPF prog-id=123 op=LOAD Jan 14 23:43:52.001000 audit: BPF prog-id=67 op=UNLOAD Jan 14 23:43:52.001000 audit: BPF prog-id=68 op=UNLOAD Jan 14 23:43:52.002000 audit: BPF prog-id=124 op=LOAD Jan 14 23:43:52.002000 audit: BPF prog-id=72 op=UNLOAD Jan 14 23:43:52.003000 audit: BPF prog-id=125 op=LOAD Jan 14 23:43:52.003000 audit: BPF prog-id=126 op=LOAD Jan 14 23:43:52.003000 audit: BPF prog-id=73 op=UNLOAD Jan 14 23:43:52.003000 audit: BPF prog-id=74 op=UNLOAD Jan 14 
23:43:52.003000 audit: BPF prog-id=127 op=LOAD Jan 14 23:43:52.003000 audit: BPF prog-id=65 op=UNLOAD Jan 14 23:43:52.006000 audit: BPF prog-id=128 op=LOAD Jan 14 23:43:52.006000 audit: BPF prog-id=62 op=UNLOAD Jan 14 23:43:52.006000 audit: BPF prog-id=129 op=LOAD Jan 14 23:43:52.006000 audit: BPF prog-id=130 op=LOAD Jan 14 23:43:52.006000 audit: BPF prog-id=63 op=UNLOAD Jan 14 23:43:52.006000 audit: BPF prog-id=64 op=UNLOAD Jan 14 23:43:52.161387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:43:52.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:43:52.171959 (kubelet)[2831]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 23:43:52.236161 kubelet[2831]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 23:43:52.236161 kubelet[2831]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 23:43:52.236161 kubelet[2831]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 23:43:52.236161 kubelet[2831]: I0114 23:43:52.235504 2831 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 23:43:52.247719 kubelet[2831]: I0114 23:43:52.247558 2831 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 23:43:52.247719 kubelet[2831]: I0114 23:43:52.247642 2831 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 23:43:52.248028 kubelet[2831]: I0114 23:43:52.247993 2831 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 23:43:52.249573 kubelet[2831]: I0114 23:43:52.249544 2831 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 14 23:43:52.252076 kubelet[2831]: I0114 23:43:52.252019 2831 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 23:43:52.257810 kubelet[2831]: I0114 23:43:52.257776 2831 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 23:43:52.262283 kubelet[2831]: I0114 23:43:52.261870 2831 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 23:43:52.262283 kubelet[2831]: I0114 23:43:52.262075 2831 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 23:43:52.262516 kubelet[2831]: I0114 23:43:52.262096 2831 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515-1-0-n-abf6d467b1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 23:43:52.262674 kubelet[2831]: I0114 23:43:52.262658 2831 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 23:43:52.262725 kubelet[2831]: I0114 23:43:52.262717 2831 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 23:43:52.262885 kubelet[2831]: I0114 23:43:52.262869 2831 state_mem.go:36] "Initialized new in-memory state store" Jan 14 23:43:52.263097 kubelet[2831]: I0114 23:43:52.263082 2831 kubelet.go:446] "Attempting to sync node with API server" Jan 14 23:43:52.263171 kubelet[2831]: I0114 23:43:52.263161 2831 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 23:43:52.263238 kubelet[2831]: I0114 23:43:52.263229 2831 kubelet.go:352] "Adding apiserver pod source" Jan 14 23:43:52.263291 kubelet[2831]: I0114 23:43:52.263282 2831 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 23:43:52.270319 kubelet[2831]: I0114 23:43:52.270300 2831 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 23:43:52.271127 kubelet[2831]: I0114 23:43:52.271098 2831 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 23:43:52.272493 kubelet[2831]: I0114 23:43:52.272045 2831 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 23:43:52.272493 kubelet[2831]: I0114 23:43:52.272080 2831 server.go:1287] "Started kubelet" Jan 14 23:43:52.272738 kubelet[2831]: I0114 23:43:52.272695 2831 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 23:43:52.273022 kubelet[2831]: I0114 
23:43:52.273003 2831 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 23:43:52.273160 kubelet[2831]: I0114 23:43:52.273134 2831 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 23:43:52.275544 kubelet[2831]: I0114 23:43:52.275521 2831 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 23:43:52.283827 kubelet[2831]: I0114 23:43:52.283788 2831 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 23:43:52.287397 kubelet[2831]: I0114 23:43:52.276377 2831 server.go:479] "Adding debug handlers to kubelet server" Jan 14 23:43:52.288402 kubelet[2831]: I0114 23:43:52.288379 2831 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 23:43:52.289727 kubelet[2831]: E0114 23:43:52.289703 2831 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515-1-0-n-abf6d467b1\" not found" Jan 14 23:43:52.290279 kubelet[2831]: I0114 23:43:52.290261 2831 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 23:43:52.291639 kubelet[2831]: I0114 23:43:52.291570 2831 reconciler.go:26] "Reconciler: start to sync state" Jan 14 23:43:52.297914 kubelet[2831]: I0114 23:43:52.297866 2831 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 23:43:52.299002 kubelet[2831]: I0114 23:43:52.298979 2831 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 23:43:52.299107 kubelet[2831]: I0114 23:43:52.299097 2831 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 23:43:52.299168 kubelet[2831]: I0114 23:43:52.299159 2831 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 23:43:52.299216 kubelet[2831]: I0114 23:43:52.299208 2831 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 23:43:52.299302 kubelet[2831]: E0114 23:43:52.299286 2831 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 23:43:52.305572 kubelet[2831]: I0114 23:43:52.305539 2831 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 23:43:52.306827 kubelet[2831]: E0114 23:43:52.306805 2831 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 23:43:52.311449 kubelet[2831]: I0114 23:43:52.311422 2831 factory.go:221] Registration of the containerd container factory successfully Jan 14 23:43:52.311642 kubelet[2831]: I0114 23:43:52.311561 2831 factory.go:221] Registration of the systemd container factory successfully Jan 14 23:43:52.359635 kubelet[2831]: I0114 23:43:52.359614 2831 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 23:43:52.359792 kubelet[2831]: I0114 23:43:52.359777 2831 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 23:43:52.359881 kubelet[2831]: I0114 23:43:52.359872 2831 state_mem.go:36] "Initialized new in-memory state store" Jan 14 23:43:52.360092 kubelet[2831]: I0114 23:43:52.360076 2831 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 23:43:52.360608 kubelet[2831]: I0114 23:43:52.360147 2831 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 23:43:52.360608 kubelet[2831]: I0114 23:43:52.360173 2831 policy_none.go:49] "None policy: Start" Jan 14 23:43:52.360608 kubelet[2831]: I0114 23:43:52.360184 2831 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 23:43:52.360608 kubelet[2831]: I0114 23:43:52.360195 2831 state_mem.go:35] "Initializing new in-memory state store" Jan 14 23:43:52.360882 kubelet[2831]: I0114 23:43:52.360863 2831 state_mem.go:75] "Updated machine memory state" Jan 14 23:43:52.365988 kubelet[2831]: I0114 23:43:52.365410 2831 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 23:43:52.366601 kubelet[2831]: I0114 23:43:52.366570 2831 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 23:43:52.366724 kubelet[2831]: I0114 23:43:52.366690 2831 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 23:43:52.367619 kubelet[2831]: I0114 23:43:52.367231 2831 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 23:43:52.369094 kubelet[2831]: E0114 23:43:52.369066 2831 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 14 23:43:52.400646 kubelet[2831]: I0114 23:43:52.400426 2831 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:52.401196 kubelet[2831]: I0114 23:43:52.401166 2831 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:52.402016 kubelet[2831]: I0114 23:43:52.401983 2831 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:52.471282 kubelet[2831]: I0114 23:43:52.471226 2831 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:52.482604 kubelet[2831]: I0114 23:43:52.482541 2831 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:52.482878 kubelet[2831]: I0114 23:43:52.482730 2831 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:52.494959 kubelet[2831]: I0114 23:43:52.493045 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/812228c8ab7c5e39f6ebefdcfa8b1830-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515-1-0-n-abf6d467b1\" (UID: \"812228c8ab7c5e39f6ebefdcfa8b1830\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:52.494959 kubelet[2831]: I0114 23:43:52.493084 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/feb4cdd236a024cac8c8a6c3cf81e26a-ca-certs\") pod \"kube-apiserver-ci-4515-1-0-n-abf6d467b1\" (UID: \"feb4cdd236a024cac8c8a6c3cf81e26a\") " pod="kube-system/kube-apiserver-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:52.494959 kubelet[2831]: I0114 23:43:52.493102 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/812228c8ab7c5e39f6ebefdcfa8b1830-ca-certs\") pod \"kube-controller-manager-ci-4515-1-0-n-abf6d467b1\" (UID: \"812228c8ab7c5e39f6ebefdcfa8b1830\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:52.494959 kubelet[2831]: I0114 23:43:52.493120 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/812228c8ab7c5e39f6ebefdcfa8b1830-flexvolume-dir\") pod \"kube-controller-manager-ci-4515-1-0-n-abf6d467b1\" (UID: \"812228c8ab7c5e39f6ebefdcfa8b1830\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:52.494959 kubelet[2831]: I0114 23:43:52.493137 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/812228c8ab7c5e39f6ebefdcfa8b1830-k8s-certs\") pod \"kube-controller-manager-ci-4515-1-0-n-abf6d467b1\" (UID: \"812228c8ab7c5e39f6ebefdcfa8b1830\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:52.495167 kubelet[2831]: I0114 23:43:52.493152 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/812228c8ab7c5e39f6ebefdcfa8b1830-kubeconfig\") pod \"kube-controller-manager-ci-4515-1-0-n-abf6d467b1\" (UID: 
\"812228c8ab7c5e39f6ebefdcfa8b1830\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:52.495167 kubelet[2831]: I0114 23:43:52.493167 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b06a2fb34ffa97b9dd40565c25d11ab-kubeconfig\") pod \"kube-scheduler-ci-4515-1-0-n-abf6d467b1\" (UID: \"0b06a2fb34ffa97b9dd40565c25d11ab\") " pod="kube-system/kube-scheduler-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:52.495167 kubelet[2831]: I0114 23:43:52.493181 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/feb4cdd236a024cac8c8a6c3cf81e26a-k8s-certs\") pod \"kube-apiserver-ci-4515-1-0-n-abf6d467b1\" (UID: \"feb4cdd236a024cac8c8a6c3cf81e26a\") " pod="kube-system/kube-apiserver-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:52.495167 kubelet[2831]: I0114 23:43:52.493195 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/feb4cdd236a024cac8c8a6c3cf81e26a-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515-1-0-n-abf6d467b1\" (UID: \"feb4cdd236a024cac8c8a6c3cf81e26a\") " pod="kube-system/kube-apiserver-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:53.265552 kubelet[2831]: I0114 23:43:53.265494 2831 apiserver.go:52] "Watching apiserver" Jan 14 23:43:53.292247 kubelet[2831]: I0114 23:43:53.292200 2831 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 23:43:53.345144 kubelet[2831]: I0114 23:43:53.345101 2831 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:53.346130 kubelet[2831]: I0114 23:43:53.346102 2831 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:53.357864 kubelet[2831]: E0114 23:43:53.357808 2831 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515-1-0-n-abf6d467b1\" already exists" pod="kube-system/kube-scheduler-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:53.359217 kubelet[2831]: E0114 23:43:53.359193 2831 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-n-abf6d467b1\" already exists" pod="kube-system/kube-apiserver-ci-4515-1-0-n-abf6d467b1" Jan 14 23:43:53.392476 kubelet[2831]: I0114 23:43:53.392406 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-abf6d467b1" podStartSLOduration=1.392388474 podStartE2EDuration="1.392388474s" podCreationTimestamp="2026-01-14 23:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:43:53.377397664 +0000 UTC m=+1.201643944" watchObservedRunningTime="2026-01-14 23:43:53.392388474 +0000 UTC m=+1.216634754" Jan 14 23:43:53.392662 kubelet[2831]: I0114 23:43:53.392548 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515-1-0-n-abf6d467b1" podStartSLOduration=1.3925428819999999 podStartE2EDuration="1.392542882s" podCreationTimestamp="2026-01-14 23:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:43:53.391101767 +0000 UTC 
m=+1.215348047" watchObservedRunningTime="2026-01-14 23:43:53.392542882 +0000 UTC m=+1.216789162" Jan 14 23:43:53.415622 kubelet[2831]: I0114 23:43:53.415340 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515-1-0-n-abf6d467b1" podStartSLOduration=1.415295024 podStartE2EDuration="1.415295024s" podCreationTimestamp="2026-01-14 23:43:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:43:53.405249908 +0000 UTC m=+1.229496188" watchObservedRunningTime="2026-01-14 23:43:53.415295024 +0000 UTC m=+1.239541304" Jan 14 23:43:58.657377 kubelet[2831]: I0114 23:43:58.657253 2831 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 23:43:58.658660 containerd[1607]: time="2026-01-14T23:43:58.658576362Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 23:43:58.660485 kubelet[2831]: I0114 23:43:58.659505 2831 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 23:43:59.403494 systemd[1]: Created slice kubepods-besteffort-pod6f745017_edb8_4cf8_98fa_0f5d2dc25f6f.slice - libcontainer container kubepods-besteffort-pod6f745017_edb8_4cf8_98fa_0f5d2dc25f6f.slice. Jan 14 23:43:59.439116 kubelet[2831]: I0114 23:43:59.438934 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f745017-edb8-4cf8-98fa-0f5d2dc25f6f-lib-modules\") pod \"kube-proxy-vffj4\" (UID: \"6f745017-edb8-4cf8-98fa-0f5d2dc25f6f\") " pod="kube-system/kube-proxy-vffj4" Jan 14 23:43:59.439116 kubelet[2831]: I0114 23:43:59.438991 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glcmb\" (UniqueName: \"kubernetes.io/projected/6f745017-edb8-4cf8-98fa-0f5d2dc25f6f-kube-api-access-glcmb\") pod \"kube-proxy-vffj4\" (UID: \"6f745017-edb8-4cf8-98fa-0f5d2dc25f6f\") " pod="kube-system/kube-proxy-vffj4" Jan 14 23:43:59.439116 kubelet[2831]: I0114 23:43:59.439024 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6f745017-edb8-4cf8-98fa-0f5d2dc25f6f-kube-proxy\") pod \"kube-proxy-vffj4\" (UID: \"6f745017-edb8-4cf8-98fa-0f5d2dc25f6f\") " pod="kube-system/kube-proxy-vffj4" Jan 14 23:43:59.439116 kubelet[2831]: I0114 23:43:59.439041 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f745017-edb8-4cf8-98fa-0f5d2dc25f6f-xtables-lock\") pod \"kube-proxy-vffj4\" (UID: \"6f745017-edb8-4cf8-98fa-0f5d2dc25f6f\") " pod="kube-system/kube-proxy-vffj4" Jan 14 23:43:59.549369 kubelet[2831]: E0114 23:43:59.549324 2831 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 14 23:43:59.549369 kubelet[2831]: E0114 23:43:59.549364 2831 projected.go:194] Error preparing data for projected volume kube-api-access-glcmb for pod kube-system/kube-proxy-vffj4: configmap "kube-root-ca.crt" not found Jan 14 23:43:59.549628 kubelet[2831]: E0114 23:43:59.549449 2831 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f745017-edb8-4cf8-98fa-0f5d2dc25f6f-kube-api-access-glcmb 
podName:6f745017-edb8-4cf8-98fa-0f5d2dc25f6f nodeName:}" failed. No retries permitted until 2026-01-14 23:44:00.049408219 +0000 UTC m=+7.873654499 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-glcmb" (UniqueName: "kubernetes.io/projected/6f745017-edb8-4cf8-98fa-0f5d2dc25f6f-kube-api-access-glcmb") pod "kube-proxy-vffj4" (UID: "6f745017-edb8-4cf8-98fa-0f5d2dc25f6f") : configmap "kube-root-ca.crt" not found Jan 14 23:43:59.775691 kubelet[2831]: I0114 23:43:59.775535 2831 status_manager.go:890] "Failed to get status for pod" podUID="5f454469-e688-4051-bfe1-8e24aba4aafb" pod="tigera-operator/tigera-operator-7dcd859c48-lfmnh" err="pods \"tigera-operator-7dcd859c48-lfmnh\" is forbidden: User \"system:node:ci-4515-1-0-n-abf6d467b1\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4515-1-0-n-abf6d467b1' and this object" Jan 14 23:43:59.776914 systemd[1]: Created slice kubepods-besteffort-pod5f454469_e688_4051_bfe1_8e24aba4aafb.slice - libcontainer container kubepods-besteffort-pod5f454469_e688_4051_bfe1_8e24aba4aafb.slice. Jan 14 23:43:59.842711 kubelet[2831]: I0114 23:43:59.842538 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5f454469-e688-4051-bfe1-8e24aba4aafb-var-lib-calico\") pod \"tigera-operator-7dcd859c48-lfmnh\" (UID: \"5f454469-e688-4051-bfe1-8e24aba4aafb\") " pod="tigera-operator/tigera-operator-7dcd859c48-lfmnh" Jan 14 23:43:59.842711 kubelet[2831]: I0114 23:43:59.842652 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98xjh\" (UniqueName: \"kubernetes.io/projected/5f454469-e688-4051-bfe1-8e24aba4aafb-kube-api-access-98xjh\") pod \"tigera-operator-7dcd859c48-lfmnh\" (UID: \"5f454469-e688-4051-bfe1-8e24aba4aafb\") " pod="tigera-operator/tigera-operator-7dcd859c48-lfmnh" Jan 14 23:44:00.082573 containerd[1607]: time="2026-01-14T23:44:00.082510047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lfmnh,Uid:5f454469-e688-4051-bfe1-8e24aba4aafb,Namespace:tigera-operator,Attempt:0,}" Jan 14 23:44:00.104187 containerd[1607]: time="2026-01-14T23:44:00.104118381Z" level=info msg="connecting to shim 09bbc7c614b4bad748b999139d1bd75c8a9270b96a4644f01813b1282af07872" address="unix:///run/containerd/s/b0e190d94e4ff433c69c720286df0cd90b94d19c2e6f04c071af3d46372abc0d" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:00.136867 systemd[1]: Started cri-containerd-09bbc7c614b4bad748b999139d1bd75c8a9270b96a4644f01813b1282af07872.scope - libcontainer container 09bbc7c614b4bad748b999139d1bd75c8a9270b96a4644f01813b1282af07872. 
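The audit records that follow (the runc invocations for the new sandboxes and the iptables calls kube-proxy makes to create its KUBE-* chains) carry the process command line as a hex-encoded proctitle= field with NUL bytes separating the arguments. A small decoder makes them readable; the sample value is copied from the PROCTITLE record for pid 3031 further down, and the same function applies to the runc records above and below (the kernel truncates very long command lines, so some of those decode with a clipped container ID):

```python
#!/usr/bin/env python3
"""Decode the hex proctitle= fields in the audit records in this section."""


def decode_proctitle(hex_value: str) -> str:
    # The kernel stores argv as one hex blob with NUL bytes between arguments.
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", "replace")


# Copied from the NETFILTER_CFG/PROCTITLE record for pid 3031 below.
sample = (
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
)

print(decode_proctitle(sample))
# -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle
```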
Jan 14 23:44:00.154940 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 23:44:00.155041 kernel: audit: type=1334 audit(1768434240.153:434): prog-id=131 op=LOAD Jan 14 23:44:00.153000 audit: BPF prog-id=131 op=LOAD Jan 14 23:44:00.156159 kernel: audit: type=1334 audit(1768434240.155:435): prog-id=132 op=LOAD Jan 14 23:44:00.155000 audit: BPF prog-id=132 op=LOAD Jan 14 23:44:00.158562 kernel: audit: type=1300 audit(1768434240.155:435): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2881 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.155000 audit[2894]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2881 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626263376336313462346261643734386239393931333964316264 Jan 14 23:44:00.161642 kernel: audit: type=1327 audit(1768434240.155:435): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626263376336313462346261643734386239393931333964316264 Jan 14 23:44:00.162740 kernel: audit: type=1334 audit(1768434240.155:436): prog-id=132 op=UNLOAD Jan 14 23:44:00.155000 audit: BPF prog-id=132 op=UNLOAD Jan 14 23:44:00.165687 kernel: audit: type=1300 audit(1768434240.155:436): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2881 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.155000 audit[2894]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2881 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626263376336313462346261643734386239393931333964316264 Jan 14 23:44:00.168420 kernel: audit: type=1327 audit(1768434240.155:436): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626263376336313462346261643734386239393931333964316264 Jan 14 23:44:00.155000 audit: BPF prog-id=133 op=LOAD Jan 14 23:44:00.155000 audit[2894]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2881 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
23:44:00.173575 kernel: audit: type=1334 audit(1768434240.155:437): prog-id=133 op=LOAD Jan 14 23:44:00.173719 kernel: audit: type=1300 audit(1768434240.155:437): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2881 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.173749 kernel: audit: type=1327 audit(1768434240.155:437): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626263376336313462346261643734386239393931333964316264 Jan 14 23:44:00.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626263376336313462346261643734386239393931333964316264 Jan 14 23:44:00.155000 audit: BPF prog-id=134 op=LOAD Jan 14 23:44:00.155000 audit[2894]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2881 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626263376336313462346261643734386239393931333964316264 Jan 14 23:44:00.155000 audit: BPF prog-id=134 op=UNLOAD Jan 14 23:44:00.155000 audit[2894]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2881 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626263376336313462346261643734386239393931333964316264 Jan 14 23:44:00.155000 audit: BPF prog-id=133 op=UNLOAD Jan 14 23:44:00.155000 audit[2894]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2881 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.155000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626263376336313462346261643734386239393931333964316264 Jan 14 23:44:00.155000 audit: BPF prog-id=135 op=LOAD Jan 14 23:44:00.155000 audit[2894]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2881 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.155000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039626263376336313462346261643734386239393931333964316264 Jan 14 23:44:00.192707 containerd[1607]: time="2026-01-14T23:44:00.192636495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lfmnh,Uid:5f454469-e688-4051-bfe1-8e24aba4aafb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"09bbc7c614b4bad748b999139d1bd75c8a9270b96a4644f01813b1282af07872\"" Jan 14 23:44:00.196384 containerd[1607]: time="2026-01-14T23:44:00.196342400Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 23:44:00.314512 containerd[1607]: time="2026-01-14T23:44:00.314418406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vffj4,Uid:6f745017-edb8-4cf8-98fa-0f5d2dc25f6f,Namespace:kube-system,Attempt:0,}" Jan 14 23:44:00.342691 containerd[1607]: time="2026-01-14T23:44:00.341818181Z" level=info msg="connecting to shim beea1845e0a9adfb333e3d8b41b721d9e965b9cde6073ddb85bc8178cf891b1a" address="unix:///run/containerd/s/e4b0a2ec90554ff65398dba3a662b73d6a279375a46cb951fed886608cdedf28" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:00.372059 systemd[1]: Started cri-containerd-beea1845e0a9adfb333e3d8b41b721d9e965b9cde6073ddb85bc8178cf891b1a.scope - libcontainer container beea1845e0a9adfb333e3d8b41b721d9e965b9cde6073ddb85bc8178cf891b1a. Jan 14 23:44:00.383000 audit: BPF prog-id=136 op=LOAD Jan 14 23:44:00.384000 audit: BPF prog-id=137 op=LOAD Jan 14 23:44:00.384000 audit[2942]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=2931 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656131383435653061396164666233333365336438623431623732 Jan 14 23:44:00.384000 audit: BPF prog-id=137 op=UNLOAD Jan 14 23:44:00.384000 audit[2942]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2931 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656131383435653061396164666233333365336438623431623732 Jan 14 23:44:00.384000 audit: BPF prog-id=138 op=LOAD Jan 14 23:44:00.384000 audit[2942]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=2931 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.384000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656131383435653061396164666233333365336438623431623732 Jan 14 23:44:00.385000 audit: BPF prog-id=139 op=LOAD Jan 14 23:44:00.385000 audit[2942]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=2931 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656131383435653061396164666233333365336438623431623732 Jan 14 23:44:00.385000 audit: BPF prog-id=139 op=UNLOAD Jan 14 23:44:00.385000 audit[2942]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2931 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656131383435653061396164666233333365336438623431623732 Jan 14 23:44:00.385000 audit: BPF prog-id=138 op=UNLOAD Jan 14 23:44:00.385000 audit[2942]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2931 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656131383435653061396164666233333365336438623431623732 Jan 14 23:44:00.385000 audit: BPF prog-id=140 op=LOAD Jan 14 23:44:00.385000 audit[2942]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=2931 pid=2942 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.385000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265656131383435653061396164666233333365336438623431623732 Jan 14 23:44:00.403043 containerd[1607]: time="2026-01-14T23:44:00.403002778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vffj4,Uid:6f745017-edb8-4cf8-98fa-0f5d2dc25f6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"beea1845e0a9adfb333e3d8b41b721d9e965b9cde6073ddb85bc8178cf891b1a\"" Jan 14 23:44:00.407616 containerd[1607]: time="2026-01-14T23:44:00.407555156Z" level=info msg="CreateContainer within sandbox \"beea1845e0a9adfb333e3d8b41b721d9e965b9cde6073ddb85bc8178cf891b1a\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 23:44:00.418386 containerd[1607]: time="2026-01-14T23:44:00.418343339Z" level=info msg="Container 42d24531882aee76750deb8fbbae1253a413326eb4a551afd128bf42f162ed48: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:44:00.426066 containerd[1607]: time="2026-01-14T23:44:00.426024823Z" level=info msg="CreateContainer within sandbox \"beea1845e0a9adfb333e3d8b41b721d9e965b9cde6073ddb85bc8178cf891b1a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"42d24531882aee76750deb8fbbae1253a413326eb4a551afd128bf42f162ed48\"" Jan 14 23:44:00.427926 containerd[1607]: time="2026-01-14T23:44:00.427894460Z" level=info msg="StartContainer for \"42d24531882aee76750deb8fbbae1253a413326eb4a551afd128bf42f162ed48\"" Jan 14 23:44:00.431978 containerd[1607]: time="2026-01-14T23:44:00.430877605Z" level=info msg="connecting to shim 42d24531882aee76750deb8fbbae1253a413326eb4a551afd128bf42f162ed48" address="unix:///run/containerd/s/e4b0a2ec90554ff65398dba3a662b73d6a279375a46cb951fed886608cdedf28" protocol=ttrpc version=3 Jan 14 23:44:00.454800 systemd[1]: Started cri-containerd-42d24531882aee76750deb8fbbae1253a413326eb4a551afd128bf42f162ed48.scope - libcontainer container 42d24531882aee76750deb8fbbae1253a413326eb4a551afd128bf42f162ed48. Jan 14 23:44:00.517000 audit: BPF prog-id=141 op=LOAD Jan 14 23:44:00.517000 audit[2967]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2931 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432643234353331383832616565373637353064656238666262616531 Jan 14 23:44:00.517000 audit: BPF prog-id=142 op=LOAD Jan 14 23:44:00.517000 audit[2967]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2931 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432643234353331383832616565373637353064656238666262616531 Jan 14 23:44:00.517000 audit: BPF prog-id=142 op=UNLOAD Jan 14 23:44:00.517000 audit[2967]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2931 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432643234353331383832616565373637353064656238666262616531 Jan 14 23:44:00.517000 audit: BPF prog-id=141 op=UNLOAD Jan 14 23:44:00.517000 audit[2967]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2931 pid=2967 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432643234353331383832616565373637353064656238666262616531 Jan 14 23:44:00.517000 audit: BPF prog-id=143 op=LOAD Jan 14 23:44:00.517000 audit[2967]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2931 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432643234353331383832616565373637353064656238666262616531 Jan 14 23:44:00.537735 containerd[1607]: time="2026-01-14T23:44:00.537610114Z" level=info msg="StartContainer for \"42d24531882aee76750deb8fbbae1253a413326eb4a551afd128bf42f162ed48\" returns successfully" Jan 14 23:44:00.695000 audit[3031]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.695000 audit[3031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc81ab5d0 a2=0 a3=1 items=0 ppid=2980 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.695000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 23:44:00.695000 audit[3032]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3032 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.695000 audit[3032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffce16ca90 a2=0 a3=1 items=0 ppid=2980 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.695000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 23:44:00.703000 audit[3036]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3036 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.703000 audit[3036]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd191b2c0 a2=0 a3=1 items=0 ppid=2980 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.703000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 23:44:00.705000 audit[3037]: NETFILTER_CFG table=filter:57 family=10 entries=1 op=nft_register_chain pid=3037 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 
14 23:44:00.705000 audit[3037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdb8b9750 a2=0 a3=1 items=0 ppid=2980 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.705000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 23:44:00.706000 audit[3034]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.706000 audit[3034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc41e3340 a2=0 a3=1 items=0 ppid=2980 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.706000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 23:44:00.709000 audit[3038]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.709000 audit[3038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffccde4fb0 a2=0 a3=1 items=0 ppid=2980 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.709000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 23:44:00.801000 audit[3039]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.801000 audit[3039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff9364a80 a2=0 a3=1 items=0 ppid=2980 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.801000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 23:44:00.804000 audit[3041]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.804000 audit[3041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffbe1ce80 a2=0 a3=1 items=0 ppid=2980 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.804000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 23:44:00.808000 audit[3044]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.808000 audit[3044]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=752 a0=3 a1=ffffcf3344e0 a2=0 a3=1 items=0 ppid=2980 pid=3044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.808000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 23:44:00.809000 audit[3045]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.809000 audit[3045]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2836c40 a2=0 a3=1 items=0 ppid=2980 pid=3045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 23:44:00.812000 audit[3047]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.812000 audit[3047]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc3d83630 a2=0 a3=1 items=0 ppid=2980 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.812000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 23:44:00.813000 audit[3048]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.813000 audit[3048]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce453340 a2=0 a3=1 items=0 ppid=2980 pid=3048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.813000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 23:44:00.816000 audit[3050]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.816000 audit[3050]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe2ca2540 a2=0 a3=1 items=0 ppid=2980 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.816000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 23:44:00.821000 
audit[3053]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.821000 audit[3053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcea61060 a2=0 a3=1 items=0 ppid=2980 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.821000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 23:44:00.822000 audit[3054]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.822000 audit[3054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe4d860e0 a2=0 a3=1 items=0 ppid=2980 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.822000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 23:44:00.825000 audit[3056]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.825000 audit[3056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe9158270 a2=0 a3=1 items=0 ppid=2980 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.825000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 23:44:00.826000 audit[3057]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.826000 audit[3057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc68fb790 a2=0 a3=1 items=0 ppid=2980 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.826000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 23:44:00.829000 audit[3059]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.829000 audit[3059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc2f23330 a2=0 a3=1 items=0 ppid=2980 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.829000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 23:44:00.832000 audit[3062]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.832000 audit[3062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffefb12b00 a2=0 a3=1 items=0 ppid=2980 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.832000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 23:44:00.837000 audit[3065]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.837000 audit[3065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff95ac7c0 a2=0 a3=1 items=0 ppid=2980 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.837000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 23:44:00.840000 audit[3066]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.840000 audit[3066]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffe015140 a2=0 a3=1 items=0 ppid=2980 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.840000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 23:44:00.843000 audit[3068]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.843000 audit[3068]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe0f8f720 a2=0 a3=1 items=0 ppid=2980 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.843000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:44:00.847000 audit[3071]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3071 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.847000 audit[3071]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=528 a0=3 a1=fffff8db2420 a2=0 a3=1 items=0 ppid=2980 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.847000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:44:00.849000 audit[3072]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.849000 audit[3072]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef7044f0 a2=0 a3=1 items=0 ppid=2980 pid=3072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.849000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 23:44:00.854000 audit[3074]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:00.854000 audit[3074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffceb2eff0 a2=0 a3=1 items=0 ppid=2980 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.854000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 23:44:00.878000 audit[3080]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3080 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:00.878000 audit[3080]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffd651300 a2=0 a3=1 items=0 ppid=2980 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.878000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:00.892000 audit[3080]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:00.892000 audit[3080]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffffd651300 a2=0 a3=1 items=0 ppid=2980 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.892000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:00.895000 audit[3085]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.895000 audit[3085]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe2324300 a2=0 a3=1 items=0 ppid=2980 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.895000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 23:44:00.898000 audit[3087]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.898000 audit[3087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff74d5d40 a2=0 a3=1 items=0 ppid=2980 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.898000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 23:44:00.902000 audit[3090]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3090 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.902000 audit[3090]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcf900290 a2=0 a3=1 items=0 ppid=2980 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.902000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 23:44:00.903000 audit[3091]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.903000 audit[3091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2b81300 a2=0 a3=1 items=0 ppid=2980 pid=3091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.903000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 23:44:00.906000 audit[3093]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.906000 audit[3093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc47723b0 a2=0 a3=1 items=0 ppid=2980 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.906000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 23:44:00.907000 audit[3094]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.907000 audit[3094]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc86e6de0 a2=0 a3=1 items=0 ppid=2980 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.907000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 23:44:00.910000 audit[3096]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.910000 audit[3096]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc710ce80 a2=0 a3=1 items=0 ppid=2980 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.910000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 23:44:00.913000 audit[3099]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.913000 audit[3099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffeeb8d3e0 a2=0 a3=1 items=0 ppid=2980 pid=3099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.913000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 23:44:00.914000 audit[3100]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3100 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.914000 audit[3100]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff2cfe660 a2=0 a3=1 items=0 ppid=2980 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.914000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 23:44:00.917000 audit[3102]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3102 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.917000 audit[3102]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd1f80e60 a2=0 a3=1 items=0 ppid=2980 pid=3102 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.917000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 23:44:00.919000 audit[3103]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.919000 audit[3103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffffc7bdc0 a2=0 a3=1 items=0 ppid=2980 pid=3103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.919000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 23:44:00.923000 audit[3105]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.923000 audit[3105]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffffd38bb0 a2=0 a3=1 items=0 ppid=2980 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.923000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 23:44:00.926000 audit[3108]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3108 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.926000 audit[3108]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffefb25600 a2=0 a3=1 items=0 ppid=2980 pid=3108 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.926000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 23:44:00.930000 audit[3111]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.930000 audit[3111]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdf744620 a2=0 a3=1 items=0 ppid=2980 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.930000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 23:44:00.932000 audit[3112]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3112 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.932000 audit[3112]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffec02d080 a2=0 a3=1 items=0 ppid=2980 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.932000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 23:44:00.934000 audit[3114]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.934000 audit[3114]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffc8203360 a2=0 a3=1 items=0 ppid=2980 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.934000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:44:00.938000 audit[3117]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3117 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.938000 audit[3117]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd62c72a0 a2=0 a3=1 items=0 ppid=2980 pid=3117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.938000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:44:00.940000 audit[3118]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.940000 audit[3118]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe2160410 a2=0 a3=1 items=0 ppid=2980 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.940000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 23:44:00.942000 audit[3120]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.942000 audit[3120]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff7ad9a10 a2=0 a3=1 items=0 ppid=2980 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.942000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 23:44:00.943000 audit[3121]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.943000 audit[3121]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3a4c0b0 a2=0 a3=1 items=0 ppid=2980 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.943000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 23:44:00.946000 audit[3123]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.946000 audit[3123]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff85f6890 a2=0 a3=1 items=0 ppid=2980 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.946000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 23:44:00.951000 audit[3126]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3126 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:00.951000 audit[3126]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff7d1b500 a2=0 a3=1 items=0 ppid=2980 pid=3126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.951000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 23:44:00.955000 audit[3128]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3128 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 23:44:00.955000 audit[3128]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffe1211f50 a2=0 a3=1 items=0 ppid=2980 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.955000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:00.957000 audit[3128]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3128 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 23:44:00.957000 audit[3128]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffe1211f50 a2=0 a3=1 items=0 ppid=2980 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:00.957000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:02.861837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2579827013.mount: Deactivated successfully. Jan 14 23:44:03.434342 kubelet[2831]: I0114 23:44:03.433720 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vffj4" podStartSLOduration=4.433696664 podStartE2EDuration="4.433696664s" podCreationTimestamp="2026-01-14 23:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:44:01.399864844 +0000 UTC m=+9.224111124" watchObservedRunningTime="2026-01-14 23:44:03.433696664 +0000 UTC m=+11.257942984" Jan 14 23:44:06.669157 containerd[1607]: time="2026-01-14T23:44:06.669079755Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:06.670948 containerd[1607]: time="2026-01-14T23:44:06.670863075Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 14 23:44:06.671839 containerd[1607]: time="2026-01-14T23:44:06.671546732Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:06.674791 containerd[1607]: time="2026-01-14T23:44:06.674737696Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:06.676159 containerd[1607]: time="2026-01-14T23:44:06.676083047Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 6.479537191s" Jan 14 23:44:06.676159 containerd[1607]: time="2026-01-14T23:44:06.676122815Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 14 23:44:06.678935 containerd[1607]: time="2026-01-14T23:44:06.678879490Z" level=info msg="CreateContainer within sandbox \"09bbc7c614b4bad748b999139d1bd75c8a9270b96a4644f01813b1282af07872\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 23:44:06.687325 containerd[1607]: time="2026-01-14T23:44:06.686757838Z" level=info msg="Container b2500fea6778dcb71acf02388fd3279d8f9e313cf37172e1551e0b9279ef8dfc: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:44:06.692198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2744811332.mount: Deactivated successfully. 
Jan 14 23:44:06.704398 containerd[1607]: time="2026-01-14T23:44:06.704350783Z" level=info msg="CreateContainer within sandbox \"09bbc7c614b4bad748b999139d1bd75c8a9270b96a4644f01813b1282af07872\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b2500fea6778dcb71acf02388fd3279d8f9e313cf37172e1551e0b9279ef8dfc\"" Jan 14 23:44:06.705656 containerd[1607]: time="2026-01-14T23:44:06.704975069Z" level=info msg="StartContainer for \"b2500fea6778dcb71acf02388fd3279d8f9e313cf37172e1551e0b9279ef8dfc\"" Jan 14 23:44:06.708201 containerd[1607]: time="2026-01-14T23:44:06.708157951Z" level=info msg="connecting to shim b2500fea6778dcb71acf02388fd3279d8f9e313cf37172e1551e0b9279ef8dfc" address="unix:///run/containerd/s/b0e190d94e4ff433c69c720286df0cd90b94d19c2e6f04c071af3d46372abc0d" protocol=ttrpc version=3 Jan 14 23:44:06.737015 systemd[1]: Started cri-containerd-b2500fea6778dcb71acf02388fd3279d8f9e313cf37172e1551e0b9279ef8dfc.scope - libcontainer container b2500fea6778dcb71acf02388fd3279d8f9e313cf37172e1551e0b9279ef8dfc. Jan 14 23:44:06.750000 audit: BPF prog-id=144 op=LOAD Jan 14 23:44:06.752610 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 14 23:44:06.752658 kernel: audit: type=1334 audit(1768434246.750:506): prog-id=144 op=LOAD Jan 14 23:44:06.751000 audit: BPF prog-id=145 op=LOAD Jan 14 23:44:06.751000 audit[3137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2881 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:06.757939 kernel: audit: type=1334 audit(1768434246.751:507): prog-id=145 op=LOAD Jan 14 23:44:06.758020 kernel: audit: type=1300 audit(1768434246.751:507): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2881 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:06.758037 kernel: audit: type=1327 audit(1768434246.751:507): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232353030666561363737386463623731616366303233383866643332 Jan 14 23:44:06.758056 kernel: audit: type=1334 audit(1768434246.751:508): prog-id=145 op=UNLOAD Jan 14 23:44:06.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232353030666561363737386463623731616366303233383866643332 Jan 14 23:44:06.751000 audit: BPF prog-id=145 op=UNLOAD Jan 14 23:44:06.751000 audit[3137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2881 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:06.760498 kernel: audit: type=1300 audit(1768434246.751:508): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2881 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 23:44:06.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232353030666561363737386463623731616366303233383866643332 Jan 14 23:44:06.763301 kernel: audit: type=1327 audit(1768434246.751:508): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232353030666561363737386463623731616366303233383866643332 Jan 14 23:44:06.751000 audit: BPF prog-id=146 op=LOAD Jan 14 23:44:06.764058 kernel: audit: type=1334 audit(1768434246.751:509): prog-id=146 op=LOAD Jan 14 23:44:06.751000 audit[3137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2881 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:06.766597 kernel: audit: type=1300 audit(1768434246.751:509): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2881 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:06.751000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232353030666561363737386463623731616366303233383866643332 Jan 14 23:44:06.768866 kernel: audit: type=1327 audit(1768434246.751:509): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232353030666561363737386463623731616366303233383866643332 Jan 14 23:44:06.753000 audit: BPF prog-id=147 op=LOAD Jan 14 23:44:06.753000 audit[3137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2881 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:06.753000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232353030666561363737386463623731616366303233383866643332 Jan 14 23:44:06.755000 audit: BPF prog-id=147 op=UNLOAD Jan 14 23:44:06.755000 audit[3137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2881 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:06.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232353030666561363737386463623731616366303233383866643332 Jan 14 23:44:06.755000 audit: BPF prog-id=146 op=UNLOAD Jan 
14 23:44:06.755000 audit[3137]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2881 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:06.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232353030666561363737386463623731616366303233383866643332 Jan 14 23:44:06.755000 audit: BPF prog-id=148 op=LOAD Jan 14 23:44:06.755000 audit[3137]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2881 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:06.755000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6232353030666561363737386463623731616366303233383866643332 Jan 14 23:44:06.789668 containerd[1607]: time="2026-01-14T23:44:06.789511746Z" level=info msg="StartContainer for \"b2500fea6778dcb71acf02388fd3279d8f9e313cf37172e1551e0b9279ef8dfc\" returns successfully" Jan 14 23:44:13.059785 sudo[1886]: pam_unix(sudo:session): session closed for user root Jan 14 23:44:13.058000 audit[1886]: USER_END pid=1886 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:44:13.061717 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 14 23:44:13.061778 kernel: audit: type=1106 audit(1768434253.058:514): pid=1886 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:44:13.058000 audit[1886]: CRED_DISP pid=1886 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:44:13.065095 kernel: audit: type=1104 audit(1768434253.058:515): pid=1886 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 23:44:13.157958 sshd[1885]: Connection closed by 68.220.241.50 port 37704 Jan 14 23:44:13.158726 sshd-session[1882]: pam_unix(sshd:session): session closed for user core Jan 14 23:44:13.162000 audit[1882]: USER_END pid=1882 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:44:13.162000 audit[1882]: CRED_DISP pid=1882 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:44:13.169637 kernel: audit: type=1106 audit(1768434253.162:516): pid=1882 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:44:13.169727 kernel: audit: type=1104 audit(1768434253.162:517): pid=1882 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:44:13.170964 systemd[1]: sshd@6-49.13.216.16:22-68.220.241.50:37704.service: Deactivated successfully. Jan 14 23:44:13.176422 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 23:44:13.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-49.13.216.16:22-68.220.241.50:37704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:13.178466 systemd[1]: session-7.scope: Consumed 7.897s CPU time, 218.1M memory peak. Jan 14 23:44:13.179252 kernel: audit: type=1131 audit(1768434253.173:518): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-49.13.216.16:22-68.220.241.50:37704 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:13.179831 systemd-logind[1587]: Session 7 logged out. Waiting for processes to exit. Jan 14 23:44:13.182003 systemd-logind[1587]: Removed session 7. 
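The SYSCALL, USER_END, CRED_DISP and SERVICE_STOP records above are flat key=value lines, so fields such as comm=, exe= and success= can be pulled out without counting columns. A small sketch, assuming plain Python 3 and one record passed in as a string (quoted values like comm="iptables-restor" are handled; the hex-encoded msg/proctitle forms are not):

#!/usr/bin/env python3
"""Split an audit record into its key=value fields."""
import shlex

def parse_audit_fields(record: str) -> dict:
    fields = {}
    # shlex respects the double quotes around comm="..." and exe="..." values.
    for token in shlex.split(record):
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

if __name__ == "__main__":
    line = ('audit[3214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 '
            'ppid=2980 pid=3214 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi"')
    f = parse_audit_fields(line)
    # arch=c00000b7 is AArch64; syscall numbers are architecture-specific.
    print(f["comm"], f["exe"], f["success"], f["syscall"])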
Jan 14 23:44:15.949000 audit[3214]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:15.951609 kernel: audit: type=1325 audit(1768434255.949:519): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:15.949000 audit[3214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffb23af60 a2=0 a3=1 items=0 ppid=2980 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:15.954612 kernel: audit: type=1300 audit(1768434255.949:519): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffb23af60 a2=0 a3=1 items=0 ppid=2980 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:15.954733 kernel: audit: type=1327 audit(1768434255.949:519): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:15.949000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:15.957000 audit[3214]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:15.957000 audit[3214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffb23af60 a2=0 a3=1 items=0 ppid=2980 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:15.962072 kernel: audit: type=1325 audit(1768434255.957:520): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:15.962138 kernel: audit: type=1300 audit(1768434255.957:520): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffb23af60 a2=0 a3=1 items=0 ppid=2980 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:15.957000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:15.971000 audit[3216]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:15.971000 audit[3216]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcb5d6b50 a2=0 a3=1 items=0 ppid=2980 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:15.971000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:15.975000 audit[3216]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 14 23:44:15.975000 audit[3216]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcb5d6b50 a2=0 a3=1 items=0 ppid=2980 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:15.975000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:19.902000 audit[3219]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:19.904554 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 23:44:19.904617 kernel: audit: type=1325 audit(1768434259.902:523): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:19.902000 audit[3219]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe0be0740 a2=0 a3=1 items=0 ppid=2980 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:19.902000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:19.924625 kernel: audit: type=1300 audit(1768434259.902:523): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe0be0740 a2=0 a3=1 items=0 ppid=2980 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:19.924705 kernel: audit: type=1327 audit(1768434259.902:523): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:19.926000 audit[3219]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:19.929608 kernel: audit: type=1325 audit(1768434259.926:524): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3219 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:19.926000 audit[3219]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe0be0740 a2=0 a3=1 items=0 ppid=2980 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:19.941607 kernel: audit: type=1300 audit(1768434259.926:524): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe0be0740 a2=0 a3=1 items=0 ppid=2980 pid=3219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:19.941770 kernel: audit: type=1327 audit(1768434259.926:524): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:19.926000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:19.947000 audit[3221]: NETFILTER_CFG 
table=filter:111 family=2 entries=18 op=nft_register_rule pid=3221 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:19.947000 audit[3221]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc830f810 a2=0 a3=1 items=0 ppid=2980 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:19.952423 kernel: audit: type=1325 audit(1768434259.947:525): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3221 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:19.952504 kernel: audit: type=1300 audit(1768434259.947:525): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc830f810 a2=0 a3=1 items=0 ppid=2980 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:19.947000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:19.953744 kernel: audit: type=1327 audit(1768434259.947:525): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:19.955000 audit[3221]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3221 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:19.958619 kernel: audit: type=1325 audit(1768434259.955:526): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3221 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:19.955000 audit[3221]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc830f810 a2=0 a3=1 items=0 ppid=2980 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:19.955000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:21.000000 audit[3224]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:21.000000 audit[3224]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc4893ee0 a2=0 a3=1 items=0 ppid=2980 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:21.000000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:21.009000 audit[3224]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3224 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:21.009000 audit[3224]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc4893ee0 a2=0 a3=1 items=0 ppid=2980 pid=3224 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:21.009000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:22.805000 audit[3226]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:22.805000 audit[3226]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffce3241f0 a2=0 a3=1 items=0 ppid=2980 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:22.805000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:22.810000 audit[3226]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3226 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:22.810000 audit[3226]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffce3241f0 a2=0 a3=1 items=0 ppid=2980 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:22.810000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:22.832715 kubelet[2831]: I0114 23:44:22.832637 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-lfmnh" podStartSLOduration=17.350223657 podStartE2EDuration="23.832617941s" podCreationTimestamp="2026-01-14 23:43:59 +0000 UTC" firstStartedPulling="2026-01-14 23:44:00.194444955 +0000 UTC m=+8.018691275" lastFinishedPulling="2026-01-14 23:44:06.676839279 +0000 UTC m=+14.501085559" observedRunningTime="2026-01-14 23:44:07.402217013 +0000 UTC m=+15.226463333" watchObservedRunningTime="2026-01-14 23:44:22.832617941 +0000 UTC m=+30.656864221" Jan 14 23:44:22.843600 systemd[1]: Created slice kubepods-besteffort-podff9c1359_c2af_4cec_a2f7_bc1326c6f32b.slice - libcontainer container kubepods-besteffort-podff9c1359_c2af_4cec_a2f7_bc1326c6f32b.slice. 
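The iptables-restore records above show kube-proxy re-syncing Service rules: each pass emits NETFILTER_CFG records whose entries= count grows as rules are added (15, 16, 17, 18, 19, 21 in the filter table across the passes above, while the nat table stays at 12). A small sketch that tallies those counts from a journal dump piped on stdin; it assumes one record per line, as journalctl normally prints, and falls back to two embedded samples when run interactively:

#!/usr/bin/env python3
"""Extract (table, generation, family, entries) from NETFILTER_CFG records."""
import re
import sys

PATTERN = re.compile(r"NETFILTER_CFG table=(\w+):(\d+) family=(\d+) entries=(\d+)")

def netfilter_events(lines):
    for line in lines:
        for table, generation, family, entries in PATTERN.findall(line):
            yield table, int(generation), int(family), int(entries)

if __name__ == "__main__":
    source = sys.stdin if not sys.stdin.isatty() else [
        "audit[3219]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule",
        "audit[3219]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule",
    ]
    for table, gen, family, entries in netfilter_events(source):
        print(f"{table:6s} gen={gen:<4d} family={family} entries={entries}")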
Jan 14 23:44:22.893756 kubelet[2831]: I0114 23:44:22.893697 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ff9c1359-c2af-4cec-a2f7-bc1326c6f32b-typha-certs\") pod \"calico-typha-5d5f549bb-c2fbx\" (UID: \"ff9c1359-c2af-4cec-a2f7-bc1326c6f32b\") " pod="calico-system/calico-typha-5d5f549bb-c2fbx" Jan 14 23:44:22.893997 kubelet[2831]: I0114 23:44:22.893899 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff9c1359-c2af-4cec-a2f7-bc1326c6f32b-tigera-ca-bundle\") pod \"calico-typha-5d5f549bb-c2fbx\" (UID: \"ff9c1359-c2af-4cec-a2f7-bc1326c6f32b\") " pod="calico-system/calico-typha-5d5f549bb-c2fbx" Jan 14 23:44:22.893997 kubelet[2831]: I0114 23:44:22.893945 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59btr\" (UniqueName: \"kubernetes.io/projected/ff9c1359-c2af-4cec-a2f7-bc1326c6f32b-kube-api-access-59btr\") pod \"calico-typha-5d5f549bb-c2fbx\" (UID: \"ff9c1359-c2af-4cec-a2f7-bc1326c6f32b\") " pod="calico-system/calico-typha-5d5f549bb-c2fbx" Jan 14 23:44:22.895000 audit[3228]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:22.895000 audit[3228]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffffc8a0cb0 a2=0 a3=1 items=0 ppid=2980 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:22.895000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:22.898000 audit[3228]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3228 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:22.898000 audit[3228]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffc8a0cb0 a2=0 a3=1 items=0 ppid=2980 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:22.898000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:22.982184 systemd[1]: Created slice kubepods-besteffort-podfa493d4d_0ed4_463e_90cf_b139cb85c6f2.slice - libcontainer container kubepods-besteffort-podfa493d4d_0ed4_463e_90cf_b139cb85c6f2.slice. 
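The reconciler_common.go lines list every volume the kubelet attaches for the calico-typha pod (a secret, a ConfigMap and a projected service-account token), and the calico-node pod that follows mounts a longer set of host paths. A sketch that builds a per-pod volume inventory from such lines; it assumes one journal record per line and matches the backslash-escaped quotes exactly as they appear here:

#!/usr/bin/env python3
"""Group kubelet VerifyControllerAttachedVolume messages by pod."""
import re
import sys
from collections import defaultdict

VOLUME = re.compile(r'volume \\"([^"]+)\\" \(UniqueName: \\"([^"]+)\\"\)')
POD = re.compile(r'pod="([^"]+)"')

def volume_inventory(lines):
    pods = defaultdict(list)
    for line in lines:
        if "VerifyControllerAttachedVolume" not in line:
            continue
        pod = POD.search(line)
        vol = VOLUME.search(line)
        if pod and vol:
            name, unique = vol.groups()
            # The UniqueName prefix encodes the volume type
            # (kubernetes.io/secret, /configmap, /host-path, /projected, ...).
            pods[pod.group(1)].append((name, unique.split("/")[1]))
    return pods

if __name__ == "__main__":
    for pod, volumes in volume_inventory(sys.stdin).items():
        print(pod)
        for name, kind in volumes:
            print(f"  {name} ({kind})")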
Jan 14 23:44:23.095501 kubelet[2831]: I0114 23:44:23.095362 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fa493d4d-0ed4-463e-90cf-b139cb85c6f2-node-certs\") pod \"calico-node-d9z5t\" (UID: \"fa493d4d-0ed4-463e-90cf-b139cb85c6f2\") " pod="calico-system/calico-node-d9z5t" Jan 14 23:44:23.095501 kubelet[2831]: I0114 23:44:23.095421 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fa493d4d-0ed4-463e-90cf-b139cb85c6f2-cni-bin-dir\") pod \"calico-node-d9z5t\" (UID: \"fa493d4d-0ed4-463e-90cf-b139cb85c6f2\") " pod="calico-system/calico-node-d9z5t" Jan 14 23:44:23.095501 kubelet[2831]: I0114 23:44:23.095440 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46xzs\" (UniqueName: \"kubernetes.io/projected/fa493d4d-0ed4-463e-90cf-b139cb85c6f2-kube-api-access-46xzs\") pod \"calico-node-d9z5t\" (UID: \"fa493d4d-0ed4-463e-90cf-b139cb85c6f2\") " pod="calico-system/calico-node-d9z5t" Jan 14 23:44:23.095501 kubelet[2831]: I0114 23:44:23.095464 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa493d4d-0ed4-463e-90cf-b139cb85c6f2-lib-modules\") pod \"calico-node-d9z5t\" (UID: \"fa493d4d-0ed4-463e-90cf-b139cb85c6f2\") " pod="calico-system/calico-node-d9z5t" Jan 14 23:44:23.095501 kubelet[2831]: I0114 23:44:23.095480 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fa493d4d-0ed4-463e-90cf-b139cb85c6f2-var-run-calico\") pod \"calico-node-d9z5t\" (UID: \"fa493d4d-0ed4-463e-90cf-b139cb85c6f2\") " pod="calico-system/calico-node-d9z5t" Jan 14 23:44:23.095732 kubelet[2831]: I0114 23:44:23.095495 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fa493d4d-0ed4-463e-90cf-b139cb85c6f2-cni-net-dir\") pod \"calico-node-d9z5t\" (UID: \"fa493d4d-0ed4-463e-90cf-b139cb85c6f2\") " pod="calico-system/calico-node-d9z5t" Jan 14 23:44:23.095732 kubelet[2831]: I0114 23:44:23.095509 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fa493d4d-0ed4-463e-90cf-b139cb85c6f2-var-lib-calico\") pod \"calico-node-d9z5t\" (UID: \"fa493d4d-0ed4-463e-90cf-b139cb85c6f2\") " pod="calico-system/calico-node-d9z5t" Jan 14 23:44:23.095732 kubelet[2831]: I0114 23:44:23.095524 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fa493d4d-0ed4-463e-90cf-b139cb85c6f2-flexvol-driver-host\") pod \"calico-node-d9z5t\" (UID: \"fa493d4d-0ed4-463e-90cf-b139cb85c6f2\") " pod="calico-system/calico-node-d9z5t" Jan 14 23:44:23.095732 kubelet[2831]: I0114 23:44:23.095540 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa493d4d-0ed4-463e-90cf-b139cb85c6f2-xtables-lock\") pod \"calico-node-d9z5t\" (UID: \"fa493d4d-0ed4-463e-90cf-b139cb85c6f2\") " pod="calico-system/calico-node-d9z5t" Jan 14 23:44:23.095732 kubelet[2831]: I0114 23:44:23.095557 2831 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fa493d4d-0ed4-463e-90cf-b139cb85c6f2-cni-log-dir\") pod \"calico-node-d9z5t\" (UID: \"fa493d4d-0ed4-463e-90cf-b139cb85c6f2\") " pod="calico-system/calico-node-d9z5t" Jan 14 23:44:23.095839 kubelet[2831]: I0114 23:44:23.095573 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fa493d4d-0ed4-463e-90cf-b139cb85c6f2-policysync\") pod \"calico-node-d9z5t\" (UID: \"fa493d4d-0ed4-463e-90cf-b139cb85c6f2\") " pod="calico-system/calico-node-d9z5t" Jan 14 23:44:23.095839 kubelet[2831]: I0114 23:44:23.095601 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fa493d4d-0ed4-463e-90cf-b139cb85c6f2-tigera-ca-bundle\") pod \"calico-node-d9z5t\" (UID: \"fa493d4d-0ed4-463e-90cf-b139cb85c6f2\") " pod="calico-system/calico-node-d9z5t" Jan 14 23:44:23.151568 containerd[1607]: time="2026-01-14T23:44:23.151438080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d5f549bb-c2fbx,Uid:ff9c1359-c2af-4cec-a2f7-bc1326c6f32b,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:23.180122 containerd[1607]: time="2026-01-14T23:44:23.180066026Z" level=info msg="connecting to shim 1bedbca7dbce4bc9bfa4fc5cf499e451bc36e6ac60fc129d30979158a5d9affe" address="unix:///run/containerd/s/2f3a3e66da6368c8cc3acfcfb0b157a27e16a62b2ddc1007fafd45b680be51be" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:23.201930 kubelet[2831]: E0114 23:44:23.201820 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.201930 kubelet[2831]: W0114 23:44:23.201853 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.201930 kubelet[2831]: E0114 23:44:23.201896 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.210401 kubelet[2831]: E0114 23:44:23.210089 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.210401 kubelet[2831]: W0114 23:44:23.210121 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.210401 kubelet[2831]: E0114 23:44:23.210181 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.226816 systemd[1]: Started cri-containerd-1bedbca7dbce4bc9bfa4fc5cf499e451bc36e6ac60fc129d30979158a5d9affe.scope - libcontainer container 1bedbca7dbce4bc9bfa4fc5cf499e451bc36e6ac60fc129d30979158a5d9affe. 
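Note: the bursts of "driver-call.go" errors that begin here (and repeat for the next several seconds of log) come from kubelet probing the FlexVolume plugin directory. The calico-node pod above mounts that directory as its flexvol-driver-host hostPath, but the nodeagent~uds driver binary (uds) is not installed yet — it is delivered by the pod2daemon-flexvol image whose pull appears later in this log — so the `init` call produces no output and JSON unmarshalling fails with "unexpected end of JSON input". This reading is an inference from the log, not something it states outright; the warnings are typically transient until the driver binary lands. As a hedged illustration of the call convention kubelet expects, the stand-in below prints the kind of JSON reply an `init` call should produce; it is a sketch, not Calico's actual uds driver.

```python
#!/usr/bin/env python3
# Hedged sketch of the FlexVolume call convention exercised above: kubelet runs
# "<plugin-dir>/nodeagent~uds/uds init" and expects a single JSON object on
# stdout. In this log the binary is missing, so stdout is empty and the JSON
# decode fails. This stand-in only illustrates the expected reply shape.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Minimal successful init reply; attach is not handled by this driver.
        print(json.dumps({"status": "Success",
                          "capabilities": {"attach": False}}))
        return 0
    # Any operation this sketch does not implement.
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```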
Jan 14 23:44:23.232259 kubelet[2831]: E0114 23:44:23.232205 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:44:23.237399 kubelet[2831]: E0114 23:44:23.237357 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.237399 kubelet[2831]: W0114 23:44:23.237383 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.237399 kubelet[2831]: E0114 23:44:23.237402 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.266000 audit: BPF prog-id=149 op=LOAD Jan 14 23:44:23.267000 audit: BPF prog-id=150 op=LOAD Jan 14 23:44:23.267000 audit[3252]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3241 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:23.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162656462636137646263653462633962666134666335636634393965 Jan 14 23:44:23.267000 audit: BPF prog-id=150 op=UNLOAD Jan 14 23:44:23.267000 audit[3252]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3241 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:23.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162656462636137646263653462633962666134666335636634393965 Jan 14 23:44:23.267000 audit: BPF prog-id=151 op=LOAD Jan 14 23:44:23.267000 audit[3252]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3241 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:23.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162656462636137646263653462633962666134666335636634393965 Jan 14 23:44:23.267000 audit: BPF prog-id=152 op=LOAD Jan 14 23:44:23.267000 audit[3252]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3241 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
23:44:23.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162656462636137646263653462633962666134666335636634393965 Jan 14 23:44:23.267000 audit: BPF prog-id=152 op=UNLOAD Jan 14 23:44:23.267000 audit[3252]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3241 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:23.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162656462636137646263653462633962666134666335636634393965 Jan 14 23:44:23.267000 audit: BPF prog-id=151 op=UNLOAD Jan 14 23:44:23.267000 audit[3252]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3241 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:23.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162656462636137646263653462633962666134666335636634393965 Jan 14 23:44:23.267000 audit: BPF prog-id=153 op=LOAD Jan 14 23:44:23.267000 audit[3252]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3241 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:23.267000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162656462636137646263653462633962666134666335636634393965 Jan 14 23:44:23.272074 kubelet[2831]: E0114 23:44:23.272039 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.272074 kubelet[2831]: W0114 23:44:23.272068 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.272243 kubelet[2831]: E0114 23:44:23.272106 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:23.272515 kubelet[2831]: E0114 23:44:23.272490 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.272730 kubelet[2831]: W0114 23:44:23.272644 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.272730 kubelet[2831]: E0114 23:44:23.272727 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.273048 kubelet[2831]: E0114 23:44:23.273025 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.273048 kubelet[2831]: W0114 23:44:23.273044 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.273134 kubelet[2831]: E0114 23:44:23.273055 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.273377 kubelet[2831]: E0114 23:44:23.273338 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.273377 kubelet[2831]: W0114 23:44:23.273356 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.273377 kubelet[2831]: E0114 23:44:23.273369 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.273764 kubelet[2831]: E0114 23:44:23.273730 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.273764 kubelet[2831]: W0114 23:44:23.273750 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.273764 kubelet[2831]: E0114 23:44:23.273765 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.274072 kubelet[2831]: E0114 23:44:23.274028 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.274072 kubelet[2831]: W0114 23:44:23.274047 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.274072 kubelet[2831]: E0114 23:44:23.274060 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:23.274570 kubelet[2831]: E0114 23:44:23.274545 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.274570 kubelet[2831]: W0114 23:44:23.274565 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.274944 kubelet[2831]: E0114 23:44:23.274576 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.275533 kubelet[2831]: E0114 23:44:23.275508 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.275533 kubelet[2831]: W0114 23:44:23.275529 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.275654 kubelet[2831]: E0114 23:44:23.275541 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.275996 kubelet[2831]: E0114 23:44:23.275978 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.276033 kubelet[2831]: W0114 23:44:23.276014 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.276033 kubelet[2831]: E0114 23:44:23.276030 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.276264 kubelet[2831]: E0114 23:44:23.276246 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.276264 kubelet[2831]: W0114 23:44:23.276263 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.276318 kubelet[2831]: E0114 23:44:23.276273 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.277837 kubelet[2831]: E0114 23:44:23.277809 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.277837 kubelet[2831]: W0114 23:44:23.277829 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.277930 kubelet[2831]: E0114 23:44:23.277841 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:23.278116 kubelet[2831]: E0114 23:44:23.278003 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.278116 kubelet[2831]: W0114 23:44:23.278013 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.278116 kubelet[2831]: E0114 23:44:23.278022 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.278261 kubelet[2831]: E0114 23:44:23.278174 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.278261 kubelet[2831]: W0114 23:44:23.278182 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.278261 kubelet[2831]: E0114 23:44:23.278190 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.278326 kubelet[2831]: E0114 23:44:23.278302 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.278326 kubelet[2831]: W0114 23:44:23.278308 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.278326 kubelet[2831]: E0114 23:44:23.278315 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.278711 kubelet[2831]: E0114 23:44:23.278421 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.278711 kubelet[2831]: W0114 23:44:23.278434 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.278711 kubelet[2831]: E0114 23:44:23.278441 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.278711 kubelet[2831]: E0114 23:44:23.278553 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.278711 kubelet[2831]: W0114 23:44:23.278558 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.278711 kubelet[2831]: E0114 23:44:23.278567 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:23.279745 kubelet[2831]: E0114 23:44:23.278760 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.279745 kubelet[2831]: W0114 23:44:23.278769 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.279745 kubelet[2831]: E0114 23:44:23.278777 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.279745 kubelet[2831]: E0114 23:44:23.279719 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.279745 kubelet[2831]: W0114 23:44:23.279734 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.279745 kubelet[2831]: E0114 23:44:23.279746 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.279932 kubelet[2831]: E0114 23:44:23.279904 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.279932 kubelet[2831]: W0114 23:44:23.279922 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.279932 kubelet[2831]: E0114 23:44:23.279932 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.280123 kubelet[2831]: E0114 23:44:23.280105 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.280123 kubelet[2831]: W0114 23:44:23.280115 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.280185 kubelet[2831]: E0114 23:44:23.280124 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:23.286232 containerd[1607]: time="2026-01-14T23:44:23.286132321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d9z5t,Uid:fa493d4d-0ed4-463e-90cf-b139cb85c6f2,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:23.299540 kubelet[2831]: E0114 23:44:23.299503 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.299540 kubelet[2831]: W0114 23:44:23.299530 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.300050 kubelet[2831]: E0114 23:44:23.299553 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.300050 kubelet[2831]: I0114 23:44:23.299683 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cced8c28-8577-4bc8-b036-c07227b38f48-registration-dir\") pod \"csi-node-driver-kh6gk\" (UID: \"cced8c28-8577-4bc8-b036-c07227b38f48\") " pod="calico-system/csi-node-driver-kh6gk" Jan 14 23:44:23.300831 kubelet[2831]: E0114 23:44:23.300660 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.300831 kubelet[2831]: W0114 23:44:23.300697 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.300831 kubelet[2831]: E0114 23:44:23.300717 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.300831 kubelet[2831]: I0114 23:44:23.300739 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cced8c28-8577-4bc8-b036-c07227b38f48-varrun\") pod \"csi-node-driver-kh6gk\" (UID: \"cced8c28-8577-4bc8-b036-c07227b38f48\") " pod="calico-system/csi-node-driver-kh6gk" Jan 14 23:44:23.301321 kubelet[2831]: E0114 23:44:23.301271 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.301321 kubelet[2831]: W0114 23:44:23.301289 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.301719 kubelet[2831]: E0114 23:44:23.301565 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:23.302700 kubelet[2831]: E0114 23:44:23.302017 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.302700 kubelet[2831]: W0114 23:44:23.302666 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.302882 kubelet[2831]: E0114 23:44:23.302843 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.303645 kubelet[2831]: I0114 23:44:23.302879 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cced8c28-8577-4bc8-b036-c07227b38f48-kubelet-dir\") pod \"csi-node-driver-kh6gk\" (UID: \"cced8c28-8577-4bc8-b036-c07227b38f48\") " pod="calico-system/csi-node-driver-kh6gk" Jan 14 23:44:23.304675 kubelet[2831]: E0114 23:44:23.303720 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.304675 kubelet[2831]: W0114 23:44:23.303732 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.304675 kubelet[2831]: E0114 23:44:23.303829 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.304675 kubelet[2831]: E0114 23:44:23.304109 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.304675 kubelet[2831]: W0114 23:44:23.304120 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.304675 kubelet[2831]: E0114 23:44:23.304132 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.305113 kubelet[2831]: E0114 23:44:23.305082 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.305113 kubelet[2831]: W0114 23:44:23.305102 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.305113 kubelet[2831]: E0114 23:44:23.305124 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:23.307243 kubelet[2831]: E0114 23:44:23.307209 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.307243 kubelet[2831]: W0114 23:44:23.307237 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.307243 kubelet[2831]: E0114 23:44:23.307265 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.307424 kubelet[2831]: I0114 23:44:23.307290 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cced8c28-8577-4bc8-b036-c07227b38f48-socket-dir\") pod \"csi-node-driver-kh6gk\" (UID: \"cced8c28-8577-4bc8-b036-c07227b38f48\") " pod="calico-system/csi-node-driver-kh6gk" Jan 14 23:44:23.307808 kubelet[2831]: E0114 23:44:23.307777 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.307808 kubelet[2831]: W0114 23:44:23.307800 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.308061 kubelet[2831]: E0114 23:44:23.307814 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.308112 kubelet[2831]: E0114 23:44:23.308086 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.308784 kubelet[2831]: W0114 23:44:23.308316 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.308784 kubelet[2831]: E0114 23:44:23.308352 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.308784 kubelet[2831]: I0114 23:44:23.308373 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l75kt\" (UniqueName: \"kubernetes.io/projected/cced8c28-8577-4bc8-b036-c07227b38f48-kube-api-access-l75kt\") pod \"csi-node-driver-kh6gk\" (UID: \"cced8c28-8577-4bc8-b036-c07227b38f48\") " pod="calico-system/csi-node-driver-kh6gk" Jan 14 23:44:23.309550 kubelet[2831]: E0114 23:44:23.309183 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.309550 kubelet[2831]: W0114 23:44:23.309198 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.309550 kubelet[2831]: E0114 23:44:23.309459 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:23.310267 kubelet[2831]: E0114 23:44:23.310242 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.310267 kubelet[2831]: W0114 23:44:23.310263 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.310361 kubelet[2831]: E0114 23:44:23.310276 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.311604 kubelet[2831]: E0114 23:44:23.311297 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.311604 kubelet[2831]: W0114 23:44:23.311314 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.311604 kubelet[2831]: E0114 23:44:23.311470 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.312337 kubelet[2831]: E0114 23:44:23.311975 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.312337 kubelet[2831]: W0114 23:44:23.311992 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.312337 kubelet[2831]: E0114 23:44:23.312004 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.312890 kubelet[2831]: E0114 23:44:23.312869 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.312890 kubelet[2831]: W0114 23:44:23.312890 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.312977 kubelet[2831]: E0114 23:44:23.312902 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:23.339631 containerd[1607]: time="2026-01-14T23:44:23.339135005Z" level=info msg="connecting to shim e17dd02d4be62dc955e92524e884c9da1a46a2c76b3955fd27723a75a11bff2e" address="unix:///run/containerd/s/aed15741f53a7f6717db86a864da4906ab592927aefa151282384894538d75d2" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:23.346335 containerd[1607]: time="2026-01-14T23:44:23.345955339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d5f549bb-c2fbx,Uid:ff9c1359-c2af-4cec-a2f7-bc1326c6f32b,Namespace:calico-system,Attempt:0,} returns sandbox id \"1bedbca7dbce4bc9bfa4fc5cf499e451bc36e6ac60fc129d30979158a5d9affe\"" Jan 14 23:44:23.353611 containerd[1607]: time="2026-01-14T23:44:23.353418135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 23:44:23.391092 systemd[1]: Started cri-containerd-e17dd02d4be62dc955e92524e884c9da1a46a2c76b3955fd27723a75a11bff2e.scope - libcontainer container e17dd02d4be62dc955e92524e884c9da1a46a2c76b3955fd27723a75a11bff2e. Jan 14 23:44:23.401000 audit: BPF prog-id=154 op=LOAD Jan 14 23:44:23.402000 audit: BPF prog-id=155 op=LOAD Jan 14 23:44:23.402000 audit[3347]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=3336 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:23.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531376464303264346265363264633935356539323532346538383463 Jan 14 23:44:23.402000 audit: BPF prog-id=155 op=UNLOAD Jan 14 23:44:23.402000 audit[3347]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:23.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531376464303264346265363264633935356539323532346538383463 Jan 14 23:44:23.402000 audit: BPF prog-id=156 op=LOAD Jan 14 23:44:23.402000 audit[3347]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=3336 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:23.402000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531376464303264346265363264633935356539323532346538383463 Jan 14 23:44:23.403000 audit: BPF prog-id=157 op=LOAD Jan 14 23:44:23.403000 audit[3347]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=3336 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
23:44:23.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531376464303264346265363264633935356539323532346538383463 Jan 14 23:44:23.403000 audit: BPF prog-id=157 op=UNLOAD Jan 14 23:44:23.403000 audit[3347]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:23.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531376464303264346265363264633935356539323532346538383463 Jan 14 23:44:23.403000 audit: BPF prog-id=156 op=UNLOAD Jan 14 23:44:23.403000 audit[3347]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:23.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531376464303264346265363264633935356539323532346538383463 Jan 14 23:44:23.403000 audit: BPF prog-id=158 op=LOAD Jan 14 23:44:23.403000 audit[3347]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=3336 pid=3347 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:23.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531376464303264346265363264633935356539323532346538383463 Jan 14 23:44:23.414381 kubelet[2831]: E0114 23:44:23.414281 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.414381 kubelet[2831]: W0114 23:44:23.414313 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.414381 kubelet[2831]: E0114 23:44:23.414334 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:23.414841 kubelet[2831]: E0114 23:44:23.414782 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.414841 kubelet[2831]: W0114 23:44:23.414795 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.415682 kubelet[2831]: E0114 23:44:23.415575 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.415682 kubelet[2831]: W0114 23:44:23.415669 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.415682 kubelet[2831]: E0114 23:44:23.415684 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.416223 kubelet[2831]: E0114 23:44:23.415778 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.416400 kubelet[2831]: E0114 23:44:23.416375 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.416400 kubelet[2831]: W0114 23:44:23.416398 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.416720 kubelet[2831]: E0114 23:44:23.416418 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.416720 kubelet[2831]: E0114 23:44:23.416678 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.416985 kubelet[2831]: W0114 23:44:23.416692 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.416985 kubelet[2831]: E0114 23:44:23.416923 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.417522 kubelet[2831]: E0114 23:44:23.417492 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.417522 kubelet[2831]: W0114 23:44:23.417517 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.417830 kubelet[2831]: E0114 23:44:23.417537 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:23.418330 kubelet[2831]: E0114 23:44:23.418297 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.418330 kubelet[2831]: W0114 23:44:23.418325 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.418330 kubelet[2831]: E0114 23:44:23.418376 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.418713 kubelet[2831]: E0114 23:44:23.418692 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.418713 kubelet[2831]: W0114 23:44:23.418710 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.418974 kubelet[2831]: E0114 23:44:23.418911 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.419118 kubelet[2831]: E0114 23:44:23.419101 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.419118 kubelet[2831]: W0114 23:44:23.419113 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.419225 kubelet[2831]: E0114 23:44:23.419207 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.419427 kubelet[2831]: E0114 23:44:23.419363 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.419427 kubelet[2831]: W0114 23:44:23.419379 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.419724 kubelet[2831]: E0114 23:44:23.419464 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.419724 kubelet[2831]: E0114 23:44:23.419669 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.419724 kubelet[2831]: W0114 23:44:23.419679 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.419724 kubelet[2831]: E0114 23:44:23.419700 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:23.420528 kubelet[2831]: E0114 23:44:23.420496 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.420528 kubelet[2831]: W0114 23:44:23.420517 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.420742 kubelet[2831]: E0114 23:44:23.420623 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.421117 kubelet[2831]: E0114 23:44:23.421097 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.421117 kubelet[2831]: W0114 23:44:23.421115 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.421767 kubelet[2831]: E0114 23:44:23.421744 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.422079 kubelet[2831]: E0114 23:44:23.422057 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.422079 kubelet[2831]: W0114 23:44:23.422073 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.423180 kubelet[2831]: E0114 23:44:23.422819 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.423180 kubelet[2831]: W0114 23:44:23.422855 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.423261 kubelet[2831]: E0114 23:44:23.423230 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.423261 kubelet[2831]: W0114 23:44:23.423241 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.423311 kubelet[2831]: E0114 23:44:23.423276 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.423488 kubelet[2831]: E0114 23:44:23.423469 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.423702 kubelet[2831]: E0114 23:44:23.423675 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:23.423859 kubelet[2831]: E0114 23:44:23.423783 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.423859 kubelet[2831]: W0114 23:44:23.423848 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.423859 kubelet[2831]: E0114 23:44:23.423865 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.424290 kubelet[2831]: E0114 23:44:23.424271 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.424441 kubelet[2831]: W0114 23:44:23.424405 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.424612 kubelet[2831]: E0114 23:44:23.424573 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.425010 kubelet[2831]: E0114 23:44:23.424995 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.425133 kubelet[2831]: W0114 23:44:23.425118 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.425241 kubelet[2831]: E0114 23:44:23.425214 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.425488 kubelet[2831]: E0114 23:44:23.425467 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.425488 kubelet[2831]: W0114 23:44:23.425483 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.425567 kubelet[2831]: E0114 23:44:23.425501 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.426735 kubelet[2831]: E0114 23:44:23.426712 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.426735 kubelet[2831]: W0114 23:44:23.426729 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.427093 kubelet[2831]: E0114 23:44:23.426999 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:23.427416 kubelet[2831]: E0114 23:44:23.427369 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.427416 kubelet[2831]: W0114 23:44:23.427389 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.427886 kubelet[2831]: E0114 23:44:23.427732 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.428461 kubelet[2831]: E0114 23:44:23.428438 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.428461 kubelet[2831]: W0114 23:44:23.428458 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.428721 kubelet[2831]: E0114 23:44:23.428652 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.428882 kubelet[2831]: E0114 23:44:23.428868 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.428925 kubelet[2831]: W0114 23:44:23.428883 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.428925 kubelet[2831]: E0114 23:44:23.428899 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:23.430053 kubelet[2831]: E0114 23:44:23.429971 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.430053 kubelet[2831]: W0114 23:44:23.429993 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.430053 kubelet[2831]: E0114 23:44:23.430009 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:23.430440 containerd[1607]: time="2026-01-14T23:44:23.430406880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d9z5t,Uid:fa493d4d-0ed4-463e-90cf-b139cb85c6f2,Namespace:calico-system,Attempt:0,} returns sandbox id \"e17dd02d4be62dc955e92524e884c9da1a46a2c76b3955fd27723a75a11bff2e\"" Jan 14 23:44:23.443091 kubelet[2831]: E0114 23:44:23.443055 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:23.443091 kubelet[2831]: W0114 23:44:23.443078 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:23.443091 kubelet[2831]: E0114 23:44:23.443098 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:24.788933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4039730295.mount: Deactivated successfully. Jan 14 23:44:25.299890 kubelet[2831]: E0114 23:44:25.299764 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:44:25.386893 containerd[1607]: time="2026-01-14T23:44:25.386825292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:25.388077 containerd[1607]: time="2026-01-14T23:44:25.387910189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:25.389177 containerd[1607]: time="2026-01-14T23:44:25.389101055Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:25.391594 containerd[1607]: time="2026-01-14T23:44:25.391526193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:25.392399 containerd[1607]: time="2026-01-14T23:44:25.392367748Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.038788357s" Jan 14 23:44:25.392485 containerd[1607]: time="2026-01-14T23:44:25.392470557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 14 23:44:25.394731 containerd[1607]: time="2026-01-14T23:44:25.394694996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 23:44:25.411783 containerd[1607]: time="2026-01-14T23:44:25.411517022Z" level=info msg="CreateContainer within sandbox \"1bedbca7dbce4bc9bfa4fc5cf499e451bc36e6ac60fc129d30979158a5d9affe\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 23:44:25.428949 containerd[1607]: time="2026-01-14T23:44:25.428475220Z" level=info msg="Container 721d3322e50247605645e44552abc4d8efc356938ce41584b82f182b57c977d6: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:44:25.439038 containerd[1607]: time="2026-01-14T23:44:25.438985201Z" level=info msg="CreateContainer within sandbox \"1bedbca7dbce4bc9bfa4fc5cf499e451bc36e6ac60fc129d30979158a5d9affe\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"721d3322e50247605645e44552abc4d8efc356938ce41584b82f182b57c977d6\"" Jan 14 23:44:25.439605 containerd[1607]: time="2026-01-14T23:44:25.439554212Z" level=info msg="StartContainer for \"721d3322e50247605645e44552abc4d8efc356938ce41584b82f182b57c977d6\"" Jan 14 23:44:25.441706 containerd[1607]: time="2026-01-14T23:44:25.441135233Z" level=info msg="connecting to shim 721d3322e50247605645e44552abc4d8efc356938ce41584b82f182b57c977d6" address="unix:///run/containerd/s/2f3a3e66da6368c8cc3acfcfb0b157a27e16a62b2ddc1007fafd45b680be51be" protocol=ttrpc version=3 Jan 14 23:44:25.466892 systemd[1]: Started cri-containerd-721d3322e50247605645e44552abc4d8efc356938ce41584b82f182b57c977d6.scope - libcontainer container 721d3322e50247605645e44552abc4d8efc356938ce41584b82f182b57c977d6. Jan 14 23:44:25.483000 audit: BPF prog-id=159 op=LOAD Jan 14 23:44:25.484628 kernel: kauditd_printk_skb: 64 callbacks suppressed Jan 14 23:44:25.484696 kernel: audit: type=1334 audit(1768434265.483:549): prog-id=159 op=LOAD Jan 14 23:44:25.484000 audit: BPF prog-id=160 op=LOAD Jan 14 23:44:25.484000 audit[3410]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001f6180 a2=98 a3=0 items=0 ppid=3241 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:25.487813 kernel: audit: type=1334 audit(1768434265.484:550): prog-id=160 op=LOAD Jan 14 23:44:25.487994 kernel: audit: type=1300 audit(1768434265.484:550): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001f6180 a2=98 a3=0 items=0 ppid=3241 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:25.488019 kernel: audit: type=1327 audit(1768434265.484:550): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732316433333232653530323437363035363435653434353532616263 Jan 14 23:44:25.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732316433333232653530323437363035363435653434353532616263 Jan 14 23:44:25.484000 audit: BPF prog-id=160 op=UNLOAD Jan 14 23:44:25.490859 kernel: audit: type=1334 audit(1768434265.484:551): prog-id=160 op=UNLOAD Jan 14 23:44:25.490915 kernel: audit: type=1300 audit(1768434265.484:551): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3241 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:25.484000 audit[3410]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3241 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:25.493067 kernel: audit: type=1327 audit(1768434265.484:551): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732316433333232653530323437363035363435653434353532616263 Jan 14 23:44:25.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732316433333232653530323437363035363435653434353532616263 Jan 14 23:44:25.495401 kernel: audit: type=1334 audit(1768434265.484:552): prog-id=161 op=LOAD Jan 14 23:44:25.484000 audit: BPF prog-id=161 op=LOAD Jan 14 23:44:25.497811 kernel: audit: type=1300 audit(1768434265.484:552): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001f63e8 a2=98 a3=0 items=0 ppid=3241 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:25.497883 kernel: audit: type=1327 audit(1768434265.484:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732316433333232653530323437363035363435653434353532616263 Jan 14 23:44:25.484000 audit[3410]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001f63e8 a2=98 a3=0 items=0 ppid=3241 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:25.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732316433333232653530323437363035363435653434353532616263 Jan 14 23:44:25.485000 audit: BPF prog-id=162 op=LOAD Jan 14 23:44:25.485000 audit[3410]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001f6168 a2=98 a3=0 items=0 ppid=3241 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:25.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732316433333232653530323437363035363435653434353532616263 Jan 14 23:44:25.487000 audit: BPF prog-id=162 op=UNLOAD Jan 14 23:44:25.487000 audit[3410]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3241 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:25.487000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732316433333232653530323437363035363435653434353532616263 Jan 14 23:44:25.487000 audit: BPF prog-id=161 op=UNLOAD Jan 14 23:44:25.487000 audit[3410]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3241 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:25.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732316433333232653530323437363035363435653434353532616263 Jan 14 23:44:25.487000 audit: BPF prog-id=163 op=LOAD Jan 14 23:44:25.487000 audit[3410]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001f6648 a2=98 a3=0 items=0 ppid=3241 pid=3410 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:25.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732316433333232653530323437363035363435653434353532616263 Jan 14 23:44:25.532043 containerd[1607]: time="2026-01-14T23:44:25.532005128Z" level=info msg="StartContainer for \"721d3322e50247605645e44552abc4d8efc356938ce41584b82f182b57c977d6\" returns successfully" Jan 14 23:44:26.482761 kubelet[2831]: I0114 23:44:26.482486 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d5f549bb-c2fbx" podStartSLOduration=2.439942586 podStartE2EDuration="4.482156143s" podCreationTimestamp="2026-01-14 23:44:22 +0000 UTC" firstStartedPulling="2026-01-14 23:44:23.351466668 +0000 UTC m=+31.175712948" lastFinishedPulling="2026-01-14 23:44:25.393680225 +0000 UTC m=+33.217926505" observedRunningTime="2026-01-14 23:44:26.461249332 +0000 UTC m=+34.285495612" watchObservedRunningTime="2026-01-14 23:44:26.482156143 +0000 UTC m=+34.306402423" Jan 14 23:44:26.503100 kubelet[2831]: E0114 23:44:26.503061 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.503100 kubelet[2831]: W0114 23:44:26.503093 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.503535 kubelet[2831]: E0114 23:44:26.503116 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:26.503535 kubelet[2831]: E0114 23:44:26.503448 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.503535 kubelet[2831]: W0114 23:44:26.503460 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.503535 kubelet[2831]: E0114 23:44:26.503472 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.503872 kubelet[2831]: E0114 23:44:26.503848 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.503872 kubelet[2831]: W0114 23:44:26.503867 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.503957 kubelet[2831]: E0114 23:44:26.503879 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.504249 kubelet[2831]: E0114 23:44:26.504225 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.504249 kubelet[2831]: W0114 23:44:26.504244 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.504309 kubelet[2831]: E0114 23:44:26.504256 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.504932 kubelet[2831]: E0114 23:44:26.504907 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.504932 kubelet[2831]: W0114 23:44:26.504927 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.505028 kubelet[2831]: E0114 23:44:26.504939 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.505337 kubelet[2831]: E0114 23:44:26.505315 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.505337 kubelet[2831]: W0114 23:44:26.505333 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.505398 kubelet[2831]: E0114 23:44:26.505345 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:26.505844 kubelet[2831]: E0114 23:44:26.505815 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.505844 kubelet[2831]: W0114 23:44:26.505834 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.505844 kubelet[2831]: E0114 23:44:26.505846 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.506335 kubelet[2831]: E0114 23:44:26.506305 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.506335 kubelet[2831]: W0114 23:44:26.506324 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.506335 kubelet[2831]: E0114 23:44:26.506335 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.507778 kubelet[2831]: E0114 23:44:26.507633 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.507778 kubelet[2831]: W0114 23:44:26.507758 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.507778 kubelet[2831]: E0114 23:44:26.507777 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.508794 kubelet[2831]: E0114 23:44:26.508652 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.508794 kubelet[2831]: W0114 23:44:26.508674 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.508794 kubelet[2831]: E0114 23:44:26.508688 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.510708 kubelet[2831]: E0114 23:44:26.510674 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.510896 kubelet[2831]: W0114 23:44:26.510865 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.510943 kubelet[2831]: E0114 23:44:26.510898 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:26.511213 kubelet[2831]: E0114 23:44:26.511118 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.511213 kubelet[2831]: W0114 23:44:26.511189 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.511213 kubelet[2831]: E0114 23:44:26.511202 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.512530 kubelet[2831]: E0114 23:44:26.512402 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.512530 kubelet[2831]: W0114 23:44:26.512441 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.512530 kubelet[2831]: E0114 23:44:26.512457 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.513062 kubelet[2831]: E0114 23:44:26.513025 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.513062 kubelet[2831]: W0114 23:44:26.513058 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.513172 kubelet[2831]: E0114 23:44:26.513073 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.513000 audit[3462]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3462 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:26.513000 audit[3462]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffb2be630 a2=0 a3=1 items=0 ppid=2980 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:26.513000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:26.514732 kubelet[2831]: E0114 23:44:26.514541 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.514732 kubelet[2831]: W0114 23:44:26.514553 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.514732 kubelet[2831]: E0114 23:44:26.514565 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:26.517000 audit[3462]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3462 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:26.517000 audit[3462]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffffb2be630 a2=0 a3=1 items=0 ppid=2980 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:26.517000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:26.545492 kubelet[2831]: E0114 23:44:26.545369 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.545492 kubelet[2831]: W0114 23:44:26.545457 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.545492 kubelet[2831]: E0114 23:44:26.545497 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.545854 kubelet[2831]: E0114 23:44:26.545834 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.545854 kubelet[2831]: W0114 23:44:26.545853 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.546123 kubelet[2831]: E0114 23:44:26.545876 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.546321 kubelet[2831]: E0114 23:44:26.546289 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.546321 kubelet[2831]: W0114 23:44:26.546307 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.546449 kubelet[2831]: E0114 23:44:26.546336 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.546806 kubelet[2831]: E0114 23:44:26.546686 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.546806 kubelet[2831]: W0114 23:44:26.546718 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.546806 kubelet[2831]: E0114 23:44:26.546744 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:26.547226 kubelet[2831]: E0114 23:44:26.547205 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.547226 kubelet[2831]: W0114 23:44:26.547225 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.547351 kubelet[2831]: E0114 23:44:26.547257 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.547541 kubelet[2831]: E0114 23:44:26.547521 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.547541 kubelet[2831]: W0114 23:44:26.547540 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.547696 kubelet[2831]: E0114 23:44:26.547628 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.547879 kubelet[2831]: E0114 23:44:26.547861 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.547879 kubelet[2831]: W0114 23:44:26.547878 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.547987 kubelet[2831]: E0114 23:44:26.547968 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.548158 kubelet[2831]: E0114 23:44:26.548142 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.548158 kubelet[2831]: W0114 23:44:26.548156 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.548281 kubelet[2831]: E0114 23:44:26.548244 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.548467 kubelet[2831]: E0114 23:44:26.548450 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.548467 kubelet[2831]: W0114 23:44:26.548466 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.548568 kubelet[2831]: E0114 23:44:26.548485 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:26.549044 kubelet[2831]: E0114 23:44:26.549022 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.549044 kubelet[2831]: W0114 23:44:26.549042 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.549190 kubelet[2831]: E0114 23:44:26.549064 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.549317 kubelet[2831]: E0114 23:44:26.549300 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.549317 kubelet[2831]: W0114 23:44:26.549316 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.549471 kubelet[2831]: E0114 23:44:26.549346 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.549626 kubelet[2831]: E0114 23:44:26.549604 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.549707 kubelet[2831]: W0114 23:44:26.549646 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.549768 kubelet[2831]: E0114 23:44:26.549735 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.549937 kubelet[2831]: E0114 23:44:26.549921 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.549937 kubelet[2831]: W0114 23:44:26.549936 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.550050 kubelet[2831]: E0114 23:44:26.549965 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.550185 kubelet[2831]: E0114 23:44:26.550168 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.550185 kubelet[2831]: W0114 23:44:26.550184 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.550299 kubelet[2831]: E0114 23:44:26.550213 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:26.550473 kubelet[2831]: E0114 23:44:26.550459 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.550473 kubelet[2831]: W0114 23:44:26.550471 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.550558 kubelet[2831]: E0114 23:44:26.550490 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.550984 kubelet[2831]: E0114 23:44:26.550960 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.550984 kubelet[2831]: W0114 23:44:26.550974 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.550984 kubelet[2831]: E0114 23:44:26.550989 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.553696 kubelet[2831]: E0114 23:44:26.552684 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.553696 kubelet[2831]: W0114 23:44:26.552708 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.553696 kubelet[2831]: E0114 23:44:26.552723 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:44:26.553909 kubelet[2831]: E0114 23:44:26.553829 2831 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:44:26.553909 kubelet[2831]: W0114 23:44:26.553842 2831 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:44:26.553909 kubelet[2831]: E0114 23:44:26.553858 2831 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:44:26.802125 containerd[1607]: time="2026-01-14T23:44:26.802057969Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:26.803579 containerd[1607]: time="2026-01-14T23:44:26.803526936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:26.804414 containerd[1607]: time="2026-01-14T23:44:26.804370410Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:26.807222 containerd[1607]: time="2026-01-14T23:44:26.807168732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:26.807888 containerd[1607]: time="2026-01-14T23:44:26.807602969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.412848648s" Jan 14 23:44:26.807888 containerd[1607]: time="2026-01-14T23:44:26.807635252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 14 23:44:26.810943 containerd[1607]: time="2026-01-14T23:44:26.810649273Z" level=info msg="CreateContainer within sandbox \"e17dd02d4be62dc955e92524e884c9da1a46a2c76b3955fd27723a75a11bff2e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 23:44:26.824789 containerd[1607]: time="2026-01-14T23:44:26.823631238Z" level=info msg="Container 2e530a9b77037d492644e5f6b1828aa98a8e44aab3c597818b5f63ac9fe9800e: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:44:26.835931 containerd[1607]: time="2026-01-14T23:44:26.835877098Z" level=info msg="CreateContainer within sandbox \"e17dd02d4be62dc955e92524e884c9da1a46a2c76b3955fd27723a75a11bff2e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2e530a9b77037d492644e5f6b1828aa98a8e44aab3c597818b5f63ac9fe9800e\"" Jan 14 23:44:26.838571 containerd[1607]: time="2026-01-14T23:44:26.838137814Z" level=info msg="StartContainer for \"2e530a9b77037d492644e5f6b1828aa98a8e44aab3c597818b5f63ac9fe9800e\"" Jan 14 23:44:26.842139 containerd[1607]: time="2026-01-14T23:44:26.842097997Z" level=info msg="connecting to shim 2e530a9b77037d492644e5f6b1828aa98a8e44aab3c597818b5f63ac9fe9800e" address="unix:///run/containerd/s/aed15741f53a7f6717db86a864da4906ab592927aefa151282384894538d75d2" protocol=ttrpc version=3 Jan 14 23:44:26.870900 systemd[1]: Started cri-containerd-2e530a9b77037d492644e5f6b1828aa98a8e44aab3c597818b5f63ac9fe9800e.scope - libcontainer container 2e530a9b77037d492644e5f6b1828aa98a8e44aab3c597818b5f63ac9fe9800e. 
Jan 14 23:44:26.921000 audit: BPF prog-id=164 op=LOAD Jan 14 23:44:26.921000 audit[3490]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=3336 pid=3490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:26.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265353330613962373730333764343932363434653566366231383238 Jan 14 23:44:26.921000 audit: BPF prog-id=165 op=LOAD Jan 14 23:44:26.921000 audit[3490]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=3336 pid=3490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:26.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265353330613962373730333764343932363434653566366231383238 Jan 14 23:44:26.922000 audit: BPF prog-id=165 op=UNLOAD Jan 14 23:44:26.922000 audit[3490]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:26.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265353330613962373730333764343932363434653566366231383238 Jan 14 23:44:26.922000 audit: BPF prog-id=164 op=UNLOAD Jan 14 23:44:26.922000 audit[3490]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:26.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265353330613962373730333764343932363434653566366231383238 Jan 14 23:44:26.922000 audit: BPF prog-id=166 op=LOAD Jan 14 23:44:26.922000 audit[3490]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=3336 pid=3490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:26.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265353330613962373730333764343932363434653566366231383238 Jan 14 23:44:26.946740 containerd[1607]: time="2026-01-14T23:44:26.946645812Z" level=info msg="StartContainer for 
\"2e530a9b77037d492644e5f6b1828aa98a8e44aab3c597818b5f63ac9fe9800e\" returns successfully" Jan 14 23:44:26.964776 systemd[1]: cri-containerd-2e530a9b77037d492644e5f6b1828aa98a8e44aab3c597818b5f63ac9fe9800e.scope: Deactivated successfully. Jan 14 23:44:26.966000 audit: BPF prog-id=166 op=UNLOAD Jan 14 23:44:26.976676 containerd[1607]: time="2026-01-14T23:44:26.976429152Z" level=info msg="received container exit event container_id:\"2e530a9b77037d492644e5f6b1828aa98a8e44aab3c597818b5f63ac9fe9800e\" id:\"2e530a9b77037d492644e5f6b1828aa98a8e44aab3c597818b5f63ac9fe9800e\" pid:3503 exited_at:{seconds:1768434266 nanos:975895345}" Jan 14 23:44:27.007790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e530a9b77037d492644e5f6b1828aa98a8e44aab3c597818b5f63ac9fe9800e-rootfs.mount: Deactivated successfully. Jan 14 23:44:27.300526 kubelet[2831]: E0114 23:44:27.300328 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:44:27.454225 containerd[1607]: time="2026-01-14T23:44:27.454156291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 23:44:29.300413 kubelet[2831]: E0114 23:44:29.300205 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:44:30.015323 containerd[1607]: time="2026-01-14T23:44:30.014683952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:30.016340 containerd[1607]: time="2026-01-14T23:44:30.016284875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 14 23:44:30.016711 containerd[1607]: time="2026-01-14T23:44:30.016662824Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:30.019813 containerd[1607]: time="2026-01-14T23:44:30.019729499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:30.021487 containerd[1607]: time="2026-01-14T23:44:30.021004797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.566783461s" Jan 14 23:44:30.021487 containerd[1607]: time="2026-01-14T23:44:30.021051961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 14 23:44:30.023517 containerd[1607]: time="2026-01-14T23:44:30.023484747Z" level=info msg="CreateContainer within sandbox \"e17dd02d4be62dc955e92524e884c9da1a46a2c76b3955fd27723a75a11bff2e\" 
for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 23:44:30.034868 containerd[1607]: time="2026-01-14T23:44:30.034824777Z" level=info msg="Container d02ef8e0aca4823290f17ad6ed6dda9a6edca0872bb25f19880d312b193e1837: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:44:30.049423 containerd[1607]: time="2026-01-14T23:44:30.049376013Z" level=info msg="CreateContainer within sandbox \"e17dd02d4be62dc955e92524e884c9da1a46a2c76b3955fd27723a75a11bff2e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d02ef8e0aca4823290f17ad6ed6dda9a6edca0872bb25f19880d312b193e1837\"" Jan 14 23:44:30.053084 containerd[1607]: time="2026-01-14T23:44:30.053042814Z" level=info msg="StartContainer for \"d02ef8e0aca4823290f17ad6ed6dda9a6edca0872bb25f19880d312b193e1837\"" Jan 14 23:44:30.055640 containerd[1607]: time="2026-01-14T23:44:30.055567087Z" level=info msg="connecting to shim d02ef8e0aca4823290f17ad6ed6dda9a6edca0872bb25f19880d312b193e1837" address="unix:///run/containerd/s/aed15741f53a7f6717db86a864da4906ab592927aefa151282384894538d75d2" protocol=ttrpc version=3 Jan 14 23:44:30.087004 systemd[1]: Started cri-containerd-d02ef8e0aca4823290f17ad6ed6dda9a6edca0872bb25f19880d312b193e1837.scope - libcontainer container d02ef8e0aca4823290f17ad6ed6dda9a6edca0872bb25f19880d312b193e1837. Jan 14 23:44:30.146000 audit: BPF prog-id=167 op=LOAD Jan 14 23:44:30.146000 audit[3548]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3336 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:30.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326566386530616361343832333239306631376164366564366464 Jan 14 23:44:30.146000 audit: BPF prog-id=168 op=LOAD Jan 14 23:44:30.146000 audit[3548]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3336 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:30.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326566386530616361343832333239306631376164366564366464 Jan 14 23:44:30.146000 audit: BPF prog-id=168 op=UNLOAD Jan 14 23:44:30.146000 audit[3548]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:30.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326566386530616361343832333239306631376164366564366464 Jan 14 23:44:30.146000 audit: BPF prog-id=167 op=UNLOAD Jan 14 23:44:30.146000 audit[3548]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3336 
pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:30.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326566386530616361343832333239306631376164366564366464 Jan 14 23:44:30.146000 audit: BPF prog-id=169 op=LOAD Jan 14 23:44:30.146000 audit[3548]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3336 pid=3548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:30.146000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430326566386530616361343832333239306631376164366564366464 Jan 14 23:44:30.176466 containerd[1607]: time="2026-01-14T23:44:30.176136934Z" level=info msg="StartContainer for \"d02ef8e0aca4823290f17ad6ed6dda9a6edca0872bb25f19880d312b193e1837\" returns successfully" Jan 14 23:44:30.770622 containerd[1607]: time="2026-01-14T23:44:30.770513875Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 23:44:30.774355 systemd[1]: cri-containerd-d02ef8e0aca4823290f17ad6ed6dda9a6edca0872bb25f19880d312b193e1837.scope: Deactivated successfully. Jan 14 23:44:30.775501 systemd[1]: cri-containerd-d02ef8e0aca4823290f17ad6ed6dda9a6edca0872bb25f19880d312b193e1837.scope: Consumed 577ms CPU time, 186.8M memory peak, 165.9M written to disk. Jan 14 23:44:30.779000 audit: BPF prog-id=169 op=UNLOAD Jan 14 23:44:30.781386 kernel: kauditd_printk_skb: 49 callbacks suppressed Jan 14 23:44:30.781547 kernel: audit: type=1334 audit(1768434270.779:570): prog-id=169 op=UNLOAD Jan 14 23:44:30.781602 containerd[1607]: time="2026-01-14T23:44:30.780649172Z" level=info msg="received container exit event container_id:\"d02ef8e0aca4823290f17ad6ed6dda9a6edca0872bb25f19880d312b193e1837\" id:\"d02ef8e0aca4823290f17ad6ed6dda9a6edca0872bb25f19880d312b193e1837\" pid:3561 exited_at:{seconds:1768434270 nanos:779312950}" Jan 14 23:44:30.829978 kubelet[2831]: I0114 23:44:30.829463 2831 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 23:44:30.836993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d02ef8e0aca4823290f17ad6ed6dda9a6edca0872bb25f19880d312b193e1837-rootfs.mount: Deactivated successfully. Jan 14 23:44:30.900455 systemd[1]: Created slice kubepods-burstable-podf51854a3_fd90_4629_82ca_74bacf8c914d.slice - libcontainer container kubepods-burstable-podf51854a3_fd90_4629_82ca_74bacf8c914d.slice. Jan 14 23:44:30.916620 systemd[1]: Created slice kubepods-besteffort-pod5ba12bc6_2080_48f6_9bf7_54c301828a15.slice - libcontainer container kubepods-besteffort-pod5ba12bc6_2080_48f6_9bf7_54c301828a15.slice. 
Jan 14 23:44:30.937508 systemd[1]: Created slice kubepods-besteffort-pod13840ef0_91ac_4434_84e9_74d7c85cf9e0.slice - libcontainer container kubepods-besteffort-pod13840ef0_91ac_4434_84e9_74d7c85cf9e0.slice. Jan 14 23:44:30.952097 systemd[1]: Created slice kubepods-besteffort-poda676c5b0_5016_4d83_8418_6221cf68e214.slice - libcontainer container kubepods-besteffort-poda676c5b0_5016_4d83_8418_6221cf68e214.slice. Jan 14 23:44:30.962998 systemd[1]: Created slice kubepods-burstable-pod4dde8273_6048_49a9_af62_dad2628bc3c0.slice - libcontainer container kubepods-burstable-pod4dde8273_6048_49a9_af62_dad2628bc3c0.slice. Jan 14 23:44:30.973141 systemd[1]: Created slice kubepods-besteffort-podf86c2f11_4390_42f5_9590_40a5b08260db.slice - libcontainer container kubepods-besteffort-podf86c2f11_4390_42f5_9590_40a5b08260db.slice. Jan 14 23:44:30.981127 systemd[1]: Created slice kubepods-besteffort-pod7928b74a_e68d_4722_9a18_4a12587c5970.slice - libcontainer container kubepods-besteffort-pod7928b74a_e68d_4722_9a18_4a12587c5970.slice. Jan 14 23:44:30.986810 kubelet[2831]: I0114 23:44:30.986717 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7928b74a-e68d-4722-9a18-4a12587c5970-goldmane-ca-bundle\") pod \"goldmane-666569f655-68w75\" (UID: \"7928b74a-e68d-4722-9a18-4a12587c5970\") " pod="calico-system/goldmane-666569f655-68w75" Jan 14 23:44:30.987149 kubelet[2831]: I0114 23:44:30.987133 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7928b74a-e68d-4722-9a18-4a12587c5970-goldmane-key-pair\") pod \"goldmane-666569f655-68w75\" (UID: \"7928b74a-e68d-4722-9a18-4a12587c5970\") " pod="calico-system/goldmane-666569f655-68w75" Jan 14 23:44:30.987232 kubelet[2831]: I0114 23:44:30.987219 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5ba12bc6-2080-48f6-9bf7-54c301828a15-tigera-ca-bundle\") pod \"calico-kube-controllers-8479c65bc7-v2wlc\" (UID: \"5ba12bc6-2080-48f6-9bf7-54c301828a15\") " pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" Jan 14 23:44:30.987340 kubelet[2831]: I0114 23:44:30.987329 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7lcl\" (UniqueName: \"kubernetes.io/projected/4dde8273-6048-49a9-af62-dad2628bc3c0-kube-api-access-m7lcl\") pod \"coredns-668d6bf9bc-2hdwq\" (UID: \"4dde8273-6048-49a9-af62-dad2628bc3c0\") " pod="kube-system/coredns-668d6bf9bc-2hdwq" Jan 14 23:44:30.987493 kubelet[2831]: I0114 23:44:30.987434 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx8dz\" (UniqueName: \"kubernetes.io/projected/f86c2f11-4390-42f5-9590-40a5b08260db-kube-api-access-wx8dz\") pod \"calico-apiserver-866b97bccb-j7fj2\" (UID: \"f86c2f11-4390-42f5-9590-40a5b08260db\") " pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" Jan 14 23:44:30.987928 kubelet[2831]: I0114 23:44:30.987903 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7928b74a-e68d-4722-9a18-4a12587c5970-config\") pod \"goldmane-666569f655-68w75\" (UID: \"7928b74a-e68d-4722-9a18-4a12587c5970\") " pod="calico-system/goldmane-666569f655-68w75" Jan 14 23:44:30.988080 
kubelet[2831]: I0114 23:44:30.988064 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f86c2f11-4390-42f5-9590-40a5b08260db-calico-apiserver-certs\") pod \"calico-apiserver-866b97bccb-j7fj2\" (UID: \"f86c2f11-4390-42f5-9590-40a5b08260db\") " pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" Jan 14 23:44:30.988183 kubelet[2831]: I0114 23:44:30.988171 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/13840ef0-91ac-4434-84e9-74d7c85cf9e0-whisker-backend-key-pair\") pod \"whisker-8687568cb9-ws4jl\" (UID: \"13840ef0-91ac-4434-84e9-74d7c85cf9e0\") " pod="calico-system/whisker-8687568cb9-ws4jl" Jan 14 23:44:30.988329 kubelet[2831]: I0114 23:44:30.988244 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13840ef0-91ac-4434-84e9-74d7c85cf9e0-whisker-ca-bundle\") pod \"whisker-8687568cb9-ws4jl\" (UID: \"13840ef0-91ac-4434-84e9-74d7c85cf9e0\") " pod="calico-system/whisker-8687568cb9-ws4jl" Jan 14 23:44:30.988329 kubelet[2831]: I0114 23:44:30.988272 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff6qr\" (UniqueName: \"kubernetes.io/projected/f51854a3-fd90-4629-82ca-74bacf8c914d-kube-api-access-ff6qr\") pod \"coredns-668d6bf9bc-x9dt4\" (UID: \"f51854a3-fd90-4629-82ca-74bacf8c914d\") " pod="kube-system/coredns-668d6bf9bc-x9dt4" Jan 14 23:44:30.988497 kubelet[2831]: I0114 23:44:30.988428 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cdjc\" (UniqueName: \"kubernetes.io/projected/5ba12bc6-2080-48f6-9bf7-54c301828a15-kube-api-access-7cdjc\") pod \"calico-kube-controllers-8479c65bc7-v2wlc\" (UID: \"5ba12bc6-2080-48f6-9bf7-54c301828a15\") " pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" Jan 14 23:44:30.988497 kubelet[2831]: I0114 23:44:30.988462 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shbx6\" (UniqueName: \"kubernetes.io/projected/13840ef0-91ac-4434-84e9-74d7c85cf9e0-kube-api-access-shbx6\") pod \"whisker-8687568cb9-ws4jl\" (UID: \"13840ef0-91ac-4434-84e9-74d7c85cf9e0\") " pod="calico-system/whisker-8687568cb9-ws4jl" Jan 14 23:44:30.989808 kubelet[2831]: I0114 23:44:30.988482 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qchrd\" (UniqueName: \"kubernetes.io/projected/a676c5b0-5016-4d83-8418-6221cf68e214-kube-api-access-qchrd\") pod \"calico-apiserver-866b97bccb-lgshp\" (UID: \"a676c5b0-5016-4d83-8418-6221cf68e214\") " pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" Jan 14 23:44:30.989971 kubelet[2831]: I0114 23:44:30.989908 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4dde8273-6048-49a9-af62-dad2628bc3c0-config-volume\") pod \"coredns-668d6bf9bc-2hdwq\" (UID: \"4dde8273-6048-49a9-af62-dad2628bc3c0\") " pod="kube-system/coredns-668d6bf9bc-2hdwq" Jan 14 23:44:30.989971 kubelet[2831]: I0114 23:44:30.989929 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f51854a3-fd90-4629-82ca-74bacf8c914d-config-volume\") pod \"coredns-668d6bf9bc-x9dt4\" (UID: \"f51854a3-fd90-4629-82ca-74bacf8c914d\") " pod="kube-system/coredns-668d6bf9bc-x9dt4" Jan 14 23:44:30.990556 kubelet[2831]: I0114 23:44:30.990431 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbqjp\" (UniqueName: \"kubernetes.io/projected/7928b74a-e68d-4722-9a18-4a12587c5970-kube-api-access-zbqjp\") pod \"goldmane-666569f655-68w75\" (UID: \"7928b74a-e68d-4722-9a18-4a12587c5970\") " pod="calico-system/goldmane-666569f655-68w75" Jan 14 23:44:30.990556 kubelet[2831]: I0114 23:44:30.990466 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a676c5b0-5016-4d83-8418-6221cf68e214-calico-apiserver-certs\") pod \"calico-apiserver-866b97bccb-lgshp\" (UID: \"a676c5b0-5016-4d83-8418-6221cf68e214\") " pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" Jan 14 23:44:31.213230 containerd[1607]: time="2026-01-14T23:44:31.213147292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x9dt4,Uid:f51854a3-fd90-4629-82ca-74bacf8c914d,Namespace:kube-system,Attempt:0,}" Jan 14 23:44:31.239424 containerd[1607]: time="2026-01-14T23:44:31.239359727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8479c65bc7-v2wlc,Uid:5ba12bc6-2080-48f6-9bf7-54c301828a15,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:31.251031 containerd[1607]: time="2026-01-14T23:44:31.250967393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8687568cb9-ws4jl,Uid:13840ef0-91ac-4434-84e9-74d7c85cf9e0,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:31.265062 containerd[1607]: time="2026-01-14T23:44:31.264956156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866b97bccb-lgshp,Uid:a676c5b0-5016-4d83-8418-6221cf68e214,Namespace:calico-apiserver,Attempt:0,}" Jan 14 23:44:31.274056 containerd[1607]: time="2026-01-14T23:44:31.273999951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2hdwq,Uid:4dde8273-6048-49a9-af62-dad2628bc3c0,Namespace:kube-system,Attempt:0,}" Jan 14 23:44:31.281153 containerd[1607]: time="2026-01-14T23:44:31.281074158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866b97bccb-j7fj2,Uid:f86c2f11-4390-42f5-9590-40a5b08260db,Namespace:calico-apiserver,Attempt:0,}" Jan 14 23:44:31.297532 containerd[1607]: time="2026-01-14T23:44:31.297458060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-68w75,Uid:7928b74a-e68d-4722-9a18-4a12587c5970,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:31.309827 systemd[1]: Created slice kubepods-besteffort-podcced8c28_8577_4bc8_b036_c07227b38f48.slice - libcontainer container kubepods-besteffort-podcced8c28_8577_4bc8_b036_c07227b38f48.slice. 
Jan 14 23:44:31.316225 containerd[1607]: time="2026-01-14T23:44:31.316166416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kh6gk,Uid:cced8c28-8577-4bc8-b036-c07227b38f48,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:31.484618 containerd[1607]: time="2026-01-14T23:44:31.483522417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 23:44:31.512242 containerd[1607]: time="2026-01-14T23:44:31.511722601Z" level=error msg="Failed to destroy network for sandbox \"d6c68040715064db9941c71b053cee316665da15311df0a0c18af181f1fb2b1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.528846 containerd[1607]: time="2026-01-14T23:44:31.526722999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8479c65bc7-v2wlc,Uid:5ba12bc6-2080-48f6-9bf7-54c301828a15,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6c68040715064db9941c71b053cee316665da15311df0a0c18af181f1fb2b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.528846 containerd[1607]: time="2026-01-14T23:44:31.527692912Z" level=error msg="Failed to destroy network for sandbox \"d45313daffdf935238d124f69aba343ca66b39c87046ce7579e0e96d79bcbac7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.529097 kubelet[2831]: E0114 23:44:31.527104 2831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6c68040715064db9941c71b053cee316665da15311df0a0c18af181f1fb2b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.529097 kubelet[2831]: E0114 23:44:31.527167 2831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6c68040715064db9941c71b053cee316665da15311df0a0c18af181f1fb2b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" Jan 14 23:44:31.529097 kubelet[2831]: E0114 23:44:31.527188 2831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6c68040715064db9941c71b053cee316665da15311df0a0c18af181f1fb2b1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" Jan 14 23:44:31.529311 kubelet[2831]: E0114 23:44:31.528654 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8479c65bc7-v2wlc_calico-system(5ba12bc6-2080-48f6-9bf7-54c301828a15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-8479c65bc7-v2wlc_calico-system(5ba12bc6-2080-48f6-9bf7-54c301828a15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6c68040715064db9941c71b053cee316665da15311df0a0c18af181f1fb2b1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" podUID="5ba12bc6-2080-48f6-9bf7-54c301828a15" Jan 14 23:44:31.541922 containerd[1607]: time="2026-01-14T23:44:31.541729199Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x9dt4,Uid:f51854a3-fd90-4629-82ca-74bacf8c914d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d45313daffdf935238d124f69aba343ca66b39c87046ce7579e0e96d79bcbac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.543801 kubelet[2831]: E0114 23:44:31.543735 2831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d45313daffdf935238d124f69aba343ca66b39c87046ce7579e0e96d79bcbac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.544292 kubelet[2831]: E0114 23:44:31.544012 2831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d45313daffdf935238d124f69aba343ca66b39c87046ce7579e0e96d79bcbac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x9dt4" Jan 14 23:44:31.544292 kubelet[2831]: E0114 23:44:31.544234 2831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d45313daffdf935238d124f69aba343ca66b39c87046ce7579e0e96d79bcbac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x9dt4" Jan 14 23:44:31.544528 kubelet[2831]: E0114 23:44:31.544391 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-x9dt4_kube-system(f51854a3-fd90-4629-82ca-74bacf8c914d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-x9dt4_kube-system(f51854a3-fd90-4629-82ca-74bacf8c914d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d45313daffdf935238d124f69aba343ca66b39c87046ce7579e0e96d79bcbac7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-x9dt4" podUID="f51854a3-fd90-4629-82ca-74bacf8c914d" Jan 14 23:44:31.562990 containerd[1607]: time="2026-01-14T23:44:31.562933860Z" level=error msg="Failed to destroy network for sandbox \"bf3724753fca3ec01a538935ce1ece6532096bbab8027c87b036acb4956af94c\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.566864 containerd[1607]: time="2026-01-14T23:44:31.566761786Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2hdwq,Uid:4dde8273-6048-49a9-af62-dad2628bc3c0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf3724753fca3ec01a538935ce1ece6532096bbab8027c87b036acb4956af94c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.567388 kubelet[2831]: E0114 23:44:31.567195 2831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf3724753fca3ec01a538935ce1ece6532096bbab8027c87b036acb4956af94c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.567388 kubelet[2831]: E0114 23:44:31.567250 2831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf3724753fca3ec01a538935ce1ece6532096bbab8027c87b036acb4956af94c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2hdwq" Jan 14 23:44:31.567388 kubelet[2831]: E0114 23:44:31.567277 2831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf3724753fca3ec01a538935ce1ece6532096bbab8027c87b036acb4956af94c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-2hdwq" Jan 14 23:44:31.568604 kubelet[2831]: E0114 23:44:31.567642 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-2hdwq_kube-system(4dde8273-6048-49a9-af62-dad2628bc3c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-2hdwq_kube-system(4dde8273-6048-49a9-af62-dad2628bc3c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf3724753fca3ec01a538935ce1ece6532096bbab8027c87b036acb4956af94c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-2hdwq" podUID="4dde8273-6048-49a9-af62-dad2628bc3c0" Jan 14 23:44:31.569653 containerd[1607]: time="2026-01-14T23:44:31.569464627Z" level=error msg="Failed to destroy network for sandbox \"0920b62ed9f13f5c8ac92f2e76edec4fa9cbdee38052b97a423b3e478e1b939d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.569882 containerd[1607]: time="2026-01-14T23:44:31.569843495Z" level=error msg="Failed to destroy network for sandbox 
\"57e6172e50d7c3bfa511140d3243949db8d1d7e5d5eccd002a9d1565021006d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.573590 containerd[1607]: time="2026-01-14T23:44:31.573304594Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866b97bccb-lgshp,Uid:a676c5b0-5016-4d83-8418-6221cf68e214,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0920b62ed9f13f5c8ac92f2e76edec4fa9cbdee38052b97a423b3e478e1b939d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.573841 kubelet[2831]: E0114 23:44:31.573802 2831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0920b62ed9f13f5c8ac92f2e76edec4fa9cbdee38052b97a423b3e478e1b939d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.573960 kubelet[2831]: E0114 23:44:31.573863 2831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0920b62ed9f13f5c8ac92f2e76edec4fa9cbdee38052b97a423b3e478e1b939d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" Jan 14 23:44:31.573960 kubelet[2831]: E0114 23:44:31.573886 2831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0920b62ed9f13f5c8ac92f2e76edec4fa9cbdee38052b97a423b3e478e1b939d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" Jan 14 23:44:31.573960 kubelet[2831]: E0114 23:44:31.573938 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-866b97bccb-lgshp_calico-apiserver(a676c5b0-5016-4d83-8418-6221cf68e214)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-866b97bccb-lgshp_calico-apiserver(a676c5b0-5016-4d83-8418-6221cf68e214)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0920b62ed9f13f5c8ac92f2e76edec4fa9cbdee38052b97a423b3e478e1b939d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" podUID="a676c5b0-5016-4d83-8418-6221cf68e214" Jan 14 23:44:31.575033 containerd[1607]: time="2026-01-14T23:44:31.574982199Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8687568cb9-ws4jl,Uid:13840ef0-91ac-4434-84e9-74d7c85cf9e0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57e6172e50d7c3bfa511140d3243949db8d1d7e5d5eccd002a9d1565021006d1\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.576637 kubelet[2831]: E0114 23:44:31.576603 2831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57e6172e50d7c3bfa511140d3243949db8d1d7e5d5eccd002a9d1565021006d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.576924 kubelet[2831]: E0114 23:44:31.576873 2831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57e6172e50d7c3bfa511140d3243949db8d1d7e5d5eccd002a9d1565021006d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8687568cb9-ws4jl" Jan 14 23:44:31.577153 kubelet[2831]: E0114 23:44:31.576899 2831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57e6172e50d7c3bfa511140d3243949db8d1d7e5d5eccd002a9d1565021006d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8687568cb9-ws4jl" Jan 14 23:44:31.577153 kubelet[2831]: E0114 23:44:31.577071 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8687568cb9-ws4jl_calico-system(13840ef0-91ac-4434-84e9-74d7c85cf9e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8687568cb9-ws4jl_calico-system(13840ef0-91ac-4434-84e9-74d7c85cf9e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57e6172e50d7c3bfa511140d3243949db8d1d7e5d5eccd002a9d1565021006d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8687568cb9-ws4jl" podUID="13840ef0-91ac-4434-84e9-74d7c85cf9e0" Jan 14 23:44:31.589902 containerd[1607]: time="2026-01-14T23:44:31.589759541Z" level=error msg="Failed to destroy network for sandbox \"3113f473e1ece126db4c29a9662c4c33b0b0280d45ac4257f025e770279a02d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.593478 containerd[1607]: time="2026-01-14T23:44:31.593334007Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866b97bccb-j7fj2,Uid:f86c2f11-4390-42f5-9590-40a5b08260db,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3113f473e1ece126db4c29a9662c4c33b0b0280d45ac4257f025e770279a02d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.594983 kubelet[2831]: E0114 23:44:31.594937 2831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"3113f473e1ece126db4c29a9662c4c33b0b0280d45ac4257f025e770279a02d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.595801 kubelet[2831]: E0114 23:44:31.595659 2831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3113f473e1ece126db4c29a9662c4c33b0b0280d45ac4257f025e770279a02d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" Jan 14 23:44:31.595801 kubelet[2831]: E0114 23:44:31.595712 2831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3113f473e1ece126db4c29a9662c4c33b0b0280d45ac4257f025e770279a02d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" Jan 14 23:44:31.596070 kubelet[2831]: E0114 23:44:31.596000 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-866b97bccb-j7fj2_calico-apiserver(f86c2f11-4390-42f5-9590-40a5b08260db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-866b97bccb-j7fj2_calico-apiserver(f86c2f11-4390-42f5-9590-40a5b08260db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3113f473e1ece126db4c29a9662c4c33b0b0280d45ac4257f025e770279a02d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" podUID="f86c2f11-4390-42f5-9590-40a5b08260db" Jan 14 23:44:31.604576 containerd[1607]: time="2026-01-14T23:44:31.604530042Z" level=error msg="Failed to destroy network for sandbox \"b91ef7dfbb365fd0ed50a37f2819a763ca1bafeb808a9d6aba9d3120093f366b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.609044 containerd[1607]: time="2026-01-14T23:44:31.608890488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kh6gk,Uid:cced8c28-8577-4bc8-b036-c07227b38f48,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b91ef7dfbb365fd0ed50a37f2819a763ca1bafeb808a9d6aba9d3120093f366b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.609675 kubelet[2831]: E0114 23:44:31.609550 2831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b91ef7dfbb365fd0ed50a37f2819a763ca1bafeb808a9d6aba9d3120093f366b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.609675 
kubelet[2831]: E0114 23:44:31.609654 2831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b91ef7dfbb365fd0ed50a37f2819a763ca1bafeb808a9d6aba9d3120093f366b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kh6gk" Jan 14 23:44:31.610055 kubelet[2831]: E0114 23:44:31.610023 2831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b91ef7dfbb365fd0ed50a37f2819a763ca1bafeb808a9d6aba9d3120093f366b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kh6gk" Jan 14 23:44:31.610109 kubelet[2831]: E0114 23:44:31.610089 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kh6gk_calico-system(cced8c28-8577-4bc8-b036-c07227b38f48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kh6gk_calico-system(cced8c28-8577-4bc8-b036-c07227b38f48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b91ef7dfbb365fd0ed50a37f2819a763ca1bafeb808a9d6aba9d3120093f366b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:44:31.612106 containerd[1607]: time="2026-01-14T23:44:31.611920594Z" level=error msg="Failed to destroy network for sandbox \"3288ebb15d8188efc901b50016e67cc90a8f62fd8c0df5a534c791c1fbde7fc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.614747 containerd[1607]: time="2026-01-14T23:44:31.614639956Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-68w75,Uid:7928b74a-e68d-4722-9a18-4a12587c5970,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3288ebb15d8188efc901b50016e67cc90a8f62fd8c0df5a534c791c1fbde7fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.615759 kubelet[2831]: E0114 23:44:31.615691 2831 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3288ebb15d8188efc901b50016e67cc90a8f62fd8c0df5a534c791c1fbde7fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:44:31.615759 kubelet[2831]: E0114 23:44:31.615743 2831 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3288ebb15d8188efc901b50016e67cc90a8f62fd8c0df5a534c791c1fbde7fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-68w75" Jan 14 23:44:31.616163 kubelet[2831]: E0114 23:44:31.615761 2831 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3288ebb15d8188efc901b50016e67cc90a8f62fd8c0df5a534c791c1fbde7fc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-68w75" Jan 14 23:44:31.616163 kubelet[2831]: E0114 23:44:31.615839 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-68w75_calico-system(7928b74a-e68d-4722-9a18-4a12587c5970)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-68w75_calico-system(7928b74a-e68d-4722-9a18-4a12587c5970)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3288ebb15d8188efc901b50016e67cc90a8f62fd8c0df5a534c791c1fbde7fc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-68w75" podUID="7928b74a-e68d-4722-9a18-4a12587c5970" Jan 14 23:44:36.269896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount978242247.mount: Deactivated successfully. Jan 14 23:44:36.292900 containerd[1607]: time="2026-01-14T23:44:36.292785702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:36.294446 containerd[1607]: time="2026-01-14T23:44:36.293769407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 14 23:44:36.295472 containerd[1607]: time="2026-01-14T23:44:36.295418155Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:36.297604 containerd[1607]: time="2026-01-14T23:44:36.297525414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:36.298372 containerd[1607]: time="2026-01-14T23:44:36.298337428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.814766327s" Jan 14 23:44:36.298487 containerd[1607]: time="2026-01-14T23:44:36.298471197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 14 23:44:36.320377 containerd[1607]: time="2026-01-14T23:44:36.320334877Z" level=info msg="CreateContainer within sandbox \"e17dd02d4be62dc955e92524e884c9da1a46a2c76b3955fd27723a75a11bff2e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 23:44:36.338746 containerd[1607]: time="2026-01-14T23:44:36.337362238Z" level=info msg="Container 
b9fd66b70967b8c0da3f96d4157307fc338d78a3dfdc2255569909a3982c2932: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:44:36.354236 containerd[1607]: time="2026-01-14T23:44:36.354159825Z" level=info msg="CreateContainer within sandbox \"e17dd02d4be62dc955e92524e884c9da1a46a2c76b3955fd27723a75a11bff2e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b9fd66b70967b8c0da3f96d4157307fc338d78a3dfdc2255569909a3982c2932\"" Jan 14 23:44:36.356556 containerd[1607]: time="2026-01-14T23:44:36.356517260Z" level=info msg="StartContainer for \"b9fd66b70967b8c0da3f96d4157307fc338d78a3dfdc2255569909a3982c2932\"" Jan 14 23:44:36.358910 containerd[1607]: time="2026-01-14T23:44:36.358870095Z" level=info msg="connecting to shim b9fd66b70967b8c0da3f96d4157307fc338d78a3dfdc2255569909a3982c2932" address="unix:///run/containerd/s/aed15741f53a7f6717db86a864da4906ab592927aefa151282384894538d75d2" protocol=ttrpc version=3 Jan 14 23:44:36.392906 systemd[1]: Started cri-containerd-b9fd66b70967b8c0da3f96d4157307fc338d78a3dfdc2255569909a3982c2932.scope - libcontainer container b9fd66b70967b8c0da3f96d4157307fc338d78a3dfdc2255569909a3982c2932. Jan 14 23:44:36.458000 audit: BPF prog-id=170 op=LOAD Jan 14 23:44:36.458000 audit[3821]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=3336 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:36.465950 kernel: audit: type=1334 audit(1768434276.458:571): prog-id=170 op=LOAD Jan 14 23:44:36.466079 kernel: audit: type=1300 audit(1768434276.458:571): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=3336 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:36.458000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239666436366237303936376238633064613366393664343135373330 Jan 14 23:44:36.469866 kernel: audit: type=1327 audit(1768434276.458:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239666436366237303936376238633064613366393664343135373330 Jan 14 23:44:36.470552 kernel: audit: type=1334 audit(1768434276.458:572): prog-id=171 op=LOAD Jan 14 23:44:36.471394 kernel: audit: type=1300 audit(1768434276.458:572): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=3336 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:36.458000 audit: BPF prog-id=171 op=LOAD Jan 14 23:44:36.458000 audit[3821]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=3336 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:36.458000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239666436366237303936376238633064613366393664343135373330 Jan 14 23:44:36.475866 kernel: audit: type=1327 audit(1768434276.458:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239666436366237303936376238633064613366393664343135373330 Jan 14 23:44:36.459000 audit: BPF prog-id=171 op=UNLOAD Jan 14 23:44:36.477613 kernel: audit: type=1334 audit(1768434276.459:573): prog-id=171 op=UNLOAD Jan 14 23:44:36.459000 audit[3821]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:36.480636 kernel: audit: type=1300 audit(1768434276.459:573): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:36.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239666436366237303936376238633064613366393664343135373330 Jan 14 23:44:36.483356 kernel: audit: type=1327 audit(1768434276.459:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239666436366237303936376238633064613366393664343135373330 Jan 14 23:44:36.484839 kernel: audit: type=1334 audit(1768434276.459:574): prog-id=170 op=UNLOAD Jan 14 23:44:36.459000 audit: BPF prog-id=170 op=UNLOAD Jan 14 23:44:36.459000 audit[3821]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3336 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:36.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239666436366237303936376238633064613366393664343135373330 Jan 14 23:44:36.459000 audit: BPF prog-id=172 op=LOAD Jan 14 23:44:36.459000 audit[3821]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=3336 pid=3821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:36.459000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239666436366237303936376238633064613366393664343135373330 Jan 14 23:44:36.512540 
containerd[1607]: time="2026-01-14T23:44:36.510862027Z" level=info msg="StartContainer for \"b9fd66b70967b8c0da3f96d4157307fc338d78a3dfdc2255569909a3982c2932\" returns successfully" Jan 14 23:44:36.695641 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 23:44:36.695903 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 14 23:44:36.943382 kubelet[2831]: I0114 23:44:36.943203 2831 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/13840ef0-91ac-4434-84e9-74d7c85cf9e0-whisker-backend-key-pair\") pod \"13840ef0-91ac-4434-84e9-74d7c85cf9e0\" (UID: \"13840ef0-91ac-4434-84e9-74d7c85cf9e0\") " Jan 14 23:44:36.944040 kubelet[2831]: I0114 23:44:36.943638 2831 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13840ef0-91ac-4434-84e9-74d7c85cf9e0-whisker-ca-bundle\") pod \"13840ef0-91ac-4434-84e9-74d7c85cf9e0\" (UID: \"13840ef0-91ac-4434-84e9-74d7c85cf9e0\") " Jan 14 23:44:36.944040 kubelet[2831]: I0114 23:44:36.943668 2831 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shbx6\" (UniqueName: \"kubernetes.io/projected/13840ef0-91ac-4434-84e9-74d7c85cf9e0-kube-api-access-shbx6\") pod \"13840ef0-91ac-4434-84e9-74d7c85cf9e0\" (UID: \"13840ef0-91ac-4434-84e9-74d7c85cf9e0\") " Jan 14 23:44:36.953678 kubelet[2831]: I0114 23:44:36.953286 2831 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13840ef0-91ac-4434-84e9-74d7c85cf9e0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "13840ef0-91ac-4434-84e9-74d7c85cf9e0" (UID: "13840ef0-91ac-4434-84e9-74d7c85cf9e0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 23:44:36.957201 kubelet[2831]: I0114 23:44:36.957147 2831 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13840ef0-91ac-4434-84e9-74d7c85cf9e0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "13840ef0-91ac-4434-84e9-74d7c85cf9e0" (UID: "13840ef0-91ac-4434-84e9-74d7c85cf9e0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 23:44:36.958031 kubelet[2831]: I0114 23:44:36.957990 2831 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13840ef0-91ac-4434-84e9-74d7c85cf9e0-kube-api-access-shbx6" (OuterVolumeSpecName: "kube-api-access-shbx6") pod "13840ef0-91ac-4434-84e9-74d7c85cf9e0" (UID: "13840ef0-91ac-4434-84e9-74d7c85cf9e0"). InnerVolumeSpecName "kube-api-access-shbx6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 23:44:37.045048 kubelet[2831]: I0114 23:44:37.044982 2831 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/13840ef0-91ac-4434-84e9-74d7c85cf9e0-whisker-backend-key-pair\") on node \"ci-4515-1-0-n-abf6d467b1\" DevicePath \"\"" Jan 14 23:44:37.045048 kubelet[2831]: I0114 23:44:37.045053 2831 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13840ef0-91ac-4434-84e9-74d7c85cf9e0-whisker-ca-bundle\") on node \"ci-4515-1-0-n-abf6d467b1\" DevicePath \"\"" Jan 14 23:44:37.045268 kubelet[2831]: I0114 23:44:37.045078 2831 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-shbx6\" (UniqueName: \"kubernetes.io/projected/13840ef0-91ac-4434-84e9-74d7c85cf9e0-kube-api-access-shbx6\") on node \"ci-4515-1-0-n-abf6d467b1\" DevicePath \"\"" Jan 14 23:44:37.271781 systemd[1]: var-lib-kubelet-pods-13840ef0\x2d91ac\x2d4434\x2d84e9\x2d74d7c85cf9e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dshbx6.mount: Deactivated successfully. Jan 14 23:44:37.271881 systemd[1]: var-lib-kubelet-pods-13840ef0\x2d91ac\x2d4434\x2d84e9\x2d74d7c85cf9e0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 14 23:44:37.512884 systemd[1]: Removed slice kubepods-besteffort-pod13840ef0_91ac_4434_84e9_74d7c85cf9e0.slice - libcontainer container kubepods-besteffort-pod13840ef0_91ac_4434_84e9_74d7c85cf9e0.slice. Jan 14 23:44:37.529028 kubelet[2831]: I0114 23:44:37.528419 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d9z5t" podStartSLOduration=2.659628146 podStartE2EDuration="15.528393301s" podCreationTimestamp="2026-01-14 23:44:22 +0000 UTC" firstStartedPulling="2026-01-14 23:44:23.431988512 +0000 UTC m=+31.256234792" lastFinishedPulling="2026-01-14 23:44:36.300753667 +0000 UTC m=+44.124999947" observedRunningTime="2026-01-14 23:44:37.527479162 +0000 UTC m=+45.351725442" watchObservedRunningTime="2026-01-14 23:44:37.528393301 +0000 UTC m=+45.352639581" Jan 14 23:44:37.600879 systemd[1]: Created slice kubepods-besteffort-pod462591b6_7c04_4bb8_91df_676e6e4e63fa.slice - libcontainer container kubepods-besteffort-pod462591b6_7c04_4bb8_91df_676e6e4e63fa.slice. 
Jan 14 23:44:37.651678 kubelet[2831]: I0114 23:44:37.651534 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/462591b6-7c04-4bb8-91df-676e6e4e63fa-whisker-ca-bundle\") pod \"whisker-64f4bfdcf7-qqvhh\" (UID: \"462591b6-7c04-4bb8-91df-676e6e4e63fa\") " pod="calico-system/whisker-64f4bfdcf7-qqvhh" Jan 14 23:44:37.652423 kubelet[2831]: I0114 23:44:37.652398 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwn68\" (UniqueName: \"kubernetes.io/projected/462591b6-7c04-4bb8-91df-676e6e4e63fa-kube-api-access-dwn68\") pod \"whisker-64f4bfdcf7-qqvhh\" (UID: \"462591b6-7c04-4bb8-91df-676e6e4e63fa\") " pod="calico-system/whisker-64f4bfdcf7-qqvhh" Jan 14 23:44:37.652562 kubelet[2831]: I0114 23:44:37.652545 2831 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/462591b6-7c04-4bb8-91df-676e6e4e63fa-whisker-backend-key-pair\") pod \"whisker-64f4bfdcf7-qqvhh\" (UID: \"462591b6-7c04-4bb8-91df-676e6e4e63fa\") " pod="calico-system/whisker-64f4bfdcf7-qqvhh" Jan 14 23:44:37.908358 containerd[1607]: time="2026-01-14T23:44:37.908197336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64f4bfdcf7-qqvhh,Uid:462591b6-7c04-4bb8-91df-676e6e4e63fa,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:38.143574 systemd-networkd[1498]: calid0745118252: Link UP Jan 14 23:44:38.144097 systemd-networkd[1498]: calid0745118252: Gained carrier Jan 14 23:44:38.167675 containerd[1607]: 2026-01-14 23:44:37.945 [INFO][3910] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 23:44:38.167675 containerd[1607]: 2026-01-14 23:44:38.004 [INFO][3910] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--abf6d467b1-k8s-whisker--64f4bfdcf7--qqvhh-eth0 whisker-64f4bfdcf7- calico-system 462591b6-7c04-4bb8-91df-676e6e4e63fa 871 0 2026-01-14 23:44:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64f4bfdcf7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515-1-0-n-abf6d467b1 whisker-64f4bfdcf7-qqvhh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid0745118252 [] [] }} ContainerID="6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" Namespace="calico-system" Pod="whisker-64f4bfdcf7-qqvhh" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-whisker--64f4bfdcf7--qqvhh-" Jan 14 23:44:38.167675 containerd[1607]: 2026-01-14 23:44:38.004 [INFO][3910] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" Namespace="calico-system" Pod="whisker-64f4bfdcf7-qqvhh" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-whisker--64f4bfdcf7--qqvhh-eth0" Jan 14 23:44:38.167675 containerd[1607]: 2026-01-14 23:44:38.070 [INFO][3921] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" HandleID="k8s-pod-network.6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" Workload="ci--4515--1--0--n--abf6d467b1-k8s-whisker--64f4bfdcf7--qqvhh-eth0" Jan 14 23:44:38.167882 containerd[1607]: 2026-01-14 23:44:38.070 [INFO][3921] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" HandleID="k8s-pod-network.6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" Workload="ci--4515--1--0--n--abf6d467b1-k8s-whisker--64f4bfdcf7--qqvhh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b740), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-n-abf6d467b1", "pod":"whisker-64f4bfdcf7-qqvhh", "timestamp":"2026-01-14 23:44:38.070468099 +0000 UTC"}, Hostname:"ci-4515-1-0-n-abf6d467b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:44:38.167882 containerd[1607]: 2026-01-14 23:44:38.070 [INFO][3921] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:44:38.167882 containerd[1607]: 2026-01-14 23:44:38.070 [INFO][3921] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 23:44:38.167882 containerd[1607]: 2026-01-14 23:44:38.070 [INFO][3921] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-abf6d467b1' Jan 14 23:44:38.167882 containerd[1607]: 2026-01-14 23:44:38.085 [INFO][3921] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:38.167882 containerd[1607]: 2026-01-14 23:44:38.094 [INFO][3921] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:38.167882 containerd[1607]: 2026-01-14 23:44:38.106 [INFO][3921] ipam/ipam.go 511: Trying affinity for 192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:38.167882 containerd[1607]: 2026-01-14 23:44:38.109 [INFO][3921] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:38.167882 containerd[1607]: 2026-01-14 23:44:38.113 [INFO][3921] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:38.168060 containerd[1607]: 2026-01-14 23:44:38.113 [INFO][3921] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:38.168060 containerd[1607]: 2026-01-14 23:44:38.115 [INFO][3921] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3 Jan 14 23:44:38.168060 containerd[1607]: 2026-01-14 23:44:38.121 [INFO][3921] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:38.168060 containerd[1607]: 2026-01-14 23:44:38.129 [INFO][3921] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.129/26] block=192.168.89.128/26 handle="k8s-pod-network.6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:38.168060 containerd[1607]: 2026-01-14 23:44:38.130 [INFO][3921] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.129/26] handle="k8s-pod-network.6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:38.168060 
containerd[1607]: 2026-01-14 23:44:38.130 [INFO][3921] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 23:44:38.168060 containerd[1607]: 2026-01-14 23:44:38.131 [INFO][3921] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.129/26] IPv6=[] ContainerID="6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" HandleID="k8s-pod-network.6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" Workload="ci--4515--1--0--n--abf6d467b1-k8s-whisker--64f4bfdcf7--qqvhh-eth0" Jan 14 23:44:38.168286 containerd[1607]: 2026-01-14 23:44:38.134 [INFO][3910] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" Namespace="calico-system" Pod="whisker-64f4bfdcf7-qqvhh" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-whisker--64f4bfdcf7--qqvhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--abf6d467b1-k8s-whisker--64f4bfdcf7--qqvhh-eth0", GenerateName:"whisker-64f4bfdcf7-", Namespace:"calico-system", SelfLink:"", UID:"462591b6-7c04-4bb8-91df-676e6e4e63fa", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64f4bfdcf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-abf6d467b1", ContainerID:"", Pod:"whisker-64f4bfdcf7-qqvhh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.89.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid0745118252", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:38.168286 containerd[1607]: 2026-01-14 23:44:38.134 [INFO][3910] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.129/32] ContainerID="6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" Namespace="calico-system" Pod="whisker-64f4bfdcf7-qqvhh" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-whisker--64f4bfdcf7--qqvhh-eth0" Jan 14 23:44:38.168376 containerd[1607]: 2026-01-14 23:44:38.134 [INFO][3910] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0745118252 ContainerID="6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" Namespace="calico-system" Pod="whisker-64f4bfdcf7-qqvhh" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-whisker--64f4bfdcf7--qqvhh-eth0" Jan 14 23:44:38.168376 containerd[1607]: 2026-01-14 23:44:38.144 [INFO][3910] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" Namespace="calico-system" Pod="whisker-64f4bfdcf7-qqvhh" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-whisker--64f4bfdcf7--qqvhh-eth0" Jan 14 23:44:38.168420 containerd[1607]: 2026-01-14 23:44:38.145 [INFO][3910] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" Namespace="calico-system" Pod="whisker-64f4bfdcf7-qqvhh" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-whisker--64f4bfdcf7--qqvhh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--abf6d467b1-k8s-whisker--64f4bfdcf7--qqvhh-eth0", GenerateName:"whisker-64f4bfdcf7-", Namespace:"calico-system", SelfLink:"", UID:"462591b6-7c04-4bb8-91df-676e6e4e63fa", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64f4bfdcf7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-abf6d467b1", ContainerID:"6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3", Pod:"whisker-64f4bfdcf7-qqvhh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.89.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid0745118252", MAC:"86:8b:71:84:97:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:38.168536 containerd[1607]: 2026-01-14 23:44:38.161 [INFO][3910] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" Namespace="calico-system" Pod="whisker-64f4bfdcf7-qqvhh" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-whisker--64f4bfdcf7--qqvhh-eth0" Jan 14 23:44:38.208747 containerd[1607]: time="2026-01-14T23:44:38.208624617Z" level=info msg="connecting to shim 6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3" address="unix:///run/containerd/s/ed7e75cc381e8be1f9e75f12226e7abb90dd59f74b51620099ccb17c021b1944" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:38.241845 systemd[1]: Started cri-containerd-6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3.scope - libcontainer container 6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3. 
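The audit records that follow trace each BPF program load and clone performed by runc and by Calico's bpftool helpers; their proctitle= fields are the invoking command lines, which the kernel audit subsystem hex-encodes with NUL-separated arguments. As a minimal sketch (the helper name is illustrative, and the sample value is copied verbatim from one of the bpftool records further down), such a field can be decoded like this:

    # Sketch: decode a kernel-audit PROCTITLE value into a readable command line.
    # The audit subsystem hex-encodes the process title and separates argv
    # entries with NUL bytes; decode_proctitle() simply reverses that encoding.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

    if __name__ == "__main__":
        # Value taken from one of the bpftool audit records below.
        sample = "627066746F6F6C006D6170006C697374002D2D6A736F6E"
        print(decode_proctitle(sample))  # prints: bpftool map list --json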
Jan 14 23:44:38.305225 kubelet[2831]: I0114 23:44:38.305174 2831 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13840ef0-91ac-4434-84e9-74d7c85cf9e0" path="/var/lib/kubelet/pods/13840ef0-91ac-4434-84e9-74d7c85cf9e0/volumes" Jan 14 23:44:38.312000 audit: BPF prog-id=173 op=LOAD Jan 14 23:44:38.312000 audit: BPF prog-id=174 op=LOAD Jan 14 23:44:38.312000 audit[3984]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3966 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663386335633161616534623037326561653764383661653861656263 Jan 14 23:44:38.312000 audit: BPF prog-id=174 op=UNLOAD Jan 14 23:44:38.312000 audit[3984]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3966 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.312000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663386335633161616534623037326561653764383661653861656263 Jan 14 23:44:38.314000 audit: BPF prog-id=175 op=LOAD Jan 14 23:44:38.314000 audit[3984]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3966 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663386335633161616534623037326561653764383661653861656263 Jan 14 23:44:38.314000 audit: BPF prog-id=176 op=LOAD Jan 14 23:44:38.314000 audit[3984]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3966 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.314000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663386335633161616534623037326561653764383661653861656263 Jan 14 23:44:38.314000 audit: BPF prog-id=176 op=UNLOAD Jan 14 23:44:38.314000 audit[3984]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3966 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.314000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663386335633161616534623037326561653764383661653861656263 Jan 14 23:44:38.316000 audit: BPF prog-id=175 op=UNLOAD Jan 14 23:44:38.316000 audit[3984]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3966 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663386335633161616534623037326561653764383661653861656263 Jan 14 23:44:38.316000 audit: BPF prog-id=177 op=LOAD Jan 14 23:44:38.316000 audit[3984]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3966 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3663386335633161616534623037326561653764383661653861656263 Jan 14 23:44:38.390691 containerd[1607]: time="2026-01-14T23:44:38.390560578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64f4bfdcf7-qqvhh,Uid:462591b6-7c04-4bb8-91df-676e6e4e63fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c8c5c1aae4b072eae7d86ae8aebc11c1ee46ec01635fa55fd9b2e5cb61193a3\"" Jan 14 23:44:38.393339 containerd[1607]: time="2026-01-14T23:44:38.392278406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:44:38.728000 audit: BPF prog-id=178 op=LOAD Jan 14 23:44:38.728000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff478d468 a2=98 a3=fffff478d458 items=0 ppid=3949 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.728000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:44:38.729000 audit: BPF prog-id=178 op=UNLOAD Jan 14 23:44:38.729000 audit[4124]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff478d438 a3=0 items=0 ppid=3949 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.729000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:44:38.729000 audit: BPF prog-id=179 op=LOAD Jan 14 
23:44:38.729000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff478d318 a2=74 a3=95 items=0 ppid=3949 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.729000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:44:38.729000 audit: BPF prog-id=179 op=UNLOAD Jan 14 23:44:38.729000 audit[4124]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=3949 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.729000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:44:38.729000 audit: BPF prog-id=180 op=LOAD Jan 14 23:44:38.729000 audit[4124]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff478d348 a2=40 a3=fffff478d378 items=0 ppid=3949 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.729000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:44:38.729000 audit: BPF prog-id=180 op=UNLOAD Jan 14 23:44:38.729000 audit[4124]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff478d378 items=0 ppid=3949 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.729000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:44:38.738000 audit: BPF prog-id=181 op=LOAD Jan 14 23:44:38.738000 audit[4125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc84392d8 a2=98 a3=ffffc84392c8 items=0 ppid=3949 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.738000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.738000 audit: BPF prog-id=181 op=UNLOAD Jan 14 23:44:38.738000 audit[4125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc84392a8 a3=0 items=0 ppid=3949 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.738000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.738000 audit: BPF prog-id=182 op=LOAD Jan 14 23:44:38.738000 audit[4125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc8438f68 a2=74 a3=95 items=0 ppid=3949 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.738000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.738000 audit: BPF prog-id=182 op=UNLOAD Jan 14 23:44:38.738000 audit[4125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=3949 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.738000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.738000 audit: BPF prog-id=183 op=LOAD Jan 14 23:44:38.738000 audit[4125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc8438fc8 a2=94 a3=2 items=0 ppid=3949 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.738000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.739000 audit: BPF prog-id=183 op=UNLOAD Jan 14 23:44:38.739000 audit[4125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=3949 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.739000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.759624 containerd[1607]: time="2026-01-14T23:44:38.758424111Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:38.761465 containerd[1607]: time="2026-01-14T23:44:38.761371777Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:44:38.761680 containerd[1607]: time="2026-01-14T23:44:38.761575750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:38.762265 kubelet[2831]: E0114 23:44:38.762025 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:44:38.762265 kubelet[2831]: E0114 23:44:38.762080 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:44:38.769876 kubelet[2831]: E0114 23:44:38.769803 2831 kuberuntime_manager.go:1341] "Unhandled 
Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:71d58048d3ba4bd9a20f57135d4493c5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwn68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64f4bfdcf7-qqvhh_calico-system(462591b6-7c04-4bb8-91df-676e6e4e63fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:38.772065 containerd[1607]: time="2026-01-14T23:44:38.771992527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 23:44:38.860000 audit: BPF prog-id=184 op=LOAD Jan 14 23:44:38.860000 audit[4125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc8438f88 a2=40 a3=ffffc8438fb8 items=0 ppid=3949 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.860000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.860000 audit: BPF prog-id=184 op=UNLOAD Jan 14 23:44:38.860000 audit[4125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc8438fb8 items=0 ppid=3949 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.860000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.870000 audit: BPF prog-id=185 op=LOAD Jan 14 23:44:38.870000 audit[4125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc8438f98 a2=94 a3=4 items=0 ppid=3949 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.870000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.870000 audit: BPF prog-id=185 op=UNLOAD Jan 14 23:44:38.870000 audit[4125]: SYSCALL arch=c00000b7 syscall=57 success=yes 
exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=3949 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.870000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.871000 audit: BPF prog-id=186 op=LOAD Jan 14 23:44:38.871000 audit[4125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc8438dd8 a2=94 a3=5 items=0 ppid=3949 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.871000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.871000 audit: BPF prog-id=186 op=UNLOAD Jan 14 23:44:38.871000 audit[4125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=3949 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.871000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.871000 audit: BPF prog-id=187 op=LOAD Jan 14 23:44:38.871000 audit[4125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc8439008 a2=94 a3=6 items=0 ppid=3949 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.871000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.871000 audit: BPF prog-id=187 op=UNLOAD Jan 14 23:44:38.871000 audit[4125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=3949 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.871000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.872000 audit: BPF prog-id=188 op=LOAD Jan 14 23:44:38.872000 audit[4125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc84387d8 a2=94 a3=83 items=0 ppid=3949 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.872000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.872000 audit: BPF prog-id=189 op=LOAD Jan 14 23:44:38.872000 audit[4125]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc8438598 a2=94 a3=2 items=0 ppid=3949 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.872000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.873000 audit: BPF prog-id=189 op=UNLOAD Jan 14 23:44:38.873000 audit[4125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=3949 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.873000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.873000 audit: BPF prog-id=188 op=UNLOAD Jan 14 23:44:38.873000 audit[4125]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=20459620 a3=2044cb00 items=0 ppid=3949 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.873000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:44:38.884000 audit: BPF prog-id=190 op=LOAD Jan 14 23:44:38.884000 audit[4128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff3ea73f8 a2=98 a3=fffff3ea73e8 items=0 ppid=3949 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.884000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:44:38.885000 audit: BPF prog-id=190 op=UNLOAD Jan 14 23:44:38.885000 audit[4128]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff3ea73c8 a3=0 items=0 ppid=3949 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.885000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:44:38.885000 audit: BPF prog-id=191 op=LOAD Jan 14 23:44:38.885000 audit[4128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff3ea72a8 a2=74 a3=95 items=0 ppid=3949 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.885000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:44:38.885000 audit: BPF prog-id=191 op=UNLOAD Jan 14 23:44:38.885000 audit[4128]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=3949 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.885000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:44:38.885000 audit: BPF prog-id=192 op=LOAD Jan 14 23:44:38.885000 audit[4128]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 
a0=5 a1=fffff3ea72d8 a2=40 a3=fffff3ea7308 items=0 ppid=3949 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.885000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:44:38.885000 audit: BPF prog-id=192 op=UNLOAD Jan 14 23:44:38.885000 audit[4128]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff3ea7308 items=0 ppid=3949 pid=4128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:38.885000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:44:38.970828 systemd-networkd[1498]: vxlan.calico: Link UP Jan 14 23:44:38.972717 systemd-networkd[1498]: vxlan.calico: Gained carrier Jan 14 23:44:39.008000 audit: BPF prog-id=193 op=LOAD Jan 14 23:44:39.008000 audit[4153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc7418f98 a2=98 a3=ffffc7418f88 items=0 ppid=3949 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.008000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:39.008000 audit: BPF prog-id=193 op=UNLOAD Jan 14 23:44:39.008000 audit[4153]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc7418f68 a3=0 items=0 ppid=3949 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.008000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:39.010000 audit: BPF prog-id=194 op=LOAD Jan 14 23:44:39.010000 audit[4153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc7418c78 a2=74 a3=95 items=0 ppid=3949 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.010000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:39.010000 audit: BPF prog-id=194 op=UNLOAD Jan 14 23:44:39.010000 audit[4153]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=3949 pid=4153 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.010000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:39.010000 audit: BPF prog-id=195 op=LOAD Jan 14 23:44:39.010000 audit[4153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc7418cd8 a2=94 a3=2 items=0 ppid=3949 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.010000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:39.012000 audit: BPF prog-id=195 op=UNLOAD Jan 14 23:44:39.012000 audit[4153]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=3949 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.012000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:39.012000 audit: BPF prog-id=196 op=LOAD Jan 14 23:44:39.012000 audit[4153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc7418b58 a2=40 a3=ffffc7418b88 items=0 ppid=3949 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.012000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:39.012000 audit: BPF prog-id=196 op=UNLOAD Jan 14 23:44:39.012000 audit[4153]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffc7418b88 items=0 ppid=3949 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.012000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:39.012000 audit: BPF prog-id=197 op=LOAD Jan 14 23:44:39.012000 audit[4153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc7418ca8 a2=94 a3=b7 items=0 ppid=3949 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.012000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:39.012000 audit: BPF prog-id=197 op=UNLOAD Jan 14 23:44:39.012000 audit[4153]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=3949 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.012000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:39.014000 audit: BPF prog-id=198 op=LOAD Jan 14 23:44:39.014000 audit[4153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc7418358 a2=94 a3=2 items=0 ppid=3949 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.014000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:39.014000 audit: BPF prog-id=198 op=UNLOAD Jan 14 23:44:39.014000 audit[4153]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=3949 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.014000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:39.014000 audit: BPF prog-id=199 op=LOAD Jan 14 23:44:39.014000 audit[4153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc74184e8 a2=94 a3=30 items=0 ppid=3949 pid=4153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.014000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:44:39.023000 audit: BPF prog-id=200 op=LOAD Jan 14 23:44:39.023000 audit[4157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffee25adc8 a2=98 a3=ffffee25adb8 items=0 ppid=3949 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.023000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.023000 audit: BPF prog-id=200 op=UNLOAD Jan 14 23:44:39.023000 audit[4157]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 
a0=3 a1=57156c a2=ffffee25ad98 a3=0 items=0 ppid=3949 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.023000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.024000 audit: BPF prog-id=201 op=LOAD Jan 14 23:44:39.024000 audit[4157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffee25aa58 a2=74 a3=95 items=0 ppid=3949 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.024000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.024000 audit: BPF prog-id=201 op=UNLOAD Jan 14 23:44:39.024000 audit[4157]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=3949 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.024000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.024000 audit: BPF prog-id=202 op=LOAD Jan 14 23:44:39.024000 audit[4157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffee25aab8 a2=94 a3=2 items=0 ppid=3949 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.024000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.024000 audit: BPF prog-id=202 op=UNLOAD Jan 14 23:44:39.024000 audit[4157]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=3949 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.024000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.109022 containerd[1607]: time="2026-01-14T23:44:39.108837487Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:39.110512 containerd[1607]: time="2026-01-14T23:44:39.110446546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:39.110617 containerd[1607]: time="2026-01-14T23:44:39.110459027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:44:39.111064 kubelet[2831]: E0114 23:44:39.110808 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:44:39.111064 kubelet[2831]: E0114 23:44:39.110862 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:44:39.111163 kubelet[2831]: E0114 23:44:39.110982 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dwn68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64f4bfdcf7-qqvhh_calico-system(462591b6-7c04-4bb8-91df-676e6e4e63fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:39.112699 kubelet[2831]: E0114 23:44:39.112655 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64f4bfdcf7-qqvhh" podUID="462591b6-7c04-4bb8-91df-676e6e4e63fa" Jan 14 23:44:39.133000 audit: BPF prog-id=203 op=LOAD Jan 14 23:44:39.133000 audit[4157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffee25aa78 a2=40 a3=ffffee25aaa8 items=0 ppid=3949 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.133000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.133000 audit: BPF prog-id=203 op=UNLOAD Jan 14 23:44:39.133000 audit[4157]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffee25aaa8 items=0 ppid=3949 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.133000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.143000 audit: BPF prog-id=204 op=LOAD Jan 14 23:44:39.143000 audit[4157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffee25aa88 a2=94 a3=4 items=0 ppid=3949 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.143000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.143000 audit: BPF prog-id=204 op=UNLOAD Jan 14 23:44:39.143000 audit[4157]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=3949 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.143000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.144000 audit: BPF prog-id=205 op=LOAD Jan 14 23:44:39.144000 audit[4157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffee25a8c8 a2=94 a3=5 items=0 ppid=3949 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.144000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.144000 audit: BPF prog-id=205 op=UNLOAD Jan 14 23:44:39.144000 audit[4157]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 
items=0 ppid=3949 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.144000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.144000 audit: BPF prog-id=206 op=LOAD Jan 14 23:44:39.144000 audit[4157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffee25aaf8 a2=94 a3=6 items=0 ppid=3949 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.144000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.144000 audit: BPF prog-id=206 op=UNLOAD Jan 14 23:44:39.144000 audit[4157]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=3949 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.144000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.145000 audit: BPF prog-id=207 op=LOAD Jan 14 23:44:39.145000 audit[4157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffee25a2c8 a2=94 a3=83 items=0 ppid=3949 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.145000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.145000 audit: BPF prog-id=208 op=LOAD Jan 14 23:44:39.145000 audit[4157]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffee25a088 a2=94 a3=2 items=0 ppid=3949 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.145000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.145000 audit: BPF prog-id=208 op=UNLOAD Jan 14 23:44:39.145000 audit[4157]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=3949 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.145000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.146000 audit: BPF prog-id=207 op=UNLOAD Jan 
14 23:44:39.146000 audit[4157]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=36a0620 a3=3693b00 items=0 ppid=3949 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.146000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:44:39.157000 audit: BPF prog-id=199 op=UNLOAD Jan 14 23:44:39.157000 audit[3949]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=40008307c0 a2=0 a3=0 items=0 ppid=3939 pid=3949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.157000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 23:44:39.216000 audit[4181]: NETFILTER_CFG table=mangle:121 family=2 entries=16 op=nft_register_chain pid=4181 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:39.216000 audit[4181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffd7413320 a2=0 a3=ffffb8508fa8 items=0 ppid=3949 pid=4181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.216000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:39.222000 audit[4180]: NETFILTER_CFG table=nat:122 family=2 entries=15 op=nft_register_chain pid=4180 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:39.222000 audit[4180]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffe7e0e5c0 a2=0 a3=ffffbb36cfa8 items=0 ppid=3949 pid=4180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.222000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:39.231000 audit[4179]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4179 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:39.231000 audit[4179]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffd925aab0 a2=0 a3=ffffbc858fa8 items=0 ppid=3949 pid=4179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.231000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:39.234348 systemd-networkd[1498]: calid0745118252: Gained IPv6LL Jan 14 23:44:39.239000 audit[4185]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4185 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:39.239000 audit[4185]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffdaa03bf0 a2=0 a3=ffff93801fa8 items=0 ppid=3949 pid=4185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.239000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:39.513992 kubelet[2831]: E0114 23:44:39.513916 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64f4bfdcf7-qqvhh" podUID="462591b6-7c04-4bb8-91df-676e6e4e63fa" Jan 14 23:44:39.612000 audit[4194]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4194 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:39.612000 audit[4194]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcd2cad90 a2=0 a3=1 items=0 ppid=2980 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.612000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:39.617000 audit[4194]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4194 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:39.617000 audit[4194]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcd2cad90 a2=0 a3=1 items=0 ppid=2980 pid=4194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:39.617000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:40.258661 systemd-networkd[1498]: vxlan.calico: Gained IPv6LL Jan 14 23:44:42.302117 containerd[1607]: time="2026-01-14T23:44:42.301875904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kh6gk,Uid:cced8c28-8577-4bc8-b036-c07227b38f48,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:42.436051 systemd-networkd[1498]: cali8d8d1b76292: Link UP Jan 14 23:44:42.437870 systemd-networkd[1498]: cali8d8d1b76292: Gained carrier Jan 14 23:44:42.455792 containerd[1607]: 2026-01-14 23:44:42.348 [INFO][4198] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--abf6d467b1-k8s-csi--node--driver--kh6gk-eth0 csi-node-driver- calico-system 
cced8c28-8577-4bc8-b036-c07227b38f48 712 0 2026-01-14 23:44:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515-1-0-n-abf6d467b1 csi-node-driver-kh6gk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8d8d1b76292 [] [] }} ContainerID="04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" Namespace="calico-system" Pod="csi-node-driver-kh6gk" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-csi--node--driver--kh6gk-" Jan 14 23:44:42.455792 containerd[1607]: 2026-01-14 23:44:42.348 [INFO][4198] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" Namespace="calico-system" Pod="csi-node-driver-kh6gk" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-csi--node--driver--kh6gk-eth0" Jan 14 23:44:42.455792 containerd[1607]: 2026-01-14 23:44:42.378 [INFO][4209] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" HandleID="k8s-pod-network.04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" Workload="ci--4515--1--0--n--abf6d467b1-k8s-csi--node--driver--kh6gk-eth0" Jan 14 23:44:42.456017 containerd[1607]: 2026-01-14 23:44:42.379 [INFO][4209] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" HandleID="k8s-pod-network.04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" Workload="ci--4515--1--0--n--abf6d467b1-k8s-csi--node--driver--kh6gk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b1a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-n-abf6d467b1", "pod":"csi-node-driver-kh6gk", "timestamp":"2026-01-14 23:44:42.378804406 +0000 UTC"}, Hostname:"ci-4515-1-0-n-abf6d467b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:44:42.456017 containerd[1607]: 2026-01-14 23:44:42.379 [INFO][4209] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:44:42.456017 containerd[1607]: 2026-01-14 23:44:42.379 [INFO][4209] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:44:42.456017 containerd[1607]: 2026-01-14 23:44:42.379 [INFO][4209] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-abf6d467b1' Jan 14 23:44:42.456017 containerd[1607]: 2026-01-14 23:44:42.393 [INFO][4209] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:42.456017 containerd[1607]: 2026-01-14 23:44:42.399 [INFO][4209] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:42.456017 containerd[1607]: 2026-01-14 23:44:42.404 [INFO][4209] ipam/ipam.go 511: Trying affinity for 192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:42.456017 containerd[1607]: 2026-01-14 23:44:42.407 [INFO][4209] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:42.456017 containerd[1607]: 2026-01-14 23:44:42.409 [INFO][4209] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:42.456888 containerd[1607]: 2026-01-14 23:44:42.409 [INFO][4209] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:42.456888 containerd[1607]: 2026-01-14 23:44:42.411 [INFO][4209] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f Jan 14 23:44:42.456888 containerd[1607]: 2026-01-14 23:44:42.416 [INFO][4209] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:42.456888 containerd[1607]: 2026-01-14 23:44:42.427 [INFO][4209] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.130/26] block=192.168.89.128/26 handle="k8s-pod-network.04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:42.456888 containerd[1607]: 2026-01-14 23:44:42.427 [INFO][4209] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.130/26] handle="k8s-pod-network.04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:42.456888 containerd[1607]: 2026-01-14 23:44:42.427 [INFO][4209] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
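The IPAM trace above shows Calico taking the host-wide IPAM lock, confirming this node's affinity for the block 192.168.89.128/26, and claiming 192.168.89.130 for csi-node-driver-kh6gk (the later sandboxes get .131 and .132 from the same block). As a purely conceptual sketch of that "next free address in an affine block" step, and not Calico's actual implementation, the selection can be illustrated in Python:

    # Illustrative only: hand out the next unallocated host address from an
    # IPAM block, mirroring the 192.168.89.128/26 assignments in this log.
    # Real Calico IPAM also tracks handles, reservations, and writes the
    # updated block back to the datastore with a compare-and-swap.
    import ipaddress

    def next_free(block, allocated):
        for host in ipaddress.ip_network(block).hosts():  # skips .128 and .191
            ip = str(host)
            if ip not in allocated:
                return ip
        raise RuntimeError(f"block {block} is exhausted")

    # Assumption: .129 was handed out before this excerpt; the next three
    # claims then come out as .130, .131, .132, matching the log entries.
    allocated = {"192.168.89.129"}
    for _ in range(3):
        ip = next_free("192.168.89.128/26", allocated)
        allocated.add(ip)
        print(ip)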
Jan 14 23:44:42.456888 containerd[1607]: 2026-01-14 23:44:42.427 [INFO][4209] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.130/26] IPv6=[] ContainerID="04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" HandleID="k8s-pod-network.04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" Workload="ci--4515--1--0--n--abf6d467b1-k8s-csi--node--driver--kh6gk-eth0" Jan 14 23:44:42.457314 containerd[1607]: 2026-01-14 23:44:42.430 [INFO][4198] cni-plugin/k8s.go 418: Populated endpoint ContainerID="04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" Namespace="calico-system" Pod="csi-node-driver-kh6gk" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-csi--node--driver--kh6gk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--abf6d467b1-k8s-csi--node--driver--kh6gk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cced8c28-8577-4bc8-b036-c07227b38f48", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-abf6d467b1", ContainerID:"", Pod:"csi-node-driver-kh6gk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8d8d1b76292", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:42.457381 containerd[1607]: 2026-01-14 23:44:42.430 [INFO][4198] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.130/32] ContainerID="04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" Namespace="calico-system" Pod="csi-node-driver-kh6gk" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-csi--node--driver--kh6gk-eth0" Jan 14 23:44:42.457381 containerd[1607]: 2026-01-14 23:44:42.431 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8d8d1b76292 ContainerID="04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" Namespace="calico-system" Pod="csi-node-driver-kh6gk" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-csi--node--driver--kh6gk-eth0" Jan 14 23:44:42.457381 containerd[1607]: 2026-01-14 23:44:42.438 [INFO][4198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" Namespace="calico-system" Pod="csi-node-driver-kh6gk" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-csi--node--driver--kh6gk-eth0" Jan 14 23:44:42.457847 containerd[1607]: 2026-01-14 23:44:42.439 [INFO][4198] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" Namespace="calico-system" Pod="csi-node-driver-kh6gk" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-csi--node--driver--kh6gk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--abf6d467b1-k8s-csi--node--driver--kh6gk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cced8c28-8577-4bc8-b036-c07227b38f48", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-abf6d467b1", ContainerID:"04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f", Pod:"csi-node-driver-kh6gk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.89.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8d8d1b76292", MAC:"aa:11:83:50:47:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:42.458000 containerd[1607]: 2026-01-14 23:44:42.451 [INFO][4198] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" Namespace="calico-system" Pod="csi-node-driver-kh6gk" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-csi--node--driver--kh6gk-eth0" Jan 14 23:44:42.480000 audit[4226]: NETFILTER_CFG table=filter:127 family=2 entries=36 op=nft_register_chain pid=4226 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:42.482785 kernel: kauditd_printk_skb: 231 callbacks suppressed Jan 14 23:44:42.482865 kernel: audit: type=1325 audit(1768434282.480:652): table=filter:127 family=2 entries=36 op=nft_register_chain pid=4226 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:42.480000 audit[4226]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=fffff65a32d0 a2=0 a3=ffffa61ddfa8 items=0 ppid=3949 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.485815 kernel: audit: type=1300 audit(1768434282.480:652): arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=fffff65a32d0 a2=0 a3=ffffa61ddfa8 items=0 ppid=3949 pid=4226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.480000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:42.489250 kernel: audit: type=1327 audit(1768434282.480:652): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:42.493339 containerd[1607]: time="2026-01-14T23:44:42.493202782Z" level=info msg="connecting to shim 04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f" address="unix:///run/containerd/s/c3f7d3d790f31086d437d1d88727728b64edc0bc3a9a45003272a14d6368aa23" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:42.523030 systemd[1]: Started cri-containerd-04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f.scope - libcontainer container 04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f. Jan 14 23:44:42.543000 audit: BPF prog-id=209 op=LOAD Jan 14 23:44:42.544000 audit: BPF prog-id=210 op=LOAD Jan 14 23:44:42.546301 kernel: audit: type=1334 audit(1768434282.543:653): prog-id=209 op=LOAD Jan 14 23:44:42.546354 kernel: audit: type=1334 audit(1768434282.544:654): prog-id=210 op=LOAD Jan 14 23:44:42.544000 audit[4247]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4236 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.544000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034643037323638353932653039336236326561386337616434343063 Jan 14 23:44:42.551734 kernel: audit: type=1300 audit(1768434282.544:654): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4236 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.551913 kernel: audit: type=1327 audit(1768434282.544:654): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034643037323638353932653039336236326561386337616434343063 Jan 14 23:44:42.545000 audit: BPF prog-id=210 op=UNLOAD Jan 14 23:44:42.545000 audit[4247]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4236 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.555628 kernel: audit: type=1334 audit(1768434282.545:655): prog-id=210 op=UNLOAD Jan 14 23:44:42.555746 kernel: audit: type=1300 audit(1768434282.545:655): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4236 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.545000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034643037323638353932653039336236326561386337616434343063 Jan 14 23:44:42.557907 kernel: audit: type=1327 audit(1768434282.545:655): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034643037323638353932653039336236326561386337616434343063 Jan 14 23:44:42.545000 audit: BPF prog-id=211 op=LOAD Jan 14 23:44:42.545000 audit[4247]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4236 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034643037323638353932653039336236326561386337616434343063 Jan 14 23:44:42.548000 audit: BPF prog-id=212 op=LOAD Jan 14 23:44:42.548000 audit[4247]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4236 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034643037323638353932653039336236326561386337616434343063 Jan 14 23:44:42.548000 audit: BPF prog-id=212 op=UNLOAD Jan 14 23:44:42.548000 audit[4247]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4236 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034643037323638353932653039336236326561386337616434343063 Jan 14 23:44:42.548000 audit: BPF prog-id=211 op=UNLOAD Jan 14 23:44:42.548000 audit[4247]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4236 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034643037323638353932653039336236326561386337616434343063 Jan 14 23:44:42.548000 audit: BPF prog-id=213 op=LOAD Jan 14 23:44:42.548000 audit[4247]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4236 pid=4247 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:42.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3034643037323638353932653039336236326561386337616434343063 Jan 14 23:44:42.575471 containerd[1607]: time="2026-01-14T23:44:42.575395513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kh6gk,Uid:cced8c28-8577-4bc8-b036-c07227b38f48,Namespace:calico-system,Attempt:0,} returns sandbox id \"04d07268592e093b62ea8c7ad440c7c04cdfef7115bdd3af7c7476ab79a72e6f\"" Jan 14 23:44:42.577845 containerd[1607]: time="2026-01-14T23:44:42.577764931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 23:44:42.916596 containerd[1607]: time="2026-01-14T23:44:42.915728552Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:42.917305 containerd[1607]: time="2026-01-14T23:44:42.917229159Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:44:42.917470 containerd[1607]: time="2026-01-14T23:44:42.917367048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:42.917798 kubelet[2831]: E0114 23:44:42.917748 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:44:42.918578 kubelet[2831]: E0114 23:44:42.917958 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:44:42.919705 kubelet[2831]: E0114 23:44:42.919647 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l75kt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kh6gk_calico-system(cced8c28-8577-4bc8-b036-c07227b38f48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:42.922126 containerd[1607]: time="2026-01-14T23:44:42.921640618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:44:43.301034 containerd[1607]: time="2026-01-14T23:44:43.300896804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866b97bccb-j7fj2,Uid:f86c2f11-4390-42f5-9590-40a5b08260db,Namespace:calico-apiserver,Attempt:0,}" Jan 14 23:44:43.301430 containerd[1607]: time="2026-01-14T23:44:43.301203662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-68w75,Uid:7928b74a-e68d-4722-9a18-4a12587c5970,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:43.422018 containerd[1607]: time="2026-01-14T23:44:43.421977253Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:43.424278 containerd[1607]: time="2026-01-14T23:44:43.424129137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:44:43.424560 containerd[1607]: time="2026-01-14T23:44:43.424229183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:43.425218 kubelet[2831]: E0114 23:44:43.425166 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:44:43.425295 kubelet[2831]: E0114 23:44:43.425231 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:44:43.425665 kubelet[2831]: E0114 23:44:43.425388 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l75kt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kh6gk_calico-system(cced8c28-8577-4bc8-b036-c07227b38f48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:43.426938 kubelet[2831]: E0114 23:44:43.426823 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:44:43.472797 systemd-networkd[1498]: cali8ee83d07b31: Link UP Jan 14 23:44:43.472991 systemd-networkd[1498]: cali8ee83d07b31: Gained carrier Jan 14 23:44:43.496365 containerd[1607]: 2026-01-14 23:44:43.371 [INFO][4274] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--abf6d467b1-k8s-goldmane--666569f655--68w75-eth0 goldmane-666569f655- calico-system 7928b74a-e68d-4722-9a18-4a12587c5970 804 0 2026-01-14 23:44:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515-1-0-n-abf6d467b1 goldmane-666569f655-68w75 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8ee83d07b31 [] [] }} ContainerID="4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" Namespace="calico-system" Pod="goldmane-666569f655-68w75" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-goldmane--666569f655--68w75-" Jan 14 23:44:43.496365 containerd[1607]: 2026-01-14 23:44:43.371 [INFO][4274] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" Namespace="calico-system" Pod="goldmane-666569f655-68w75" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-goldmane--666569f655--68w75-eth0" Jan 14 23:44:43.496365 containerd[1607]: 2026-01-14 23:44:43.415 [INFO][4298] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" HandleID="k8s-pod-network.4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" Workload="ci--4515--1--0--n--abf6d467b1-k8s-goldmane--666569f655--68w75-eth0" Jan 14 23:44:43.497238 containerd[1607]: 2026-01-14 23:44:43.415 [INFO][4298] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" HandleID="k8s-pod-network.4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" Workload="ci--4515--1--0--n--abf6d467b1-k8s-goldmane--666569f655--68w75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b7f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-n-abf6d467b1", "pod":"goldmane-666569f655-68w75", "timestamp":"2026-01-14 23:44:43.415097377 +0000 UTC"}, Hostname:"ci-4515-1-0-n-abf6d467b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:44:43.497238 containerd[1607]: 2026-01-14 23:44:43.415 [INFO][4298] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:44:43.497238 containerd[1607]: 2026-01-14 23:44:43.415 [INFO][4298] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:44:43.497238 containerd[1607]: 2026-01-14 23:44:43.415 [INFO][4298] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-abf6d467b1' Jan 14 23:44:43.497238 containerd[1607]: 2026-01-14 23:44:43.429 [INFO][4298] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:43.497238 containerd[1607]: 2026-01-14 23:44:43.434 [INFO][4298] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:43.497238 containerd[1607]: 2026-01-14 23:44:43.440 [INFO][4298] ipam/ipam.go 511: Trying affinity for 192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:43.497238 containerd[1607]: 2026-01-14 23:44:43.442 [INFO][4298] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:43.497238 containerd[1607]: 2026-01-14 23:44:43.445 [INFO][4298] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:43.497775 containerd[1607]: 2026-01-14 23:44:43.445 [INFO][4298] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:43.497775 containerd[1607]: 2026-01-14 23:44:43.447 [INFO][4298] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de Jan 14 23:44:43.497775 containerd[1607]: 2026-01-14 23:44:43.452 [INFO][4298] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:43.497775 containerd[1607]: 2026-01-14 23:44:43.459 [INFO][4298] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.131/26] block=192.168.89.128/26 handle="k8s-pod-network.4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:43.497775 containerd[1607]: 2026-01-14 23:44:43.459 [INFO][4298] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.131/26] handle="k8s-pod-network.4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:43.497775 containerd[1607]: 2026-01-14 23:44:43.460 [INFO][4298] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
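The WorkloadEndpoint names in these CNI traces (for example ci--4515--1--0--n--abf6d467b1-k8s-goldmane--666569f655--68w75-eth0) are not corrupted: dashes inside the node and pod components are escaped as "--" before the pieces are joined as <node>-k8s-<pod>-<iface>. A tiny helper that reproduces the names seen in this log (the pattern is inferred from the log itself, not taken from Calico's source):

    # Rebuild the WorkloadEndpoint names used in the traces above:
    # '-' inside each component doubles, then components join as <node>-k8s-<pod>-<iface>.
    def wep_name(node, pod, iface="eth0"):
        esc = lambda s: s.replace("-", "--")
        return f"{esc(node)}-k8s-{esc(pod)}-{esc(iface)}"

    print(wep_name("ci-4515-1-0-n-abf6d467b1", "goldmane-666569f655-68w75"))
    # ci--4515--1--0--n--abf6d467b1-k8s-goldmane--666569f655--68w75-eth0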
Jan 14 23:44:43.497775 containerd[1607]: 2026-01-14 23:44:43.461 [INFO][4298] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.131/26] IPv6=[] ContainerID="4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" HandleID="k8s-pod-network.4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" Workload="ci--4515--1--0--n--abf6d467b1-k8s-goldmane--666569f655--68w75-eth0" Jan 14 23:44:43.497914 containerd[1607]: 2026-01-14 23:44:43.464 [INFO][4274] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" Namespace="calico-system" Pod="goldmane-666569f655-68w75" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-goldmane--666569f655--68w75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--abf6d467b1-k8s-goldmane--666569f655--68w75-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7928b74a-e68d-4722-9a18-4a12587c5970", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-abf6d467b1", ContainerID:"", Pod:"goldmane-666569f655-68w75", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.89.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8ee83d07b31", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:43.498693 containerd[1607]: 2026-01-14 23:44:43.465 [INFO][4274] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.131/32] ContainerID="4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" Namespace="calico-system" Pod="goldmane-666569f655-68w75" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-goldmane--666569f655--68w75-eth0" Jan 14 23:44:43.498693 containerd[1607]: 2026-01-14 23:44:43.465 [INFO][4274] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ee83d07b31 ContainerID="4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" Namespace="calico-system" Pod="goldmane-666569f655-68w75" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-goldmane--666569f655--68w75-eth0" Jan 14 23:44:43.498693 containerd[1607]: 2026-01-14 23:44:43.476 [INFO][4274] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" Namespace="calico-system" Pod="goldmane-666569f655-68w75" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-goldmane--666569f655--68w75-eth0" Jan 14 23:44:43.498774 containerd[1607]: 2026-01-14 23:44:43.477 [INFO][4274] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" 
Namespace="calico-system" Pod="goldmane-666569f655-68w75" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-goldmane--666569f655--68w75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--abf6d467b1-k8s-goldmane--666569f655--68w75-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7928b74a-e68d-4722-9a18-4a12587c5970", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-abf6d467b1", ContainerID:"4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de", Pod:"goldmane-666569f655-68w75", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.89.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8ee83d07b31", MAC:"ca:da:18:87:1c:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:43.498847 containerd[1607]: 2026-01-14 23:44:43.491 [INFO][4274] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" Namespace="calico-system" Pod="goldmane-666569f655-68w75" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-goldmane--666569f655--68w75-eth0" Jan 14 23:44:43.518000 audit[4321]: NETFILTER_CFG table=filter:128 family=2 entries=54 op=nft_register_chain pid=4321 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:43.518000 audit[4321]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29220 a0=3 a1=fffff4f9d7f0 a2=0 a3=ffffae661fa8 items=0 ppid=3949 pid=4321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.518000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:43.528874 kubelet[2831]: E0114 23:44:43.528753 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:44:43.529755 containerd[1607]: time="2026-01-14T23:44:43.529555805Z" level=info msg="connecting to shim 4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de" address="unix:///run/containerd/s/e8fa314aeeaef25b1ee44d0781161a678de064a8a96c887b60a0d8bbe3c5fc31" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:43.594967 systemd[1]: Started cri-containerd-4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de.scope - libcontainer container 4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de. Jan 14 23:44:43.616226 systemd-networkd[1498]: calib42603fd276: Link UP Jan 14 23:44:43.616447 systemd-networkd[1498]: calib42603fd276: Gained carrier Jan 14 23:44:43.624000 audit: BPF prog-id=214 op=LOAD Jan 14 23:44:43.625000 audit: BPF prog-id=215 op=LOAD Jan 14 23:44:43.625000 audit[4342]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000220180 a2=98 a3=0 items=0 ppid=4329 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.625000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432343261663730366666636166663530343062653662626362623364 Jan 14 23:44:43.626000 audit: BPF prog-id=215 op=UNLOAD Jan 14 23:44:43.626000 audit[4342]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4329 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.626000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432343261663730366666636166663530343062653662626362623364 Jan 14 23:44:43.627000 audit: BPF prog-id=216 op=LOAD Jan 14 23:44:43.627000 audit[4342]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40002203e8 a2=98 a3=0 items=0 ppid=4329 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.627000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432343261663730366666636166663530343062653662626362623364 Jan 14 23:44:43.628000 audit: BPF prog-id=217 op=LOAD Jan 14 23:44:43.628000 audit[4342]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000220168 a2=98 a3=0 items=0 ppid=4329 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.628000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432343261663730366666636166663530343062653662626362623364 Jan 14 23:44:43.629000 audit: BPF prog-id=217 op=UNLOAD Jan 14 23:44:43.629000 audit[4342]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4329 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432343261663730366666636166663530343062653662626362623364 Jan 14 23:44:43.629000 audit: BPF prog-id=216 op=UNLOAD Jan 14 23:44:43.629000 audit[4342]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4329 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432343261663730366666636166663530343062653662626362623364 Jan 14 23:44:43.629000 audit: BPF prog-id=218 op=LOAD Jan 14 23:44:43.629000 audit[4342]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000220648 a2=98 a3=0 items=0 ppid=4329 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.629000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432343261663730366666636166663530343062653662626362623364 Jan 14 23:44:43.644280 containerd[1607]: 2026-01-14 23:44:43.379 [INFO][4281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--j7fj2-eth0 calico-apiserver-866b97bccb- calico-apiserver f86c2f11-4390-42f5-9590-40a5b08260db 809 0 2026-01-14 23:44:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:866b97bccb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515-1-0-n-abf6d467b1 calico-apiserver-866b97bccb-j7fj2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib42603fd276 [] [] }} ContainerID="af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" Namespace="calico-apiserver" Pod="calico-apiserver-866b97bccb-j7fj2" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--j7fj2-" Jan 14 23:44:43.644280 containerd[1607]: 2026-01-14 23:44:43.379 [INFO][4281] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" Namespace="calico-apiserver" Pod="calico-apiserver-866b97bccb-j7fj2" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--j7fj2-eth0" Jan 14 23:44:43.644280 containerd[1607]: 2026-01-14 23:44:43.422 [INFO][4303] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" HandleID="k8s-pod-network.af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" Workload="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--j7fj2-eth0" Jan 14 23:44:43.644506 containerd[1607]: 2026-01-14 23:44:43.423 [INFO][4303] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" HandleID="k8s-pod-network.af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" Workload="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--j7fj2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3dc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515-1-0-n-abf6d467b1", "pod":"calico-apiserver-866b97bccb-j7fj2", "timestamp":"2026-01-14 23:44:43.42296931 +0000 UTC"}, Hostname:"ci-4515-1-0-n-abf6d467b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:44:43.644506 containerd[1607]: 2026-01-14 23:44:43.423 [INFO][4303] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:44:43.644506 containerd[1607]: 2026-01-14 23:44:43.460 [INFO][4303] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:44:43.644506 containerd[1607]: 2026-01-14 23:44:43.460 [INFO][4303] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-abf6d467b1' Jan 14 23:44:43.644506 containerd[1607]: 2026-01-14 23:44:43.535 [INFO][4303] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:43.644506 containerd[1607]: 2026-01-14 23:44:43.559 [INFO][4303] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:43.644506 containerd[1607]: 2026-01-14 23:44:43.572 [INFO][4303] ipam/ipam.go 511: Trying affinity for 192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:43.644506 containerd[1607]: 2026-01-14 23:44:43.576 [INFO][4303] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:43.644506 containerd[1607]: 2026-01-14 23:44:43.581 [INFO][4303] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:43.645025 containerd[1607]: 2026-01-14 23:44:43.581 [INFO][4303] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:43.645025 containerd[1607]: 2026-01-14 23:44:43.583 [INFO][4303] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99 Jan 14 23:44:43.645025 containerd[1607]: 2026-01-14 23:44:43.591 [INFO][4303] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:43.645025 containerd[1607]: 2026-01-14 23:44:43.607 [INFO][4303] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.132/26] block=192.168.89.128/26 handle="k8s-pod-network.af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:43.645025 containerd[1607]: 2026-01-14 23:44:43.608 [INFO][4303] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.132/26] handle="k8s-pod-network.af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:43.645025 containerd[1607]: 2026-01-14 23:44:43.608 [INFO][4303] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
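The audit PROCTITLE records scattered through this section encode the triggering command line as a hex string, with NUL bytes separating the arguments. Decoding one needs nothing audit-specific; a couple of lines of Python recover the iptables and runc invocations:

    # Decode an audit PROCTITLE payload: hex -> bytes, NUL separators -> spaces.
    def decode_proctitle(hexstr):
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode()

    # One of the values logged repeatedly above:
    p = ("69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
         "002D2D766572626F7365002D2D77616974003130002D2D776169742D696E"
         "74657276616C003530303030")
    print(decode_proctitle(p))
    # iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000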
Jan 14 23:44:43.645025 containerd[1607]: 2026-01-14 23:44:43.608 [INFO][4303] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.132/26] IPv6=[] ContainerID="af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" HandleID="k8s-pod-network.af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" Workload="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--j7fj2-eth0" Jan 14 23:44:43.645157 containerd[1607]: 2026-01-14 23:44:43.611 [INFO][4281] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" Namespace="calico-apiserver" Pod="calico-apiserver-866b97bccb-j7fj2" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--j7fj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--j7fj2-eth0", GenerateName:"calico-apiserver-866b97bccb-", Namespace:"calico-apiserver", SelfLink:"", UID:"f86c2f11-4390-42f5-9590-40a5b08260db", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866b97bccb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-abf6d467b1", ContainerID:"", Pod:"calico-apiserver-866b97bccb-j7fj2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib42603fd276", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:43.646248 containerd[1607]: 2026-01-14 23:44:43.611 [INFO][4281] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.132/32] ContainerID="af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" Namespace="calico-apiserver" Pod="calico-apiserver-866b97bccb-j7fj2" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--j7fj2-eth0" Jan 14 23:44:43.646248 containerd[1607]: 2026-01-14 23:44:43.611 [INFO][4281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib42603fd276 ContainerID="af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" Namespace="calico-apiserver" Pod="calico-apiserver-866b97bccb-j7fj2" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--j7fj2-eth0" Jan 14 23:44:43.646248 containerd[1607]: 2026-01-14 23:44:43.616 [INFO][4281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" Namespace="calico-apiserver" Pod="calico-apiserver-866b97bccb-j7fj2" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--j7fj2-eth0" Jan 14 23:44:43.646326 containerd[1607]: 2026-01-14 
23:44:43.619 [INFO][4281] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" Namespace="calico-apiserver" Pod="calico-apiserver-866b97bccb-j7fj2" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--j7fj2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--j7fj2-eth0", GenerateName:"calico-apiserver-866b97bccb-", Namespace:"calico-apiserver", SelfLink:"", UID:"f86c2f11-4390-42f5-9590-40a5b08260db", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866b97bccb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-abf6d467b1", ContainerID:"af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99", Pod:"calico-apiserver-866b97bccb-j7fj2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib42603fd276", MAC:"0a:9a:05:ba:36:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:43.646426 containerd[1607]: 2026-01-14 23:44:43.641 [INFO][4281] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" Namespace="calico-apiserver" Pod="calico-apiserver-866b97bccb-j7fj2" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--j7fj2-eth0" Jan 14 23:44:43.686088 containerd[1607]: time="2026-01-14T23:44:43.686039612Z" level=info msg="connecting to shim af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99" address="unix:///run/containerd/s/d0f1ebeb3280807add19368a7d536c66f154325dc5973f193b6fc328cbeb21e9" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:43.685000 audit[4373]: NETFILTER_CFG table=filter:129 family=2 entries=54 op=nft_register_chain pid=4373 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:43.685000 audit[4373]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29380 a0=3 a1=ffffe6a518c0 a2=0 a3=ffff97ec9fa8 items=0 ppid=3949 pid=4373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.685000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:43.688365 containerd[1607]: time="2026-01-14T23:44:43.688311623Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-68w75,Uid:7928b74a-e68d-4722-9a18-4a12587c5970,Namespace:calico-system,Attempt:0,} returns sandbox id \"4242af706ffcaff5040be6bbcbb3d925129431d4de2df035a35f36f4acb654de\"" Jan 14 23:44:43.691767 containerd[1607]: time="2026-01-14T23:44:43.691645575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 23:44:43.717054 systemd[1]: Started cri-containerd-af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99.scope - libcontainer container af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99. Jan 14 23:44:43.733000 audit: BPF prog-id=219 op=LOAD Jan 14 23:44:43.733000 audit: BPF prog-id=220 op=LOAD Jan 14 23:44:43.733000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4383 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.733000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166336462376336366662376239626637653537633333383430643334 Jan 14 23:44:43.734000 audit: BPF prog-id=220 op=UNLOAD Jan 14 23:44:43.734000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4383 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166336462376336366662376239626637653537633333383430643334 Jan 14 23:44:43.734000 audit: BPF prog-id=221 op=LOAD Jan 14 23:44:43.734000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4383 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166336462376336366662376239626637653537633333383430643334 Jan 14 23:44:43.734000 audit: BPF prog-id=222 op=LOAD Jan 14 23:44:43.734000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4383 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166336462376336366662376239626637653537633333383430643334 Jan 14 23:44:43.734000 audit: BPF prog-id=222 op=UNLOAD Jan 14 23:44:43.734000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4383 
pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166336462376336366662376239626637653537633333383430643334 Jan 14 23:44:43.735000 audit: BPF prog-id=221 op=UNLOAD Jan 14 23:44:43.735000 audit[4395]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4383 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166336462376336366662376239626637653537633333383430643334 Jan 14 23:44:43.735000 audit: BPF prog-id=223 op=LOAD Jan 14 23:44:43.735000 audit[4395]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4383 pid=4395 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:43.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166336462376336366662376239626637653537633333383430643334 Jan 14 23:44:43.766069 containerd[1607]: time="2026-01-14T23:44:43.766019096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866b97bccb-j7fj2,Uid:f86c2f11-4390-42f5-9590-40a5b08260db,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"af3db7c66fb7b9bf7e57c33840d34537932dfe3e0dd390e4c41eb562e5294a99\"" Jan 14 23:44:43.777885 systemd-networkd[1498]: cali8d8d1b76292: Gained IPv6LL Jan 14 23:44:44.025923 containerd[1607]: time="2026-01-14T23:44:44.025731022Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:44.027827 containerd[1607]: time="2026-01-14T23:44:44.027745496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 23:44:44.027951 containerd[1607]: time="2026-01-14T23:44:44.027869463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:44.028192 kubelet[2831]: E0114 23:44:44.028152 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:44:44.028786 kubelet[2831]: E0114 23:44:44.028536 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:44:44.029111 containerd[1607]: time="2026-01-14T23:44:44.029008407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:44:44.030268 kubelet[2831]: E0114 23:44:44.030196 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbqjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-68w75_calico-system(7928b74a-e68d-4722-9a18-4a12587c5970): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:44.031600 kubelet[2831]: E0114 23:44:44.031534 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-68w75" podUID="7928b74a-e68d-4722-9a18-4a12587c5970" Jan 14 23:44:44.301276 containerd[1607]: time="2026-01-14T23:44:44.301193426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866b97bccb-lgshp,Uid:a676c5b0-5016-4d83-8418-6221cf68e214,Namespace:calico-apiserver,Attempt:0,}" Jan 14 23:44:44.361880 containerd[1607]: time="2026-01-14T23:44:44.361822380Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:44.363827 containerd[1607]: time="2026-01-14T23:44:44.363347947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:44:44.364068 containerd[1607]: time="2026-01-14T23:44:44.363398910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:44.364451 kubelet[2831]: E0114 23:44:44.364376 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:44:44.364451 kubelet[2831]: E0114 23:44:44.364425 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:44:44.366546 kubelet[2831]: E0114 23:44:44.364888 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx8dz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-866b97bccb-j7fj2_calico-apiserver(f86c2f11-4390-42f5-9590-40a5b08260db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:44.367909 kubelet[2831]: E0114 23:44:44.367876 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" podUID="f86c2f11-4390-42f5-9590-40a5b08260db" Jan 14 23:44:44.439229 systemd-networkd[1498]: cali56178c37720: Link UP Jan 14 23:44:44.440114 systemd-networkd[1498]: cali56178c37720: Gained carrier Jan 14 23:44:44.456885 containerd[1607]: 2026-01-14 23:44:44.354 [INFO][4426] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--lgshp-eth0 calico-apiserver-866b97bccb- calico-apiserver a676c5b0-5016-4d83-8418-6221cf68e214 810 0 2026-01-14 23:44:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:866b97bccb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515-1-0-n-abf6d467b1 calico-apiserver-866b97bccb-lgshp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali56178c37720 [] [] }} ContainerID="76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" Namespace="calico-apiserver" Pod="calico-apiserver-866b97bccb-lgshp" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--lgshp-" Jan 14 23:44:44.456885 containerd[1607]: 2026-01-14 23:44:44.354 [INFO][4426] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" Namespace="calico-apiserver" Pod="calico-apiserver-866b97bccb-lgshp" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--lgshp-eth0" Jan 14 23:44:44.456885 containerd[1607]: 2026-01-14 23:44:44.386 [INFO][4439] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" 
HandleID="k8s-pod-network.76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" Workload="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--lgshp-eth0" Jan 14 23:44:44.457905 containerd[1607]: 2026-01-14 23:44:44.386 [INFO][4439] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" HandleID="k8s-pod-network.76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" Workload="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--lgshp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515-1-0-n-abf6d467b1", "pod":"calico-apiserver-866b97bccb-lgshp", "timestamp":"2026-01-14 23:44:44.386198161 +0000 UTC"}, Hostname:"ci-4515-1-0-n-abf6d467b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:44:44.457905 containerd[1607]: 2026-01-14 23:44:44.386 [INFO][4439] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:44:44.457905 containerd[1607]: 2026-01-14 23:44:44.386 [INFO][4439] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 23:44:44.457905 containerd[1607]: 2026-01-14 23:44:44.386 [INFO][4439] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-abf6d467b1' Jan 14 23:44:44.457905 containerd[1607]: 2026-01-14 23:44:44.396 [INFO][4439] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:44.457905 containerd[1607]: 2026-01-14 23:44:44.401 [INFO][4439] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:44.457905 containerd[1607]: 2026-01-14 23:44:44.407 [INFO][4439] ipam/ipam.go 511: Trying affinity for 192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:44.457905 containerd[1607]: 2026-01-14 23:44:44.410 [INFO][4439] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:44.457905 containerd[1607]: 2026-01-14 23:44:44.413 [INFO][4439] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:44.458265 containerd[1607]: 2026-01-14 23:44:44.413 [INFO][4439] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:44.458265 containerd[1607]: 2026-01-14 23:44:44.415 [INFO][4439] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f Jan 14 23:44:44.458265 containerd[1607]: 2026-01-14 23:44:44.421 [INFO][4439] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:44.458265 containerd[1607]: 2026-01-14 23:44:44.431 [INFO][4439] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.133/26] block=192.168.89.128/26 handle="k8s-pod-network.76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:44.458265 
containerd[1607]: 2026-01-14 23:44:44.431 [INFO][4439] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.133/26] handle="k8s-pod-network.76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:44.458265 containerd[1607]: 2026-01-14 23:44:44.431 [INFO][4439] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 23:44:44.458265 containerd[1607]: 2026-01-14 23:44:44.432 [INFO][4439] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.133/26] IPv6=[] ContainerID="76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" HandleID="k8s-pod-network.76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" Workload="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--lgshp-eth0" Jan 14 23:44:44.458439 containerd[1607]: 2026-01-14 23:44:44.434 [INFO][4426] cni-plugin/k8s.go 418: Populated endpoint ContainerID="76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" Namespace="calico-apiserver" Pod="calico-apiserver-866b97bccb-lgshp" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--lgshp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--lgshp-eth0", GenerateName:"calico-apiserver-866b97bccb-", Namespace:"calico-apiserver", SelfLink:"", UID:"a676c5b0-5016-4d83-8418-6221cf68e214", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866b97bccb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-abf6d467b1", ContainerID:"", Pod:"calico-apiserver-866b97bccb-lgshp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56178c37720", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:44.458565 containerd[1607]: 2026-01-14 23:44:44.435 [INFO][4426] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.133/32] ContainerID="76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" Namespace="calico-apiserver" Pod="calico-apiserver-866b97bccb-lgshp" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--lgshp-eth0" Jan 14 23:44:44.458565 containerd[1607]: 2026-01-14 23:44:44.435 [INFO][4426] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali56178c37720 ContainerID="76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" Namespace="calico-apiserver" Pod="calico-apiserver-866b97bccb-lgshp" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--lgshp-eth0" Jan 14 23:44:44.458565 containerd[1607]: 2026-01-14 
23:44:44.440 [INFO][4426] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" Namespace="calico-apiserver" Pod="calico-apiserver-866b97bccb-lgshp" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--lgshp-eth0" Jan 14 23:44:44.458698 containerd[1607]: 2026-01-14 23:44:44.441 [INFO][4426] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" Namespace="calico-apiserver" Pod="calico-apiserver-866b97bccb-lgshp" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--lgshp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--lgshp-eth0", GenerateName:"calico-apiserver-866b97bccb-", Namespace:"calico-apiserver", SelfLink:"", UID:"a676c5b0-5016-4d83-8418-6221cf68e214", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866b97bccb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-abf6d467b1", ContainerID:"76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f", Pod:"calico-apiserver-866b97bccb-lgshp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.89.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali56178c37720", MAC:"56:a0:08:d1:b6:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:44.458796 containerd[1607]: 2026-01-14 23:44:44.453 [INFO][4426] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" Namespace="calico-apiserver" Pod="calico-apiserver-866b97bccb-lgshp" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--apiserver--866b97bccb--lgshp-eth0" Jan 14 23:44:44.480652 containerd[1607]: time="2026-01-14T23:44:44.480154444Z" level=info msg="connecting to shim 76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f" address="unix:///run/containerd/s/7b0caefc9f31456d21467800f579008697a014a9ebe01291aa5f2a2b781fcbf5" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:44.479000 audit[4452]: NETFILTER_CFG table=filter:130 family=2 entries=45 op=nft_register_chain pid=4452 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:44.479000 audit[4452]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24248 a0=3 a1=ffffe596c960 a2=0 a3=ffffb1e70fa8 items=0 ppid=3949 pid=4452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.479000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:44.515052 systemd[1]: Started cri-containerd-76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f.scope - libcontainer container 76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f. Jan 14 23:44:44.534000 audit: BPF prog-id=224 op=LOAD Jan 14 23:44:44.539000 audit: BPF prog-id=225 op=LOAD Jan 14 23:44:44.539000 audit[4473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=4461 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.539000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736643136333935303934323061613331343361343531653132663332 Jan 14 23:44:44.540000 audit: BPF prog-id=225 op=UNLOAD Jan 14 23:44:44.540000 audit[4473]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4461 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736643136333935303934323061613331343361343531653132663332 Jan 14 23:44:44.540000 audit: BPF prog-id=226 op=LOAD Jan 14 23:44:44.540000 audit[4473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=4461 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736643136333935303934323061613331343361343531653132663332 Jan 14 23:44:44.540000 audit: BPF prog-id=227 op=LOAD Jan 14 23:44:44.540000 audit[4473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=4461 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736643136333935303934323061613331343361343531653132663332 Jan 14 23:44:44.540000 audit: BPF prog-id=227 op=UNLOAD Jan 14 23:44:44.540000 audit[4473]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4461 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736643136333935303934323061613331343361343531653132663332 Jan 14 23:44:44.540000 audit: BPF prog-id=226 op=UNLOAD Jan 14 23:44:44.540000 audit[4473]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4461 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736643136333935303934323061613331343361343531653132663332 Jan 14 23:44:44.541466 kubelet[2831]: E0114 23:44:44.536402 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" podUID="f86c2f11-4390-42f5-9590-40a5b08260db" Jan 14 23:44:44.540000 audit: BPF prog-id=228 op=LOAD Jan 14 23:44:44.540000 audit[4473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=4461 pid=4473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3736643136333935303934323061613331343361343531653132663332 Jan 14 23:44:44.545875 kubelet[2831]: E0114 23:44:44.545838 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-68w75" podUID="7928b74a-e68d-4722-9a18-4a12587c5970" Jan 14 23:44:44.546694 kubelet[2831]: E0114 23:44:44.546663 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:44:44.598000 audit[4495]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=4495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:44.598000 audit[4495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcafe8c60 a2=0 a3=1 items=0 ppid=2980 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.598000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:44.602000 audit[4495]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=4495 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:44.602000 audit[4495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcafe8c60 a2=0 a3=1 items=0 ppid=2980 pid=4495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.602000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:44.622500 containerd[1607]: time="2026-01-14T23:44:44.622255413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866b97bccb-lgshp,Uid:a676c5b0-5016-4d83-8418-6221cf68e214,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"76d1639509420aa3143a451e12f323fda972b9674165124ffce67d99dfdcd80f\"" Jan 14 23:44:44.624379 containerd[1607]: time="2026-01-14T23:44:44.624129440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:44:44.628000 audit[4503]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=4503 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:44.628000 audit[4503]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffecd6b880 a2=0 a3=1 items=0 ppid=2980 pid=4503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.628000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:44.633000 audit[4503]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=4503 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:44.633000 audit[4503]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffecd6b880 a2=0 a3=1 items=0 ppid=2980 pid=4503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:44.633000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:44.966014 containerd[1607]: 
time="2026-01-14T23:44:44.964706333Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:44.967377 containerd[1607]: time="2026-01-14T23:44:44.967237316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:44:44.967377 containerd[1607]: time="2026-01-14T23:44:44.967273558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:44.967682 kubelet[2831]: E0114 23:44:44.967629 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:44:44.967779 kubelet[2831]: E0114 23:44:44.967762 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:44:44.967986 kubelet[2831]: E0114 23:44:44.967944 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qchrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-866b97bccb-lgshp_calico-apiserver(a676c5b0-5016-4d83-8418-6221cf68e214): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:44.969588 kubelet[2831]: E0114 23:44:44.969548 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" podUID="a676c5b0-5016-4d83-8418-6221cf68e214" Jan 14 23:44:45.301218 containerd[1607]: time="2026-01-14T23:44:45.300978565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8479c65bc7-v2wlc,Uid:5ba12bc6-2080-48f6-9bf7-54c301828a15,Namespace:calico-system,Attempt:0,}" Jan 14 23:44:45.301437 containerd[1607]: time="2026-01-14T23:44:45.301408949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x9dt4,Uid:f51854a3-fd90-4629-82ca-74bacf8c914d,Namespace:kube-system,Attempt:0,}" Jan 14 23:44:45.459189 systemd-networkd[1498]: calibe3f01674f0: Link UP Jan 14 23:44:45.459891 systemd-networkd[1498]: calibe3f01674f0: Gained carrier Jan 14 23:44:45.480744 containerd[1607]: 2026-01-14 23:44:45.359 [INFO][4504] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--abf6d467b1-k8s-calico--kube--controllers--8479c65bc7--v2wlc-eth0 calico-kube-controllers-8479c65bc7- calico-system 5ba12bc6-2080-48f6-9bf7-54c301828a15 808 0 2026-01-14 23:44:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8479c65bc7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515-1-0-n-abf6d467b1 calico-kube-controllers-8479c65bc7-v2wlc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibe3f01674f0 [] [] }} ContainerID="f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" Namespace="calico-system" Pod="calico-kube-controllers-8479c65bc7-v2wlc" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--kube--controllers--8479c65bc7--v2wlc-" Jan 14 23:44:45.480744 containerd[1607]: 2026-01-14 23:44:45.359 [INFO][4504] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" Namespace="calico-system" Pod="calico-kube-controllers-8479c65bc7-v2wlc" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--kube--controllers--8479c65bc7--v2wlc-eth0" Jan 14 23:44:45.480744 containerd[1607]: 2026-01-14 23:44:45.400 [INFO][4529] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" HandleID="k8s-pod-network.f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" Workload="ci--4515--1--0--n--abf6d467b1-k8s-calico--kube--controllers--8479c65bc7--v2wlc-eth0" Jan 14 23:44:45.481188 containerd[1607]: 2026-01-14 23:44:45.400 [INFO][4529] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" HandleID="k8s-pod-network.f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" Workload="ci--4515--1--0--n--abf6d467b1-k8s-calico--kube--controllers--8479c65bc7--v2wlc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-n-abf6d467b1", "pod":"calico-kube-controllers-8479c65bc7-v2wlc", "timestamp":"2026-01-14 23:44:45.39999709 +0000 UTC"}, Hostname:"ci-4515-1-0-n-abf6d467b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:44:45.481188 containerd[1607]: 2026-01-14 23:44:45.400 [INFO][4529] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:44:45.481188 containerd[1607]: 2026-01-14 23:44:45.400 [INFO][4529] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 23:44:45.481188 containerd[1607]: 2026-01-14 23:44:45.400 [INFO][4529] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-abf6d467b1' Jan 14 23:44:45.481188 containerd[1607]: 2026-01-14 23:44:45.412 [INFO][4529] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:45.481188 containerd[1607]: 2026-01-14 23:44:45.419 [INFO][4529] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:45.481188 containerd[1607]: 2026-01-14 23:44:45.425 [INFO][4529] ipam/ipam.go 511: Trying affinity for 192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:45.481188 containerd[1607]: 2026-01-14 23:44:45.427 [INFO][4529] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:45.481188 containerd[1607]: 2026-01-14 23:44:45.431 [INFO][4529] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:45.481386 containerd[1607]: 2026-01-14 23:44:45.431 [INFO][4529] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:45.481386 containerd[1607]: 2026-01-14 23:44:45.433 [INFO][4529] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723 Jan 14 23:44:45.481386 containerd[1607]: 2026-01-14 23:44:45.440 [INFO][4529] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:45.481386 containerd[1607]: 2026-01-14 23:44:45.449 [INFO][4529] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.134/26] block=192.168.89.128/26 handle="k8s-pod-network.f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:45.481386 containerd[1607]: 2026-01-14 23:44:45.449 [INFO][4529] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.134/26] handle="k8s-pod-network.f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:45.481386 containerd[1607]: 
2026-01-14 23:44:45.449 [INFO][4529] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 23:44:45.481386 containerd[1607]: 2026-01-14 23:44:45.449 [INFO][4529] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.134/26] IPv6=[] ContainerID="f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" HandleID="k8s-pod-network.f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" Workload="ci--4515--1--0--n--abf6d467b1-k8s-calico--kube--controllers--8479c65bc7--v2wlc-eth0" Jan 14 23:44:45.482241 containerd[1607]: 2026-01-14 23:44:45.454 [INFO][4504] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" Namespace="calico-system" Pod="calico-kube-controllers-8479c65bc7-v2wlc" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--kube--controllers--8479c65bc7--v2wlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--abf6d467b1-k8s-calico--kube--controllers--8479c65bc7--v2wlc-eth0", GenerateName:"calico-kube-controllers-8479c65bc7-", Namespace:"calico-system", SelfLink:"", UID:"5ba12bc6-2080-48f6-9bf7-54c301828a15", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8479c65bc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-abf6d467b1", ContainerID:"", Pod:"calico-kube-controllers-8479c65bc7-v2wlc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibe3f01674f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:45.482304 containerd[1607]: 2026-01-14 23:44:45.455 [INFO][4504] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.134/32] ContainerID="f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" Namespace="calico-system" Pod="calico-kube-controllers-8479c65bc7-v2wlc" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--kube--controllers--8479c65bc7--v2wlc-eth0" Jan 14 23:44:45.482304 containerd[1607]: 2026-01-14 23:44:45.455 [INFO][4504] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe3f01674f0 ContainerID="f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" Namespace="calico-system" Pod="calico-kube-controllers-8479c65bc7-v2wlc" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--kube--controllers--8479c65bc7--v2wlc-eth0" Jan 14 23:44:45.482304 containerd[1607]: 2026-01-14 23:44:45.460 [INFO][4504] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" Namespace="calico-system" 
Pod="calico-kube-controllers-8479c65bc7-v2wlc" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--kube--controllers--8479c65bc7--v2wlc-eth0" Jan 14 23:44:45.482367 containerd[1607]: 2026-01-14 23:44:45.462 [INFO][4504] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" Namespace="calico-system" Pod="calico-kube-controllers-8479c65bc7-v2wlc" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--kube--controllers--8479c65bc7--v2wlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--abf6d467b1-k8s-calico--kube--controllers--8479c65bc7--v2wlc-eth0", GenerateName:"calico-kube-controllers-8479c65bc7-", Namespace:"calico-system", SelfLink:"", UID:"5ba12bc6-2080-48f6-9bf7-54c301828a15", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 44, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8479c65bc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-abf6d467b1", ContainerID:"f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723", Pod:"calico-kube-controllers-8479c65bc7-v2wlc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.89.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibe3f01674f0", MAC:"62:29:ce:b6:de:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:45.482413 containerd[1607]: 2026-01-14 23:44:45.478 [INFO][4504] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" Namespace="calico-system" Pod="calico-kube-controllers-8479c65bc7-v2wlc" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-calico--kube--controllers--8479c65bc7--v2wlc-eth0" Jan 14 23:44:45.506754 systemd-networkd[1498]: cali8ee83d07b31: Gained IPv6LL Jan 14 23:44:45.507027 systemd-networkd[1498]: calib42603fd276: Gained IPv6LL Jan 14 23:44:45.511000 audit[4554]: NETFILTER_CFG table=filter:135 family=2 entries=54 op=nft_register_chain pid=4554 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:45.511000 audit[4554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25976 a0=3 a1=ffffff213b10 a2=0 a3=ffffb728afa8 items=0 ppid=3949 pid=4554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.511000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:45.527348 containerd[1607]: 
time="2026-01-14T23:44:45.527069340Z" level=info msg="connecting to shim f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723" address="unix:///run/containerd/s/41be037eaa91d676f878041d14b2a0243be859c41a6aa367df3e25a27db28aed" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:45.551710 kubelet[2831]: E0114 23:44:45.550165 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" podUID="a676c5b0-5016-4d83-8418-6221cf68e214" Jan 14 23:44:45.551710 kubelet[2831]: E0114 23:44:45.550385 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-68w75" podUID="7928b74a-e68d-4722-9a18-4a12587c5970" Jan 14 23:44:45.552918 kubelet[2831]: E0114 23:44:45.552848 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" podUID="f86c2f11-4390-42f5-9590-40a5b08260db" Jan 14 23:44:45.589713 systemd[1]: Started cri-containerd-f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723.scope - libcontainer container f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723. 
Jan 14 23:44:45.618463 systemd-networkd[1498]: calic8d2ba99182: Link UP Jan 14 23:44:45.621191 systemd-networkd[1498]: calic8d2ba99182: Gained carrier Jan 14 23:44:45.641949 containerd[1607]: 2026-01-14 23:44:45.367 [INFO][4505] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--x9dt4-eth0 coredns-668d6bf9bc- kube-system f51854a3-fd90-4629-82ca-74bacf8c914d 801 0 2026-01-14 23:43:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515-1-0-n-abf6d467b1 coredns-668d6bf9bc-x9dt4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic8d2ba99182 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" Namespace="kube-system" Pod="coredns-668d6bf9bc-x9dt4" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--x9dt4-" Jan 14 23:44:45.641949 containerd[1607]: 2026-01-14 23:44:45.367 [INFO][4505] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" Namespace="kube-system" Pod="coredns-668d6bf9bc-x9dt4" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--x9dt4-eth0" Jan 14 23:44:45.641949 containerd[1607]: 2026-01-14 23:44:45.401 [INFO][4535] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" HandleID="k8s-pod-network.dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" Workload="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--x9dt4-eth0" Jan 14 23:44:45.642367 containerd[1607]: 2026-01-14 23:44:45.402 [INFO][4535] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" HandleID="k8s-pod-network.dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" Workload="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--x9dt4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3660), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515-1-0-n-abf6d467b1", "pod":"coredns-668d6bf9bc-x9dt4", "timestamp":"2026-01-14 23:44:45.401699305 +0000 UTC"}, Hostname:"ci-4515-1-0-n-abf6d467b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:44:45.642367 containerd[1607]: 2026-01-14 23:44:45.402 [INFO][4535] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:44:45.642367 containerd[1607]: 2026-01-14 23:44:45.449 [INFO][4535] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
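The "Gained IPv6LL" messages for the cali* veths mean systemd-networkd saw a usable IPv6 link-local address come up on each host-side interface. Assuming EUI-64 address generation (networkd can also be configured for stable-privacy addresses), the link-local address follows directly from the interface MAC; the sketch below derives it for the MAC recorded for calibe3f01674f0 in the endpoint above.

```python
import ipaddress

def eui64_link_local(mac: str) -> ipaddress.IPv6Address:
    """Modified EUI-64 link-local address (RFC 4291) derived from a MAC."""
    octets = bytearray(int(part, 16) for part in mac.split(":"))
    octets[0] ^= 0x02                                   # flip the universal/local bit
    iid = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
    return ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + iid)

# MAC assigned to calibe3f01674f0 in the workload endpoint logged above
print(eui64_link_local("62:29:ce:b6:de:1b"))            # fe80::6029:ceff:feb6:de1b
```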
Jan 14 23:44:45.642367 containerd[1607]: 2026-01-14 23:44:45.450 [INFO][4535] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-abf6d467b1' Jan 14 23:44:45.642367 containerd[1607]: 2026-01-14 23:44:45.515 [INFO][4535] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:45.642367 containerd[1607]: 2026-01-14 23:44:45.547 [INFO][4535] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:45.642367 containerd[1607]: 2026-01-14 23:44:45.560 [INFO][4535] ipam/ipam.go 511: Trying affinity for 192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:45.642367 containerd[1607]: 2026-01-14 23:44:45.564 [INFO][4535] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:45.642367 containerd[1607]: 2026-01-14 23:44:45.573 [INFO][4535] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:45.642566 containerd[1607]: 2026-01-14 23:44:45.574 [INFO][4535] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:45.642566 containerd[1607]: 2026-01-14 23:44:45.577 [INFO][4535] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625 Jan 14 23:44:45.642566 containerd[1607]: 2026-01-14 23:44:45.589 [INFO][4535] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:45.642566 containerd[1607]: 2026-01-14 23:44:45.599 [INFO][4535] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.135/26] block=192.168.89.128/26 handle="k8s-pod-network.dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:45.642566 containerd[1607]: 2026-01-14 23:44:45.600 [INFO][4535] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.135/26] handle="k8s-pod-network.dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:45.642566 containerd[1607]: 2026-01-14 23:44:45.600 [INFO][4535] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
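The ipam.go lines above record one complete Calico block-affinity allocation: the node holds an affinity for 192.168.89.128/26, loads the block, claims the next free address (192.168.89.135 here, with .134 already taken by the kube-controllers endpoint), writes the block back, and only then releases the host-wide IPAM lock. A minimal sketch of the claim step, assuming a simple in-memory view of the block rather than the real Calico datastore and affinity handling:

```python
import ipaddress

def claim_next_ip(block: str, allocated: set[str]) -> ipaddress.IPv4Address:
    """Claim the first free address in an affine IPAM block (in-memory sketch)."""
    for ip in ipaddress.ip_network(block).hosts():
        if str(ip) not in allocated:
            allocated.add(str(ip))                      # "write block in order to claim IPs"
            return ip
    raise RuntimeError(f"block {block} exhausted")

# Assume .129-.134 are already in use; only .134's assignment is visible in this excerpt.
allocated = {f"192.168.89.{n}" for n in range(129, 135)}
print(claim_next_ip("192.168.89.128/26", allocated))    # 192.168.89.135
```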
Jan 14 23:44:45.642566 containerd[1607]: 2026-01-14 23:44:45.601 [INFO][4535] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.135/26] IPv6=[] ContainerID="dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" HandleID="k8s-pod-network.dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" Workload="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--x9dt4-eth0" Jan 14 23:44:45.643312 containerd[1607]: 2026-01-14 23:44:45.604 [INFO][4505] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" Namespace="kube-system" Pod="coredns-668d6bf9bc-x9dt4" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--x9dt4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--x9dt4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f51854a3-fd90-4629-82ca-74bacf8c914d", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-abf6d467b1", ContainerID:"", Pod:"coredns-668d6bf9bc-x9dt4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic8d2ba99182", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:45.643312 containerd[1607]: 2026-01-14 23:44:45.604 [INFO][4505] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.135/32] ContainerID="dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" Namespace="kube-system" Pod="coredns-668d6bf9bc-x9dt4" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--x9dt4-eth0" Jan 14 23:44:45.643312 containerd[1607]: 2026-01-14 23:44:45.604 [INFO][4505] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic8d2ba99182 ContainerID="dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" Namespace="kube-system" Pod="coredns-668d6bf9bc-x9dt4" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--x9dt4-eth0" Jan 14 23:44:45.643312 containerd[1607]: 2026-01-14 23:44:45.620 [INFO][4505] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-x9dt4" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--x9dt4-eth0" Jan 14 23:44:45.643312 containerd[1607]: 2026-01-14 23:44:45.626 [INFO][4505] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" Namespace="kube-system" Pod="coredns-668d6bf9bc-x9dt4" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--x9dt4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--x9dt4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f51854a3-fd90-4629-82ca-74bacf8c914d", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-abf6d467b1", ContainerID:"dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625", Pod:"coredns-668d6bf9bc-x9dt4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic8d2ba99182", MAC:"c2:b1:1c:53:19:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:45.643312 containerd[1607]: 2026-01-14 23:44:45.638 [INFO][4505] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" Namespace="kube-system" Pod="coredns-668d6bf9bc-x9dt4" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--x9dt4-eth0" Jan 14 23:44:45.646000 audit: BPF prog-id=229 op=LOAD Jan 14 23:44:45.647000 audit: BPF prog-id=230 op=LOAD Jan 14 23:44:45.647000 audit[4570]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c180 a2=98 a3=0 items=0 ppid=4560 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631666237663532373338613438303639376561306139323337316464 Jan 14 23:44:45.647000 audit: BPF prog-id=230 op=UNLOAD Jan 14 23:44:45.647000 
audit[4570]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4560 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631666237663532373338613438303639376561306139323337316464 Jan 14 23:44:45.647000 audit: BPF prog-id=231 op=LOAD Jan 14 23:44:45.647000 audit[4570]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c3e8 a2=98 a3=0 items=0 ppid=4560 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631666237663532373338613438303639376561306139323337316464 Jan 14 23:44:45.647000 audit: BPF prog-id=232 op=LOAD Jan 14 23:44:45.647000 audit[4570]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400018c168 a2=98 a3=0 items=0 ppid=4560 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631666237663532373338613438303639376561306139323337316464 Jan 14 23:44:45.647000 audit: BPF prog-id=232 op=UNLOAD Jan 14 23:44:45.647000 audit[4570]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4560 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.647000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631666237663532373338613438303639376561306139323337316464 Jan 14 23:44:45.648000 audit: BPF prog-id=231 op=UNLOAD Jan 14 23:44:45.648000 audit[4570]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4560 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631666237663532373338613438303639376561306139323337316464 Jan 14 23:44:45.648000 audit: BPF prog-id=233 op=LOAD Jan 14 23:44:45.648000 audit[4570]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c648 a2=98 a3=0 items=0 ppid=4560 pid=4570 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.648000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6631666237663532373338613438303639376561306139323337316464 Jan 14 23:44:45.650000 audit[4592]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=4592 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:45.650000 audit[4592]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffe30b4b0 a2=0 a3=1 items=0 ppid=2980 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.650000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:45.655000 audit[4592]: NETFILTER_CFG table=nat:137 family=2 entries=14 op=nft_register_rule pid=4592 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:45.655000 audit[4592]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffffe30b4b0 a2=0 a3=1 items=0 ppid=2980 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.655000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:45.690165 containerd[1607]: time="2026-01-14T23:44:45.689807420Z" level=info msg="connecting to shim dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625" address="unix:///run/containerd/s/da8cd66147c4f959d9bab2a71b502e20d77f07d45c2db0970c6077b1b425fff6" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:45.694000 audit[4618]: NETFILTER_CFG table=filter:138 family=2 entries=54 op=nft_register_chain pid=4618 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:45.694000 audit[4618]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26084 a0=3 a1=ffffd1090490 a2=0 a3=ffffb4ff3fa8 items=0 ppid=3949 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.694000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:45.704207 containerd[1607]: time="2026-01-14T23:44:45.703279852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8479c65bc7-v2wlc,Uid:5ba12bc6-2080-48f6-9bf7-54c301828a15,Namespace:calico-system,Attempt:0,} returns sandbox id \"f1fb7f52738a480697ea0a92371dd92581a78421d3fdf44c43607ac17e95a723\"" Jan 14 23:44:45.706186 containerd[1607]: time="2026-01-14T23:44:45.706154372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 23:44:45.721000 systemd[1]: Started cri-containerd-dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625.scope - libcontainer container 
dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625. Jan 14 23:44:45.734000 audit: BPF prog-id=234 op=LOAD Jan 14 23:44:45.734000 audit: BPF prog-id=235 op=LOAD Jan 14 23:44:45.734000 audit[4628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4617 pid=4628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466626161306639373134393861626430313264663539303263333135 Jan 14 23:44:45.734000 audit: BPF prog-id=235 op=UNLOAD Jan 14 23:44:45.734000 audit[4628]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4617 pid=4628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466626161306639373134393861626430313264663539303263333135 Jan 14 23:44:45.734000 audit: BPF prog-id=236 op=LOAD Jan 14 23:44:45.734000 audit[4628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4617 pid=4628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466626161306639373134393861626430313264663539303263333135 Jan 14 23:44:45.734000 audit: BPF prog-id=237 op=LOAD Jan 14 23:44:45.734000 audit[4628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4617 pid=4628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466626161306639373134393861626430313264663539303263333135 Jan 14 23:44:45.734000 audit: BPF prog-id=237 op=UNLOAD Jan 14 23:44:45.734000 audit[4628]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4617 pid=4628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466626161306639373134393861626430313264663539303263333135 Jan 14 23:44:45.734000 audit: 
BPF prog-id=236 op=UNLOAD Jan 14 23:44:45.734000 audit[4628]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4617 pid=4628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.734000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466626161306639373134393861626430313264663539303263333135 Jan 14 23:44:45.735000 audit: BPF prog-id=238 op=LOAD Jan 14 23:44:45.735000 audit[4628]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4617 pid=4628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.735000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466626161306639373134393861626430313264663539303263333135 Jan 14 23:44:45.759311 containerd[1607]: time="2026-01-14T23:44:45.759238414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x9dt4,Uid:f51854a3-fd90-4629-82ca-74bacf8c914d,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625\"" Jan 14 23:44:45.770260 containerd[1607]: time="2026-01-14T23:44:45.770206466Z" level=info msg="CreateContainer within sandbox \"dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 23:44:45.781638 containerd[1607]: time="2026-01-14T23:44:45.781474495Z" level=info msg="Container 74f8e2678c6747ae525ff4af4713fa26c44d683be8d1c5a0fdf2f372f384324b: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:44:45.791090 containerd[1607]: time="2026-01-14T23:44:45.790484197Z" level=info msg="CreateContainer within sandbox \"dfbaa0f971498abd012df5902c315e89778da23d6837c3acd82409bc04584625\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"74f8e2678c6747ae525ff4af4713fa26c44d683be8d1c5a0fdf2f372f384324b\"" Jan 14 23:44:45.792513 containerd[1607]: time="2026-01-14T23:44:45.791274881Z" level=info msg="StartContainer for \"74f8e2678c6747ae525ff4af4713fa26c44d683be8d1c5a0fdf2f372f384324b\"" Jan 14 23:44:45.793398 containerd[1607]: time="2026-01-14T23:44:45.793286634Z" level=info msg="connecting to shim 74f8e2678c6747ae525ff4af4713fa26c44d683be8d1c5a0fdf2f372f384324b" address="unix:///run/containerd/s/da8cd66147c4f959d9bab2a71b502e20d77f07d45c2db0970c6077b1b425fff6" protocol=ttrpc version=3 Jan 14 23:44:45.823866 systemd[1]: Started cri-containerd-74f8e2678c6747ae525ff4af4713fa26c44d683be8d1c5a0fdf2f372f384324b.scope - libcontainer container 74f8e2678c6747ae525ff4af4713fa26c44d683be8d1c5a0fdf2f372f384324b. 
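The audit PROCTITLE fields in these records are the audited process's command line, hex-encoded with NUL bytes separating the arguments; the long runc and iptables strings above decode to ordinary invocations. A small decoder, fed one of the iptables-restore proctitles from this section:

```python
def decode_proctitle(hex_str: str) -> list[str]:
    """Decode an audit PROCTITLE value (hex-encoded, NUL-separated argv)."""
    raw = bytes.fromhex(hex_str)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

# PROCTITLE value from the NETFILTER_CFG events logged above
hex_value = (
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
)
print(decode_proctitle(hex_value))
# ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```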
Jan 14 23:44:45.845000 audit: BPF prog-id=239 op=LOAD Jan 14 23:44:45.846000 audit: BPF prog-id=240 op=LOAD Jan 14 23:44:45.846000 audit[4654]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4617 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734663865323637386336373437616535323566663461663437313366 Jan 14 23:44:45.847000 audit: BPF prog-id=240 op=UNLOAD Jan 14 23:44:45.847000 audit[4654]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4617 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734663865323637386336373437616535323566663461663437313366 Jan 14 23:44:45.848000 audit: BPF prog-id=241 op=LOAD Jan 14 23:44:45.848000 audit[4654]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4617 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734663865323637386336373437616535323566663461663437313366 Jan 14 23:44:45.848000 audit: BPF prog-id=242 op=LOAD Jan 14 23:44:45.848000 audit[4654]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4617 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734663865323637386336373437616535323566663461663437313366 Jan 14 23:44:45.848000 audit: BPF prog-id=242 op=UNLOAD Jan 14 23:44:45.848000 audit[4654]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4617 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734663865323637386336373437616535323566663461663437313366 Jan 14 23:44:45.848000 audit: BPF prog-id=241 op=UNLOAD Jan 14 23:44:45.848000 audit[4654]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4617 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734663865323637386336373437616535323566663461663437313366 Jan 14 23:44:45.848000 audit: BPF prog-id=243 op=LOAD Jan 14 23:44:45.848000 audit[4654]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4617 pid=4654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:45.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734663865323637386336373437616535323566663461663437313366 Jan 14 23:44:45.875463 containerd[1607]: time="2026-01-14T23:44:45.875423496Z" level=info msg="StartContainer for \"74f8e2678c6747ae525ff4af4713fa26c44d683be8d1c5a0fdf2f372f384324b\" returns successfully" Jan 14 23:44:46.040309 containerd[1607]: time="2026-01-14T23:44:46.040179457Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:46.042351 containerd[1607]: time="2026-01-14T23:44:46.042284733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 23:44:46.042556 containerd[1607]: time="2026-01-14T23:44:46.042335256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:46.042974 kubelet[2831]: E0114 23:44:46.042862 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:44:46.043099 kubelet[2831]: E0114 23:44:46.043081 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:44:46.043410 kubelet[2831]: E0114 23:44:46.043333 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cdjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8479c65bc7-v2wlc_calico-system(5ba12bc6-2080-48f6-9bf7-54c301828a15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:46.044764 kubelet[2831]: E0114 23:44:46.044709 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" podUID="5ba12bc6-2080-48f6-9bf7-54c301828a15" Jan 14 23:44:46.209883 systemd-networkd[1498]: cali56178c37720: Gained IPv6LL Jan 14 23:44:46.465972 systemd-networkd[1498]: calibe3f01674f0: Gained IPv6LL Jan 14 23:44:46.558328 kubelet[2831]: E0114 23:44:46.557860 2831 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" podUID="5ba12bc6-2080-48f6-9bf7-54c301828a15" Jan 14 23:44:46.558328 kubelet[2831]: E0114 23:44:46.558281 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" podUID="a676c5b0-5016-4d83-8418-6221cf68e214" Jan 14 23:44:46.574316 kubelet[2831]: I0114 23:44:46.574224 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-x9dt4" podStartSLOduration=47.574204026 podStartE2EDuration="47.574204026s" podCreationTimestamp="2026-01-14 23:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:44:46.572288321 +0000 UTC m=+54.396534601" watchObservedRunningTime="2026-01-14 23:44:46.574204026 +0000 UTC m=+54.398450306" Jan 14 23:44:46.598000 audit[4688]: NETFILTER_CFG table=filter:139 family=2 entries=20 op=nft_register_rule pid=4688 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:46.598000 audit[4688]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd8eb4f80 a2=0 a3=1 items=0 ppid=2980 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:46.598000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:46.607000 audit[4688]: NETFILTER_CFG table=nat:140 family=2 entries=14 op=nft_register_rule pid=4688 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:46.607000 audit[4688]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd8eb4f80 a2=0 a3=1 items=0 ppid=2980 pid=4688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:46.607000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:46.850396 systemd-networkd[1498]: calic8d2ba99182: Gained IPv6LL Jan 14 23:44:47.301189 containerd[1607]: time="2026-01-14T23:44:47.300970249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2hdwq,Uid:4dde8273-6048-49a9-af62-dad2628bc3c0,Namespace:kube-system,Attempt:0,}" Jan 14 23:44:47.458729 systemd-networkd[1498]: cali9662a07a259: Link UP Jan 14 23:44:47.459787 systemd-networkd[1498]: cali9662a07a259: Gained carrier Jan 14 23:44:47.483494 
containerd[1607]: 2026-01-14 23:44:47.352 [INFO][4690] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--2hdwq-eth0 coredns-668d6bf9bc- kube-system 4dde8273-6048-49a9-af62-dad2628bc3c0 811 0 2026-01-14 23:43:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515-1-0-n-abf6d467b1 coredns-668d6bf9bc-2hdwq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9662a07a259 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2hdwq" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--2hdwq-" Jan 14 23:44:47.483494 containerd[1607]: 2026-01-14 23:44:47.352 [INFO][4690] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2hdwq" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--2hdwq-eth0" Jan 14 23:44:47.483494 containerd[1607]: 2026-01-14 23:44:47.386 [INFO][4701] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" HandleID="k8s-pod-network.d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" Workload="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--2hdwq-eth0" Jan 14 23:44:47.483494 containerd[1607]: 2026-01-14 23:44:47.386 [INFO][4701] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" HandleID="k8s-pod-network.d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" Workload="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--2hdwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b010), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515-1-0-n-abf6d467b1", "pod":"coredns-668d6bf9bc-2hdwq", "timestamp":"2026-01-14 23:44:47.38670658 +0000 UTC"}, Hostname:"ci-4515-1-0-n-abf6d467b1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:44:47.483494 containerd[1607]: 2026-01-14 23:44:47.386 [INFO][4701] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:44:47.483494 containerd[1607]: 2026-01-14 23:44:47.387 [INFO][4701] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:44:47.483494 containerd[1607]: 2026-01-14 23:44:47.387 [INFO][4701] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-abf6d467b1' Jan 14 23:44:47.483494 containerd[1607]: 2026-01-14 23:44:47.402 [INFO][4701] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:47.483494 containerd[1607]: 2026-01-14 23:44:47.411 [INFO][4701] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:47.483494 containerd[1607]: 2026-01-14 23:44:47.419 [INFO][4701] ipam/ipam.go 511: Trying affinity for 192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:47.483494 containerd[1607]: 2026-01-14 23:44:47.422 [INFO][4701] ipam/ipam.go 158: Attempting to load block cidr=192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:47.483494 containerd[1607]: 2026-01-14 23:44:47.426 [INFO][4701] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.89.128/26 host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:47.483494 containerd[1607]: 2026-01-14 23:44:47.427 [INFO][4701] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.89.128/26 handle="k8s-pod-network.d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:47.483494 containerd[1607]: 2026-01-14 23:44:47.429 [INFO][4701] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d Jan 14 23:44:47.483494 containerd[1607]: 2026-01-14 23:44:47.437 [INFO][4701] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.89.128/26 handle="k8s-pod-network.d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:47.483494 containerd[1607]: 2026-01-14 23:44:47.449 [INFO][4701] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.89.136/26] block=192.168.89.128/26 handle="k8s-pod-network.d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:47.483494 containerd[1607]: 2026-01-14 23:44:47.449 [INFO][4701] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.89.136/26] handle="k8s-pod-network.d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" host="ci-4515-1-0-n-abf6d467b1" Jan 14 23:44:47.483494 containerd[1607]: 2026-01-14 23:44:47.449 [INFO][4701] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 23:44:47.483494 containerd[1607]: 2026-01-14 23:44:47.449 [INFO][4701] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.89.136/26] IPv6=[] ContainerID="d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" HandleID="k8s-pod-network.d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" Workload="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--2hdwq-eth0" Jan 14 23:44:47.486278 containerd[1607]: 2026-01-14 23:44:47.454 [INFO][4690] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2hdwq" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--2hdwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--2hdwq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4dde8273-6048-49a9-af62-dad2628bc3c0", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-abf6d467b1", ContainerID:"", Pod:"coredns-668d6bf9bc-2hdwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9662a07a259", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:47.486278 containerd[1607]: 2026-01-14 23:44:47.454 [INFO][4690] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.89.136/32] ContainerID="d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2hdwq" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--2hdwq-eth0" Jan 14 23:44:47.486278 containerd[1607]: 2026-01-14 23:44:47.454 [INFO][4690] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9662a07a259 ContainerID="d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2hdwq" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--2hdwq-eth0" Jan 14 23:44:47.486278 containerd[1607]: 2026-01-14 23:44:47.459 [INFO][4690] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-2hdwq" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--2hdwq-eth0" Jan 14 23:44:47.486278 containerd[1607]: 2026-01-14 23:44:47.460 [INFO][4690] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2hdwq" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--2hdwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--2hdwq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4dde8273-6048-49a9-af62-dad2628bc3c0", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 43, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-abf6d467b1", ContainerID:"d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d", Pod:"coredns-668d6bf9bc-2hdwq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.89.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9662a07a259", MAC:"46:89:dc:c5:d1:6d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:44:47.486278 containerd[1607]: 2026-01-14 23:44:47.479 [INFO][4690] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" Namespace="kube-system" Pod="coredns-668d6bf9bc-2hdwq" WorkloadEndpoint="ci--4515--1--0--n--abf6d467b1-k8s-coredns--668d6bf9bc--2hdwq-eth0" Jan 14 23:44:47.513944 kernel: kauditd_printk_skb: 186 callbacks suppressed Jan 14 23:44:47.514057 kernel: audit: type=1325 audit(1768434287.510:722): table=filter:141 family=2 entries=48 op=nft_register_chain pid=4719 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:47.514092 kernel: audit: type=1300 audit(1768434287.510:722): arch=c00000b7 syscall=211 success=yes exit=22688 a0=3 a1=ffffff29ccf0 a2=0 a3=ffffbb012fa8 items=0 ppid=3949 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.510000 audit[4719]: NETFILTER_CFG table=filter:141 family=2 entries=48 op=nft_register_chain pid=4719 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:44:47.510000 audit[4719]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22688 a0=3 a1=ffffff29ccf0 a2=0 a3=ffffbb012fa8 items=0 ppid=3949 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.510000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:47.518368 kernel: audit: type=1327 audit(1768434287.510:722): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:44:47.523783 containerd[1607]: time="2026-01-14T23:44:47.523743734Z" level=info msg="connecting to shim d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d" address="unix:///run/containerd/s/7ea92b6eee9b6bd59528109d9f1b3606e1ebcc7af89cbf4b5fa0074902bf8083" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:44:47.547884 systemd[1]: Started cri-containerd-d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d.scope - libcontainer container d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d. Jan 14 23:44:47.562992 kubelet[2831]: E0114 23:44:47.562795 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" podUID="5ba12bc6-2080-48f6-9bf7-54c301828a15" Jan 14 23:44:47.573000 audit: BPF prog-id=244 op=LOAD Jan 14 23:44:47.576602 kernel: audit: type=1334 audit(1768434287.573:723): prog-id=244 op=LOAD Jan 14 23:44:47.576708 kernel: audit: type=1334 audit(1768434287.574:724): prog-id=245 op=LOAD Jan 14 23:44:47.574000 audit: BPF prog-id=245 op=LOAD Jan 14 23:44:47.574000 audit[4739]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4728 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.579749 kernel: audit: type=1300 audit(1768434287.574:724): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4728 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432653264303864383130353463663537633638663865666464653134 Jan 14 23:44:47.583304 kernel: audit: type=1327 audit(1768434287.574:724): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432653264303864383130353463663537633638663865666464653134 Jan 14 23:44:47.574000 audit: BPF prog-id=245 op=UNLOAD Jan 14 23:44:47.574000 audit[4739]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4728 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.587111 kernel: audit: type=1334 audit(1768434287.574:725): prog-id=245 op=UNLOAD Jan 14 23:44:47.587236 kernel: audit: type=1300 audit(1768434287.574:725): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4728 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432653264303864383130353463663537633638663865666464653134 Jan 14 23:44:47.589559 kernel: audit: type=1327 audit(1768434287.574:725): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432653264303864383130353463663537633638663865666464653134 Jan 14 23:44:47.574000 audit: BPF prog-id=246 op=LOAD Jan 14 23:44:47.574000 audit[4739]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4728 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432653264303864383130353463663537633638663865666464653134 Jan 14 23:44:47.574000 audit: BPF prog-id=247 op=LOAD Jan 14 23:44:47.574000 audit[4739]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4728 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432653264303864383130353463663537633638663865666464653134 Jan 14 23:44:47.574000 audit: BPF prog-id=247 op=UNLOAD Jan 14 23:44:47.574000 audit[4739]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4728 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.574000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432653264303864383130353463663537633638663865666464653134 Jan 14 23:44:47.574000 audit: BPF prog-id=246 op=UNLOAD Jan 14 23:44:47.574000 audit[4739]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4728 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432653264303864383130353463663537633638663865666464653134 Jan 14 23:44:47.574000 audit: BPF prog-id=248 op=LOAD Jan 14 23:44:47.574000 audit[4739]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4728 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.574000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432653264303864383130353463663537633638663865666464653134 Jan 14 23:44:47.626367 containerd[1607]: time="2026-01-14T23:44:47.626292657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2hdwq,Uid:4dde8273-6048-49a9-af62-dad2628bc3c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d\"" Jan 14 23:44:47.632427 containerd[1607]: time="2026-01-14T23:44:47.631543981Z" level=info msg="CreateContainer within sandbox \"d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 23:44:47.635000 audit[4765]: NETFILTER_CFG table=filter:142 family=2 entries=17 op=nft_register_rule pid=4765 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:47.635000 audit[4765]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc3a88da0 a2=0 a3=1 items=0 ppid=2980 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.635000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:47.640000 audit[4765]: NETFILTER_CFG table=nat:143 family=2 entries=35 op=nft_register_chain pid=4765 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:47.640000 audit[4765]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffc3a88da0 a2=0 a3=1 items=0 ppid=2980 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.640000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:47.646239 containerd[1607]: time="2026-01-14T23:44:47.646200176Z" level=info msg="Container 8cf7b36210774780d744f179ccebb5a8d8c63878c5a07ea50739bb07ea43b152: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:44:47.656530 containerd[1607]: time="2026-01-14T23:44:47.656480254Z" level=info msg="CreateContainer within sandbox \"d2e2d08d81054cf57c68f8efdde14a5046535a504b7ce4d2ddb4458e1143bf6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8cf7b36210774780d744f179ccebb5a8d8c63878c5a07ea50739bb07ea43b152\"" Jan 14 23:44:47.658741 containerd[1607]: time="2026-01-14T23:44:47.658681974Z" level=info msg="StartContainer for \"8cf7b36210774780d744f179ccebb5a8d8c63878c5a07ea50739bb07ea43b152\"" Jan 14 23:44:47.659993 containerd[1607]: time="2026-01-14T23:44:47.659947042Z" level=info msg="connecting to shim 8cf7b36210774780d744f179ccebb5a8d8c63878c5a07ea50739bb07ea43b152" address="unix:///run/containerd/s/7ea92b6eee9b6bd59528109d9f1b3606e1ebcc7af89cbf4b5fa0074902bf8083" protocol=ttrpc version=3 Jan 14 23:44:47.691963 systemd[1]: Started cri-containerd-8cf7b36210774780d744f179ccebb5a8d8c63878c5a07ea50739bb07ea43b152.scope - libcontainer container 8cf7b36210774780d744f179ccebb5a8d8c63878c5a07ea50739bb07ea43b152. Jan 14 23:44:47.706000 audit: BPF prog-id=249 op=LOAD Jan 14 23:44:47.707000 audit: BPF prog-id=250 op=LOAD Jan 14 23:44:47.707000 audit[4766]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=4728 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663762333632313037373437383064373434663137396363656262 Jan 14 23:44:47.707000 audit: BPF prog-id=250 op=UNLOAD Jan 14 23:44:47.707000 audit[4766]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4728 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663762333632313037373437383064373434663137396363656262 Jan 14 23:44:47.707000 audit: BPF prog-id=251 op=LOAD Jan 14 23:44:47.707000 audit[4766]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=4728 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663762333632313037373437383064373434663137396363656262 Jan 14 23:44:47.707000 audit: BPF prog-id=252 op=LOAD Jan 14 23:44:47.707000 
audit[4766]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=4728 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663762333632313037373437383064373434663137396363656262 Jan 14 23:44:47.707000 audit: BPF prog-id=252 op=UNLOAD Jan 14 23:44:47.707000 audit[4766]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4728 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663762333632313037373437383064373434663137396363656262 Jan 14 23:44:47.707000 audit: BPF prog-id=251 op=UNLOAD Jan 14 23:44:47.707000 audit[4766]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4728 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663762333632313037373437383064373434663137396363656262 Jan 14 23:44:47.707000 audit: BPF prog-id=253 op=LOAD Jan 14 23:44:47.707000 audit[4766]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=4728 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:47.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3863663762333632313037373437383064373434663137396363656262 Jan 14 23:44:47.730384 containerd[1607]: time="2026-01-14T23:44:47.730244335Z" level=info msg="StartContainer for \"8cf7b36210774780d744f179ccebb5a8d8c63878c5a07ea50739bb07ea43b152\" returns successfully" Jan 14 23:44:48.599334 kubelet[2831]: I0114 23:44:48.599254 2831 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2hdwq" podStartSLOduration=49.599235494 podStartE2EDuration="49.599235494s" podCreationTimestamp="2026-01-14 23:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:44:48.59915785 +0000 UTC m=+56.423404130" watchObservedRunningTime="2026-01-14 23:44:48.599235494 +0000 UTC m=+56.423481774" Jan 14 23:44:48.622000 audit[4800]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=4800 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:48.622000 audit[4800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe50713a0 a2=0 a3=1 items=0 ppid=2980 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:48.622000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:48.627000 audit[4800]: NETFILTER_CFG table=nat:145 family=2 entries=44 op=nft_register_rule pid=4800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:48.627000 audit[4800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffe50713a0 a2=0 a3=1 items=0 ppid=2980 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:48.627000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:48.641850 systemd-networkd[1498]: cali9662a07a259: Gained IPv6LL Jan 14 23:44:49.613000 audit[4808]: NETFILTER_CFG table=filter:146 family=2 entries=14 op=nft_register_rule pid=4808 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:49.613000 audit[4808]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe0d809e0 a2=0 a3=1 items=0 ppid=2980 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:49.613000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:49.630000 audit[4808]: NETFILTER_CFG table=nat:147 family=2 entries=56 op=nft_register_chain pid=4808 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:44:49.630000 audit[4808]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffe0d809e0 a2=0 a3=1 items=0 ppid=2980 pid=4808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:49.630000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:44:54.309542 containerd[1607]: time="2026-01-14T23:44:54.309464093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:44:54.648818 containerd[1607]: time="2026-01-14T23:44:54.648136720Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:54.651032 containerd[1607]: time="2026-01-14T23:44:54.650949261Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:44:54.651852 containerd[1607]: time="2026-01-14T23:44:54.650984663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:54.652306 
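(annotation) Every pull in this stretch fails the same way: containerd reports "fetch failed after status: 404 Not Found" from ghcr.io, so the v3.30.4 tags of the flatcar/calico images do not resolve on that registry. One way to confirm that independently of the kubelet's back-off is to query the registry's OCI distribution API for the manifest. The sketch below does this for ghcr.io and assumes its standard anonymous token flow for public repositories; the token endpoint, scope format and media types are assumptions about the registry, not something taken from this log.

    import json
    import urllib.error
    import urllib.request

    # Ask the registry directly whether a tag exists. Assumes GHCR's usual
    # anonymous pull-token flow for public repositories.
    def ghcr_tag_exists(repo: str, tag: str) -> bool:
        token_url = ("https://ghcr.io/token?service=ghcr.io"
                     f"&scope=repository:{repo}:pull")
        token = json.load(urllib.request.urlopen(token_url))["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            },
            method="HEAD",
        )
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:   # the same "404 Not Found" containerd reports above
                return False
            raise

    print(ghcr_tag_exists("flatcar/calico/whisker", "v3.30.4"))

A False result means the tag is absent on the registry side, so the back-off retries seen later in this log cannot succeed until the image reference or the registry content changes.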
kubelet[2831]: E0114 23:44:54.652246 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:44:54.652809 kubelet[2831]: E0114 23:44:54.652314 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:44:54.652809 kubelet[2831]: E0114 23:44:54.652435 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:71d58048d3ba4bd9a20f57135d4493c5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwn68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64f4bfdcf7-qqvhh_calico-system(462591b6-7c04-4bb8-91df-676e6e4e63fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:54.655656 containerd[1607]: time="2026-01-14T23:44:54.655471568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 23:44:54.995696 containerd[1607]: time="2026-01-14T23:44:54.995546345Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:54.996901 containerd[1607]: time="2026-01-14T23:44:54.996841690Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:44:54.997164 containerd[1607]: time="2026-01-14T23:44:54.996879372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:54.997204 kubelet[2831]: E0114 23:44:54.997119 2831 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:44:54.997204 kubelet[2831]: E0114 23:44:54.997172 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:44:54.997370 kubelet[2831]: E0114 23:44:54.997300 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dwn68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64f4bfdcf7-qqvhh_calico-system(462591b6-7c04-4bb8-91df-676e6e4e63fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:54.998871 kubelet[2831]: E0114 23:44:54.998713 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64f4bfdcf7-qqvhh" podUID="462591b6-7c04-4bb8-91df-676e6e4e63fa" Jan 14 23:44:56.305069 containerd[1607]: time="2026-01-14T23:44:56.304945991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:44:56.638375 containerd[1607]: time="2026-01-14T23:44:56.638223418Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:56.640263 containerd[1607]: time="2026-01-14T23:44:56.640099351Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:44:56.640263 containerd[1607]: time="2026-01-14T23:44:56.640215756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:56.640499 kubelet[2831]: E0114 23:44:56.640442 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:44:56.641178 kubelet[2831]: E0114 23:44:56.640540 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:44:56.641178 kubelet[2831]: E0114 23:44:56.640847 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx8dz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-866b97bccb-j7fj2_calico-apiserver(f86c2f11-4390-42f5-9590-40a5b08260db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:56.641675 containerd[1607]: time="2026-01-14T23:44:56.641154923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 23:44:56.641969 kubelet[2831]: E0114 23:44:56.641930 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" podUID="f86c2f11-4390-42f5-9590-40a5b08260db" Jan 14 23:44:56.983220 containerd[1607]: time="2026-01-14T23:44:56.982877806Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:56.985315 containerd[1607]: time="2026-01-14T23:44:56.984917347Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 23:44:56.985315 containerd[1607]: time="2026-01-14T23:44:56.985044513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:56.985522 kubelet[2831]: E0114 23:44:56.985402 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:44:56.985522 kubelet[2831]: E0114 23:44:56.985480 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:44:56.985712 kubelet[2831]: E0114 23:44:56.985647 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbqjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-68w75_calico-system(7928b74a-e68d-4722-9a18-4a12587c5970): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:56.987814 kubelet[2831]: E0114 23:44:56.987729 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-68w75" podUID="7928b74a-e68d-4722-9a18-4a12587c5970" Jan 14 23:44:58.303638 containerd[1607]: time="2026-01-14T23:44:58.302644211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 
23:44:58.644404 containerd[1607]: time="2026-01-14T23:44:58.644232142Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:58.646136 containerd[1607]: time="2026-01-14T23:44:58.645913161Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:44:58.646493 containerd[1607]: time="2026-01-14T23:44:58.645986197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:58.646634 kubelet[2831]: E0114 23:44:58.646464 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:44:58.646634 kubelet[2831]: E0114 23:44:58.646538 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:44:58.647427 kubelet[2831]: E0114 23:44:58.646742 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l75kt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kh6gk_calico-system(cced8c28-8577-4bc8-b036-c07227b38f48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:58.650153 containerd[1607]: time="2026-01-14T23:44:58.650099070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:44:58.994278 containerd[1607]: time="2026-01-14T23:44:58.994136414Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:58.995665 containerd[1607]: time="2026-01-14T23:44:58.995608846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:44:58.995876 containerd[1607]: time="2026-01-14T23:44:58.995711320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:58.996002 kubelet[2831]: E0114 23:44:58.995881 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:44:58.996002 kubelet[2831]: E0114 23:44:58.995938 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:44:58.996198 kubelet[2831]: E0114 23:44:58.996077 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l75kt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kh6gk_calico-system(cced8c28-8577-4bc8-b036-c07227b38f48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:58.997838 kubelet[2831]: E0114 23:44:58.997767 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:44:59.301823 containerd[1607]: time="2026-01-14T23:44:59.301754242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:44:59.665687 containerd[1607]: time="2026-01-14T23:44:59.665477766Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:44:59.667911 containerd[1607]: time="2026-01-14T23:44:59.667819912Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:44:59.668151 containerd[1607]: time="2026-01-14T23:44:59.667979383Z" 
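(annotation) Each failed container start is logged three times (log.go, kuberuntime_image.go, kuberuntime_manager.go), and the kuberuntime_manager record embeds the entire Container spec, which buries the useful signal. The pod_workers "Error syncing pod, skipping" records carry everything needed for a summary: the pod and the image or images that failed. A small filter could reduce them to that; this is a sketch written against the excerpts above, and it assumes one journal record per input line (for example output piped from journalctl -u kubelet.service), which is not how this flattened excerpt is laid out.

    import re
    import sys

    # Reduce kubelet "Error syncing pod, skipping" records to pod + image.
    # The regexes match the escaped-quote form seen in the journal excerpts above.
    IMAGE = re.compile(r'failed to pull and unpack image \\+"([^"\\]+)\\+"')
    POD = re.compile(r'pod="([^"]+)"')

    seen = set()
    for line in sys.stdin:
        pod = POD.search(line)
        if not pod:
            continue
        for image in IMAGE.findall(line):
            key = (pod.group(1), image)
            if key not in seen:
                seen.add(key)
                print(f"{pod.group(1)}: {image}")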
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:44:59.668533 kubelet[2831]: E0114 23:44:59.668489 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:44:59.669499 kubelet[2831]: E0114 23:44:59.668554 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:44:59.669499 kubelet[2831]: E0114 23:44:59.669083 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qchrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-866b97bccb-lgshp_calico-apiserver(a676c5b0-5016-4d83-8418-6221cf68e214): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:44:59.670541 kubelet[2831]: E0114 23:44:59.670492 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" podUID="a676c5b0-5016-4d83-8418-6221cf68e214" Jan 14 23:45:02.313525 containerd[1607]: time="2026-01-14T23:45:02.313441358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 23:45:02.648352 containerd[1607]: time="2026-01-14T23:45:02.647783051Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:45:02.650557 containerd[1607]: time="2026-01-14T23:45:02.650335326Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 23:45:02.651028 kubelet[2831]: E0114 23:45:02.650914 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:45:02.651510 containerd[1607]: time="2026-01-14T23:45:02.650410123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 23:45:02.651763 kubelet[2831]: E0114 23:45:02.651707 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:45:02.652092 kubelet[2831]: E0114 23:45:02.651959 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cdjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8479c65bc7-v2wlc_calico-system(5ba12bc6-2080-48f6-9bf7-54c301828a15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 23:45:02.653744 kubelet[2831]: E0114 23:45:02.653678 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" podUID="5ba12bc6-2080-48f6-9bf7-54c301828a15" Jan 14 23:45:09.305068 kubelet[2831]: E0114 23:45:09.304161 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64f4bfdcf7-qqvhh" podUID="462591b6-7c04-4bb8-91df-676e6e4e63fa" Jan 14 23:45:10.303012 kubelet[2831]: E0114 23:45:10.301906 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" podUID="f86c2f11-4390-42f5-9590-40a5b08260db" Jan 14 23:45:12.302787 kubelet[2831]: E0114 23:45:12.302727 2831 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" podUID="a676c5b0-5016-4d83-8418-6221cf68e214" Jan 14 23:45:12.305369 kubelet[2831]: E0114 23:45:12.304657 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-68w75" podUID="7928b74a-e68d-4722-9a18-4a12587c5970" Jan 14 23:45:12.306313 kubelet[2831]: E0114 23:45:12.306256 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:45:14.303742 kubelet[2831]: E0114 23:45:14.303100 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" podUID="5ba12bc6-2080-48f6-9bf7-54c301828a15" Jan 14 23:45:21.302306 containerd[1607]: time="2026-01-14T23:45:21.302261806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:45:21.649824 containerd[1607]: time="2026-01-14T23:45:21.649643179Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:45:21.651058 containerd[1607]: time="2026-01-14T23:45:21.651015242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:45:21.651208 containerd[1607]: time="2026-01-14T23:45:21.651099481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:45:21.651435 kubelet[2831]: E0114 23:45:21.651392 
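(annotation) The records from 23:45:09 onward are no longer fresh pull attempts but ImagePullBackOff: the kubelet keeps each failing image in a back-off and only retries once the delay expires, which is why the whisker pull is next attempted at 23:45:21. The delays grow roughly exponentially; the commonly cited kubelet defaults are a 10 second initial delay doubling up to a 5 minute cap, which is stated here as an assumption about upstream defaults, not something recorded in this log. A sketch of that schedule:

    # Exponential back-off schedule of the kind kubelet applies to image pulls.
    # The 10s initial delay and 300s cap are assumed upstream defaults.
    def backoff_schedule(initial=10.0, cap=300.0, attempts=8):
        delay, out = initial, []
        for _ in range(attempts):
            out.append(delay)
            delay = min(delay * 2, cap)
        return out

    print(backoff_schedule())  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]

With those defaults, once the cap is reached a missing tag such as ghcr.io/flatcar/calico/whisker:v3.30.4 is retried about every five minutes until the image reference is corrected.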
2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:45:21.652237 kubelet[2831]: E0114 23:45:21.651865 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:45:21.652237 kubelet[2831]: E0114 23:45:21.652164 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:71d58048d3ba4bd9a20f57135d4493c5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwn68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64f4bfdcf7-qqvhh_calico-system(462591b6-7c04-4bb8-91df-676e6e4e63fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 23:45:21.656477 containerd[1607]: time="2026-01-14T23:45:21.656406534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 23:45:21.991643 containerd[1607]: time="2026-01-14T23:45:21.991209745Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:45:21.994667 containerd[1607]: time="2026-01-14T23:45:21.993524716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:45:21.994667 containerd[1607]: time="2026-01-14T23:45:21.993610475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:45:21.994830 kubelet[2831]: E0114 23:45:21.994098 2831 log.go:32] "PullImage from image service failed" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:45:21.994830 kubelet[2831]: E0114 23:45:21.994161 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:45:21.994830 kubelet[2831]: E0114 23:45:21.994335 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dwn68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64f4bfdcf7-qqvhh_calico-system(462591b6-7c04-4bb8-91df-676e6e4e63fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:45:21.995564 kubelet[2831]: E0114 23:45:21.995460 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-64f4bfdcf7-qqvhh" podUID="462591b6-7c04-4bb8-91df-676e6e4e63fa" Jan 14 23:45:24.304148 containerd[1607]: time="2026-01-14T23:45:24.304085549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 23:45:24.644198 containerd[1607]: time="2026-01-14T23:45:24.643540621Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:45:24.645328 containerd[1607]: time="2026-01-14T23:45:24.645249406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 23:45:24.645715 containerd[1607]: time="2026-01-14T23:45:24.645379885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 23:45:24.645816 kubelet[2831]: E0114 23:45:24.645717 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:45:24.645816 kubelet[2831]: E0114 23:45:24.645794 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:45:24.647445 kubelet[2831]: E0114 23:45:24.645980 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbqjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-68w75_calico-system(7928b74a-e68d-4722-9a18-4a12587c5970): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 23:45:24.647855 kubelet[2831]: E0114 23:45:24.647777 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-68w75" podUID="7928b74a-e68d-4722-9a18-4a12587c5970" Jan 14 23:45:25.302662 containerd[1607]: time="2026-01-14T23:45:25.302556952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:45:25.634439 containerd[1607]: time="2026-01-14T23:45:25.633791708Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:45:25.635862 containerd[1607]: time="2026-01-14T23:45:25.635798414Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:45:25.636001 containerd[1607]: time="2026-01-14T23:45:25.635930053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:45:25.636318 kubelet[2831]: E0114 23:45:25.636246 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:45:25.636384 kubelet[2831]: E0114 23:45:25.636332 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:45:25.637451 kubelet[2831]: E0114 23:45:25.636526 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx8dz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-866b97bccb-j7fj2_calico-apiserver(f86c2f11-4390-42f5-9590-40a5b08260db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:45:25.638613 kubelet[2831]: E0114 23:45:25.638559 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" podUID="f86c2f11-4390-42f5-9590-40a5b08260db" Jan 14 23:45:26.306049 containerd[1607]: time="2026-01-14T23:45:26.305990401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:45:26.650770 containerd[1607]: time="2026-01-14T23:45:26.650626079Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:45:26.653005 containerd[1607]: time="2026-01-14T23:45:26.652947025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:45:26.653209 containerd[1607]: time="2026-01-14T23:45:26.653112864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:45:26.653487 
kubelet[2831]: E0114 23:45:26.653389 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:45:26.653487 kubelet[2831]: E0114 23:45:26.653475 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:45:26.655378 kubelet[2831]: E0114 23:45:26.655320 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qchrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-866b97bccb-lgshp_calico-apiserver(a676c5b0-5016-4d83-8418-6221cf68e214): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:45:26.656701 kubelet[2831]: E0114 23:45:26.656651 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" podUID="a676c5b0-5016-4d83-8418-6221cf68e214" Jan 14 23:45:27.302889 containerd[1607]: time="2026-01-14T23:45:27.302606974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 23:45:27.647691 containerd[1607]: time="2026-01-14T23:45:27.645506265Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:45:27.650612 containerd[1607]: time="2026-01-14T23:45:27.649637085Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:45:27.650612 containerd[1607]: time="2026-01-14T23:45:27.649739484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:45:27.650946 kubelet[2831]: E0114 23:45:27.650897 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:45:27.651076 kubelet[2831]: E0114 23:45:27.651058 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:45:27.651645 kubelet[2831]: E0114 23:45:27.651591 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l75kt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod csi-node-driver-kh6gk_calico-system(cced8c28-8577-4bc8-b036-c07227b38f48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 23:45:27.653633 containerd[1607]: time="2026-01-14T23:45:27.653531626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:45:27.997906 containerd[1607]: time="2026-01-14T23:45:27.997764510Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:45:28.001146 containerd[1607]: time="2026-01-14T23:45:28.000947575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:45:28.001146 containerd[1607]: time="2026-01-14T23:45:28.001084415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:45:28.001637 kubelet[2831]: E0114 23:45:28.001571 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:45:28.002901 kubelet[2831]: E0114 23:45:28.001648 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:45:28.002901 kubelet[2831]: E0114 23:45:28.001774 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l75kt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kh6gk_calico-system(cced8c28-8577-4bc8-b036-c07227b38f48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:45:28.004557 kubelet[2831]: E0114 23:45:28.003435 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:45:28.303424 containerd[1607]: time="2026-01-14T23:45:28.303219808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 23:45:28.643109 containerd[1607]: time="2026-01-14T23:45:28.642860461Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:45:28.645346 containerd[1607]: time="2026-01-14T23:45:28.645171733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 23:45:28.645346 containerd[1607]: 
time="2026-01-14T23:45:28.645288732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 23:45:28.645604 kubelet[2831]: E0114 23:45:28.645533 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:45:28.646089 kubelet[2831]: E0114 23:45:28.645605 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:45:28.646089 kubelet[2831]: E0114 23:45:28.645733 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cdjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-kube-controllers-8479c65bc7-v2wlc_calico-system(5ba12bc6-2080-48f6-9bf7-54c301828a15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 23:45:28.647606 kubelet[2831]: E0114 23:45:28.646844 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" podUID="5ba12bc6-2080-48f6-9bf7-54c301828a15" Jan 14 23:45:34.310055 kubelet[2831]: E0114 23:45:34.309972 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64f4bfdcf7-qqvhh" podUID="462591b6-7c04-4bb8-91df-676e6e4e63fa" Jan 14 23:45:37.302389 kubelet[2831]: E0114 23:45:37.302338 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" podUID="f86c2f11-4390-42f5-9590-40a5b08260db" Jan 14 23:45:38.302499 kubelet[2831]: E0114 23:45:38.302448 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-68w75" podUID="7928b74a-e68d-4722-9a18-4a12587c5970" Jan 14 23:45:39.305774 kubelet[2831]: E0114 23:45:39.305709 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:45:40.302755 kubelet[2831]: E0114 23:45:40.302699 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" podUID="a676c5b0-5016-4d83-8418-6221cf68e214" Jan 14 23:45:43.302646 kubelet[2831]: E0114 23:45:43.300619 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" podUID="5ba12bc6-2080-48f6-9bf7-54c301828a15" Jan 14 23:45:46.306113 kubelet[2831]: E0114 23:45:46.306057 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64f4bfdcf7-qqvhh" podUID="462591b6-7c04-4bb8-91df-676e6e4e63fa" Jan 14 23:45:48.303865 kubelet[2831]: E0114 23:45:48.303752 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" podUID="f86c2f11-4390-42f5-9590-40a5b08260db" Jan 14 23:45:50.307551 kubelet[2831]: E0114 23:45:50.307443 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:45:52.304405 kubelet[2831]: E0114 23:45:52.304227 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" podUID="a676c5b0-5016-4d83-8418-6221cf68e214" Jan 14 23:45:53.301800 kubelet[2831]: E0114 23:45:53.301740 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-68w75" podUID="7928b74a-e68d-4722-9a18-4a12587c5970" Jan 14 23:45:54.302168 kubelet[2831]: E0114 23:45:54.302077 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" podUID="5ba12bc6-2080-48f6-9bf7-54c301828a15" Jan 14 23:46:00.304845 kubelet[2831]: E0114 23:46:00.304805 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" podUID="f86c2f11-4390-42f5-9590-40a5b08260db" Jan 14 23:46:01.304733 kubelet[2831]: E0114 23:46:01.304650 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64f4bfdcf7-qqvhh" podUID="462591b6-7c04-4bb8-91df-676e6e4e63fa" Jan 14 23:46:03.303119 kubelet[2831]: E0114 23:46:03.302541 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" podUID="a676c5b0-5016-4d83-8418-6221cf68e214" Jan 14 23:46:03.303119 kubelet[2831]: E0114 23:46:03.303030 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:46:05.302916 containerd[1607]: time="2026-01-14T23:46:05.302680799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 23:46:05.702682 containerd[1607]: time="2026-01-14T23:46:05.702465460Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:46:05.704354 containerd[1607]: time="2026-01-14T23:46:05.704210616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 23:46:05.704354 containerd[1607]: time="2026-01-14T23:46:05.704305298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 23:46:05.706396 kubelet[2831]: E0114 23:46:05.705744 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:46:05.706396 kubelet[2831]: E0114 23:46:05.705808 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:46:05.706396 kubelet[2831]: E0114 23:46:05.705948 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zbqjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-68w75_calico-system(7928b74a-e68d-4722-9a18-4a12587c5970): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 23:46:05.707532 kubelet[2831]: E0114 23:46:05.707481 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-68w75" podUID="7928b74a-e68d-4722-9a18-4a12587c5970" Jan 14 23:46:08.302151 kubelet[2831]: E0114 23:46:08.301185 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" podUID="5ba12bc6-2080-48f6-9bf7-54c301828a15" Jan 14 23:46:12.303827 containerd[1607]: time="2026-01-14T23:46:12.303780267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:46:12.649072 containerd[1607]: time="2026-01-14T23:46:12.648708663Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:46:12.650797 containerd[1607]: time="2026-01-14T23:46:12.650648387Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:46:12.650797 containerd[1607]: time="2026-01-14T23:46:12.650723589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:46:12.651031 kubelet[2831]: E0114 23:46:12.650987 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:46:12.651513 kubelet[2831]: E0114 23:46:12.651053 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:46:12.651513 kubelet[2831]: E0114 23:46:12.651205 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:71d58048d3ba4bd9a20f57135d4493c5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dwn68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64f4bfdcf7-qqvhh_calico-system(462591b6-7c04-4bb8-91df-676e6e4e63fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 23:46:12.655820 containerd[1607]: time="2026-01-14T23:46:12.655740104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 23:46:12.993970 containerd[1607]: time="2026-01-14T23:46:12.993747782Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:46:12.995924 containerd[1607]: time="2026-01-14T23:46:12.995716547Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:46:12.995924 containerd[1607]: time="2026-01-14T23:46:12.995767388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:46:12.996129 kubelet[2831]: E0114 23:46:12.995992 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:46:12.996129 kubelet[2831]: E0114 23:46:12.996044 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:46:12.996294 kubelet[2831]: E0114 23:46:12.996152 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dwn68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-64f4bfdcf7-qqvhh_calico-system(462591b6-7c04-4bb8-91df-676e6e4e63fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:46:12.997921 kubelet[2831]: E0114 23:46:12.997839 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64f4bfdcf7-qqvhh" podUID="462591b6-7c04-4bb8-91df-676e6e4e63fa" Jan 14 23:46:14.305625 containerd[1607]: time="2026-01-14T23:46:14.305461331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:46:14.652324 containerd[1607]: time="2026-01-14T23:46:14.652185354Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:46:14.653643 containerd[1607]: time="2026-01-14T23:46:14.653599107Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 
14 23:46:14.653643 containerd[1607]: time="2026-01-14T23:46:14.653666749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:46:14.653864 kubelet[2831]: E0114 23:46:14.653823 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:46:14.654120 kubelet[2831]: E0114 23:46:14.653873 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:46:14.654120 kubelet[2831]: E0114 23:46:14.653987 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qchrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-866b97bccb-lgshp_calico-apiserver(a676c5b0-5016-4d83-8418-6221cf68e214): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:46:14.655706 kubelet[2831]: E0114 23:46:14.655659 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" podUID="a676c5b0-5016-4d83-8418-6221cf68e214" Jan 14 23:46:15.303143 containerd[1607]: time="2026-01-14T23:46:15.303100764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:46:15.632054 containerd[1607]: time="2026-01-14T23:46:15.631729089Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:46:15.633752 containerd[1607]: time="2026-01-14T23:46:15.633596693Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:46:15.633752 containerd[1607]: time="2026-01-14T23:46:15.633607334Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:46:15.633916 kubelet[2831]: E0114 23:46:15.633866 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:46:15.633964 kubelet[2831]: E0114 23:46:15.633929 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:46:15.634660 kubelet[2831]: E0114 23:46:15.634423 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx8dz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-866b97bccb-j7fj2_calico-apiserver(f86c2f11-4390-42f5-9590-40a5b08260db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:46:15.635808 kubelet[2831]: E0114 23:46:15.635765 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" podUID="f86c2f11-4390-42f5-9590-40a5b08260db" Jan 14 23:46:16.301506 containerd[1607]: time="2026-01-14T23:46:16.301246063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 23:46:16.637251 containerd[1607]: time="2026-01-14T23:46:16.636942958Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:46:16.638565 containerd[1607]: time="2026-01-14T23:46:16.638409073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:46:16.638565 containerd[1607]: time="2026-01-14T23:46:16.638510555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:46:16.639043 kubelet[2831]: E0114 23:46:16.638819 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:46:16.639043 kubelet[2831]: E0114 23:46:16.638870 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:46:16.639043 kubelet[2831]: E0114 23:46:16.638987 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l75kt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kh6gk_calico-system(cced8c28-8577-4bc8-b036-c07227b38f48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 23:46:16.642516 containerd[1607]: time="2026-01-14T23:46:16.642293205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:46:16.965792 containerd[1607]: time="2026-01-14T23:46:16.965262397Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:46:16.967492 containerd[1607]: time="2026-01-14T23:46:16.967381687Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:46:16.967636 containerd[1607]: time="2026-01-14T23:46:16.967467369Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:46:16.967854 kubelet[2831]: E0114 23:46:16.967783 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:46:16.968038 kubelet[2831]: E0114 23:46:16.967925 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:46:16.968195 kubelet[2831]: E0114 23:46:16.968157 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l75kt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-kh6gk_calico-system(cced8c28-8577-4bc8-b036-c07227b38f48): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:46:16.969896 kubelet[2831]: E0114 23:46:16.969629 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:46:21.304816 kubelet[2831]: E0114 23:46:21.304753 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-68w75" podUID="7928b74a-e68d-4722-9a18-4a12587c5970" Jan 14 23:46:22.302992 containerd[1607]: time="2026-01-14T23:46:22.302940658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 23:46:22.661759 containerd[1607]: time="2026-01-14T23:46:22.661622060Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:46:22.663409 containerd[1607]: time="2026-01-14T23:46:22.663342343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 23:46:22.663748 containerd[1607]: time="2026-01-14T23:46:22.663377824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 23:46:22.663803 kubelet[2831]: E0114 23:46:22.663641 2831 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:46:22.663803 kubelet[2831]: E0114 23:46:22.663687 2831 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:46:22.664266 kubelet[2831]: E0114 23:46:22.664185 2831 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cdjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-8479c65bc7-v2wlc_calico-system(5ba12bc6-2080-48f6-9bf7-54c301828a15): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 23:46:22.665521 kubelet[2831]: E0114 23:46:22.665449 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" podUID="5ba12bc6-2080-48f6-9bf7-54c301828a15" Jan 14 23:46:26.306517 kubelet[2831]: E0114 23:46:26.306446 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64f4bfdcf7-qqvhh" podUID="462591b6-7c04-4bb8-91df-676e6e4e63fa" Jan 14 23:46:27.857885 systemd[1]: Started sshd@7-49.13.216.16:22-68.220.241.50:58540.service - OpenSSH per-connection server daemon (68.220.241.50:58540). Jan 14 23:46:27.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-49.13.216.16:22-68.220.241.50:58540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:46:27.858862 kernel: kauditd_printk_skb: 55 callbacks suppressed Jan 14 23:46:27.859015 kernel: audit: type=1130 audit(1768434387.857:745): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-49.13.216.16:22-68.220.241.50:58540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:28.424000 audit[4952]: USER_ACCT pid=4952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:28.427125 sshd[4952]: Accepted publickey for core from 68.220.241.50 port 58540 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:28.427000 audit[4952]: CRED_ACQ pid=4952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:28.431075 kernel: audit: type=1101 audit(1768434388.424:746): pid=4952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:28.431144 kernel: audit: type=1103 audit(1768434388.427:747): pid=4952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:28.429007 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:28.432968 kernel: audit: type=1006 audit(1768434388.427:748): pid=4952 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jan 14 23:46:28.427000 audit[4952]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe54365a0 a2=3 a3=0 items=0 ppid=1 pid=4952 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:28.436628 kernel: audit: type=1300 audit(1768434388.427:748): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe54365a0 a2=3 a3=0 items=0 ppid=1 pid=4952 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:28.427000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:28.438376 kernel: audit: type=1327 audit(1768434388.427:748): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:28.440449 systemd-logind[1587]: New session 8 of user core. Jan 14 23:46:28.444171 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 14 23:46:28.447000 audit[4952]: USER_START pid=4952 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:28.450000 audit[4955]: CRED_ACQ pid=4955 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:28.454167 kernel: audit: type=1105 audit(1768434388.447:749): pid=4952 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:28.454296 kernel: audit: type=1103 audit(1768434388.450:750): pid=4955 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:28.853618 sshd[4955]: Connection closed by 68.220.241.50 port 58540 Jan 14 23:46:28.854674 sshd-session[4952]: pam_unix(sshd:session): session closed for user core Jan 14 23:46:28.859000 audit[4952]: USER_END pid=4952 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:28.860000 audit[4952]: CRED_DISP pid=4952 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:28.867917 kernel: audit: type=1106 audit(1768434388.859:751): pid=4952 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:28.867995 kernel: audit: type=1104 audit(1768434388.860:752): pid=4952 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:28.867250 systemd[1]: sshd@7-49.13.216.16:22-68.220.241.50:58540.service: Deactivated successfully. Jan 14 23:46:28.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-49.13.216.16:22-68.220.241.50:58540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:28.876471 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 23:46:28.880330 systemd-logind[1587]: Session 8 logged out. Waiting for processes to exit. Jan 14 23:46:28.881870 systemd-logind[1587]: Removed session 8. 
Jan 14 23:46:29.302253 kubelet[2831]: E0114 23:46:29.302141 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" podUID="a676c5b0-5016-4d83-8418-6221cf68e214" Jan 14 23:46:29.302759 kubelet[2831]: E0114 23:46:29.302541 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" podUID="f86c2f11-4390-42f5-9590-40a5b08260db" Jan 14 23:46:30.303533 kubelet[2831]: E0114 23:46:30.303425 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:46:33.301325 kubelet[2831]: E0114 23:46:33.301216 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-68w75" podUID="7928b74a-e68d-4722-9a18-4a12587c5970" Jan 14 23:46:33.972767 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:46:33.972920 kernel: audit: type=1130 audit(1768434393.969:754): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-49.13.216.16:22-68.220.241.50:42638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:33.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-49.13.216.16:22-68.220.241.50:42638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:33.969772 systemd[1]: Started sshd@8-49.13.216.16:22-68.220.241.50:42638.service - OpenSSH per-connection server daemon (68.220.241.50:42638). 
Jan 14 23:46:34.528000 audit[4969]: USER_ACCT pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:34.530230 sshd[4969]: Accepted publickey for core from 68.220.241.50 port 42638 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:34.532639 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:34.531000 audit[4969]: CRED_ACQ pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:34.539058 kernel: audit: type=1101 audit(1768434394.528:755): pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:34.539158 kernel: audit: type=1103 audit(1768434394.531:756): pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:34.541394 kernel: audit: type=1006 audit(1768434394.531:757): pid=4969 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 14 23:46:34.531000 audit[4969]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff592a770 a2=3 a3=0 items=0 ppid=1 pid=4969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:34.544529 kernel: audit: type=1300 audit(1768434394.531:757): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff592a770 a2=3 a3=0 items=0 ppid=1 pid=4969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:34.531000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:34.547982 kernel: audit: type=1327 audit(1768434394.531:757): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:34.549929 systemd-logind[1587]: New session 9 of user core. Jan 14 23:46:34.554868 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 14 23:46:34.559000 audit[4969]: USER_START pid=4969 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:34.563658 kernel: audit: type=1105 audit(1768434394.559:758): pid=4969 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:34.564000 audit[4972]: CRED_ACQ pid=4972 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:34.567697 kernel: audit: type=1103 audit(1768434394.564:759): pid=4972 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:34.931384 sshd[4972]: Connection closed by 68.220.241.50 port 42638 Jan 14 23:46:34.933934 sshd-session[4969]: pam_unix(sshd:session): session closed for user core Jan 14 23:46:34.935000 audit[4969]: USER_END pid=4969 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:34.935000 audit[4969]: CRED_DISP pid=4969 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:34.942193 kernel: audit: type=1106 audit(1768434394.935:760): pid=4969 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:34.942811 kernel: audit: type=1104 audit(1768434394.935:761): pid=4969 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:34.942812 systemd[1]: sshd@8-49.13.216.16:22-68.220.241.50:42638.service: Deactivated successfully. Jan 14 23:46:34.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-49.13.216.16:22-68.220.241.50:42638 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:34.949046 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 23:46:34.951897 systemd-logind[1587]: Session 9 logged out. Waiting for processes to exit. Jan 14 23:46:34.955205 systemd-logind[1587]: Removed session 9. 
Jan 14 23:46:36.304611 kubelet[2831]: E0114 23:46:36.303487 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" podUID="5ba12bc6-2080-48f6-9bf7-54c301828a15" Jan 14 23:46:37.302833 kubelet[2831]: E0114 23:46:37.302762 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64f4bfdcf7-qqvhh" podUID="462591b6-7c04-4bb8-91df-676e6e4e63fa" Jan 14 23:46:40.050659 systemd[1]: Started sshd@9-49.13.216.16:22-68.220.241.50:42652.service - OpenSSH per-connection server daemon (68.220.241.50:42652). Jan 14 23:46:40.054715 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:46:40.054831 kernel: audit: type=1130 audit(1768434400.050:763): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-49.13.216.16:22-68.220.241.50:42652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:40.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-49.13.216.16:22-68.220.241.50:42652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:46:40.302687 kubelet[2831]: E0114 23:46:40.301401 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" podUID="f86c2f11-4390-42f5-9590-40a5b08260db" Jan 14 23:46:40.304234 kubelet[2831]: E0114 23:46:40.303001 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" podUID="a676c5b0-5016-4d83-8418-6221cf68e214" Jan 14 23:46:40.613000 audit[5010]: USER_ACCT pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:40.616831 sshd[5010]: Accepted publickey for core from 68.220.241.50 port 42652 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:40.616000 audit[5010]: CRED_ACQ pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:40.618999 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:40.620662 kernel: audit: type=1101 audit(1768434400.613:764): pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:40.620723 kernel: audit: type=1103 audit(1768434400.616:765): pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:40.622533 kernel: audit: type=1006 audit(1768434400.617:766): pid=5010 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 14 23:46:40.617000 audit[5010]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe79cb9f0 a2=3 a3=0 items=0 ppid=1 pid=5010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:40.625417 kernel: audit: type=1300 audit(1768434400.617:766): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe79cb9f0 a2=3 a3=0 items=0 ppid=1 pid=5010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:40.617000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:40.626600 kernel: audit: type=1327 audit(1768434400.617:766): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:40.632244 systemd-logind[1587]: New session 10 of user core. Jan 14 23:46:40.637982 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 14 23:46:40.643000 audit[5010]: USER_START pid=5010 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:40.649706 kernel: audit: type=1105 audit(1768434400.643:767): pid=5010 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:40.650000 audit[5013]: CRED_ACQ pid=5013 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:40.653620 kernel: audit: type=1103 audit(1768434400.650:768): pid=5013 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:41.021203 sshd[5013]: Connection closed by 68.220.241.50 port 42652 Jan 14 23:46:41.022142 sshd-session[5010]: pam_unix(sshd:session): session closed for user core Jan 14 23:46:41.023000 audit[5010]: USER_END pid=5010 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:41.023000 audit[5010]: CRED_DISP pid=5010 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:41.028480 systemd[1]: sshd@9-49.13.216.16:22-68.220.241.50:42652.service: Deactivated successfully. 
Jan 14 23:46:41.028597 kernel: audit: type=1106 audit(1768434401.023:769): pid=5010 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:41.028630 kernel: audit: type=1104 audit(1768434401.023:770): pid=5010 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:41.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-49.13.216.16:22-68.220.241.50:42652 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:41.032133 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 23:46:41.036379 systemd-logind[1587]: Session 10 logged out. Waiting for processes to exit. Jan 14 23:46:41.038354 systemd-logind[1587]: Removed session 10. Jan 14 23:46:41.134559 systemd[1]: Started sshd@10-49.13.216.16:22-68.220.241.50:42666.service - OpenSSH per-connection server daemon (68.220.241.50:42666). Jan 14 23:46:41.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-49.13.216.16:22-68.220.241.50:42666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:41.703000 audit[5026]: USER_ACCT pid=5026 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:41.704423 sshd[5026]: Accepted publickey for core from 68.220.241.50 port 42666 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:41.705000 audit[5026]: CRED_ACQ pid=5026 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:41.705000 audit[5026]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee5f19e0 a2=3 a3=0 items=0 ppid=1 pid=5026 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:41.705000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:41.707339 sshd-session[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:41.713651 systemd-logind[1587]: New session 11 of user core. Jan 14 23:46:41.715783 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 14 23:46:41.718000 audit[5026]: USER_START pid=5026 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:41.720000 audit[5029]: CRED_ACQ pid=5029 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:42.123832 sshd[5029]: Connection closed by 68.220.241.50 port 42666 Jan 14 23:46:42.124217 sshd-session[5026]: pam_unix(sshd:session): session closed for user core Jan 14 23:46:42.127000 audit[5026]: USER_END pid=5026 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:42.127000 audit[5026]: CRED_DISP pid=5026 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:42.132753 systemd-logind[1587]: Session 11 logged out. Waiting for processes to exit. Jan 14 23:46:42.133018 systemd[1]: sshd@10-49.13.216.16:22-68.220.241.50:42666.service: Deactivated successfully. Jan 14 23:46:42.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-49.13.216.16:22-68.220.241.50:42666 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:42.136072 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 23:46:42.141267 systemd-logind[1587]: Removed session 11. Jan 14 23:46:42.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-49.13.216.16:22-68.220.241.50:42670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:42.227857 systemd[1]: Started sshd@11-49.13.216.16:22-68.220.241.50:42670.service - OpenSSH per-connection server daemon (68.220.241.50:42670). 
Jan 14 23:46:42.772000 audit[5039]: USER_ACCT pid=5039 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:42.774727 sshd[5039]: Accepted publickey for core from 68.220.241.50 port 42670 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:42.774000 audit[5039]: CRED_ACQ pid=5039 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:42.774000 audit[5039]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd6719f70 a2=3 a3=0 items=0 ppid=1 pid=5039 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:42.774000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:42.776316 sshd-session[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:42.785458 systemd-logind[1587]: New session 12 of user core. Jan 14 23:46:42.791858 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 14 23:46:42.793000 audit[5039]: USER_START pid=5039 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:42.796000 audit[5046]: CRED_ACQ pid=5046 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:43.167615 sshd[5046]: Connection closed by 68.220.241.50 port 42670 Jan 14 23:46:43.167880 sshd-session[5039]: pam_unix(sshd:session): session closed for user core Jan 14 23:46:43.168000 audit[5039]: USER_END pid=5039 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:43.169000 audit[5039]: CRED_DISP pid=5039 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:43.176087 systemd[1]: sshd@11-49.13.216.16:22-68.220.241.50:42670.service: Deactivated successfully. Jan 14 23:46:43.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-49.13.216.16:22-68.220.241.50:42670 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:43.179294 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 23:46:43.184208 systemd-logind[1587]: Session 12 logged out. Waiting for processes to exit. 
Jan 14 23:46:43.187083 systemd-logind[1587]: Removed session 12. Jan 14 23:46:43.302867 kubelet[2831]: E0114 23:46:43.302794 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:46:47.302724 kubelet[2831]: E0114 23:46:47.302271 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-68w75" podUID="7928b74a-e68d-4722-9a18-4a12587c5970" Jan 14 23:46:48.294792 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 23:46:48.295027 kernel: audit: type=1130 audit(1768434408.290:790): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-49.13.216.16:22-68.220.241.50:54632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:48.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-49.13.216.16:22-68.220.241.50:54632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:48.292355 systemd[1]: Started sshd@12-49.13.216.16:22-68.220.241.50:54632.service - OpenSSH per-connection server daemon (68.220.241.50:54632). 
Jan 14 23:46:48.875000 audit[5063]: USER_ACCT pid=5063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:48.880790 sshd[5063]: Accepted publickey for core from 68.220.241.50 port 54632 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:48.885158 kernel: audit: type=1101 audit(1768434408.875:791): pid=5063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:48.885231 kernel: audit: type=1103 audit(1768434408.880:792): pid=5063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:48.880000 audit[5063]: CRED_ACQ pid=5063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:48.885003 sshd-session[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:48.888748 kernel: audit: type=1006 audit(1768434408.883:793): pid=5063 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 14 23:46:48.892079 kernel: audit: type=1300 audit(1768434408.883:793): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee403010 a2=3 a3=0 items=0 ppid=1 pid=5063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:48.883000 audit[5063]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee403010 a2=3 a3=0 items=0 ppid=1 pid=5063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:48.898250 kernel: audit: type=1327 audit(1768434408.883:793): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:48.883000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:48.899921 systemd-logind[1587]: New session 13 of user core. Jan 14 23:46:48.906428 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 14 23:46:48.909000 audit[5063]: USER_START pid=5063 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:48.916855 kernel: audit: type=1105 audit(1768434408.909:794): pid=5063 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:48.916000 audit[5066]: CRED_ACQ pid=5066 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:48.920643 kernel: audit: type=1103 audit(1768434408.916:795): pid=5066 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:49.305524 sshd[5066]: Connection closed by 68.220.241.50 port 54632 Jan 14 23:46:49.307006 sshd-session[5063]: pam_unix(sshd:session): session closed for user core Jan 14 23:46:49.307000 audit[5063]: USER_END pid=5063 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:49.307000 audit[5063]: CRED_DISP pid=5063 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:49.316968 kernel: audit: type=1106 audit(1768434409.307:796): pid=5063 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:49.317049 kernel: audit: type=1104 audit(1768434409.307:797): pid=5063 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:49.318473 systemd[1]: sshd@12-49.13.216.16:22-68.220.241.50:54632.service: Deactivated successfully. Jan 14 23:46:49.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-49.13.216.16:22-68.220.241.50:54632 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:49.321820 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 23:46:49.324857 systemd-logind[1587]: Session 13 logged out. Waiting for processes to exit. Jan 14 23:46:49.328223 systemd-logind[1587]: Removed session 13. 
Jan 14 23:46:49.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-49.13.216.16:22-68.220.241.50:54634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:49.423886 systemd[1]: Started sshd@13-49.13.216.16:22-68.220.241.50:54634.service - OpenSSH per-connection server daemon (68.220.241.50:54634). Jan 14 23:46:49.979000 audit[5078]: USER_ACCT pid=5078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:49.981879 sshd[5078]: Accepted publickey for core from 68.220.241.50 port 54634 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:49.982000 audit[5078]: CRED_ACQ pid=5078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:49.982000 audit[5078]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff8a14280 a2=3 a3=0 items=0 ppid=1 pid=5078 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:49.982000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:49.984531 sshd-session[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:49.991526 systemd-logind[1587]: New session 14 of user core. Jan 14 23:46:49.997885 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 14 23:46:50.001000 audit[5078]: USER_START pid=5078 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:50.003000 audit[5081]: CRED_ACQ pid=5081 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:50.541001 sshd[5081]: Connection closed by 68.220.241.50 port 54634 Jan 14 23:46:50.541929 sshd-session[5078]: pam_unix(sshd:session): session closed for user core Jan 14 23:46:50.543000 audit[5078]: USER_END pid=5078 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:50.543000 audit[5078]: CRED_DISP pid=5078 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:50.549624 systemd[1]: sshd@13-49.13.216.16:22-68.220.241.50:54634.service: Deactivated successfully. 
Jan 14 23:46:50.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-49.13.216.16:22-68.220.241.50:54634 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:50.554548 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 23:46:50.558337 systemd-logind[1587]: Session 14 logged out. Waiting for processes to exit. Jan 14 23:46:50.560712 systemd-logind[1587]: Removed session 14. Jan 14 23:46:50.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-49.13.216.16:22-68.220.241.50:54644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:50.648184 systemd[1]: Started sshd@14-49.13.216.16:22-68.220.241.50:54644.service - OpenSSH per-connection server daemon (68.220.241.50:54644). Jan 14 23:46:51.184000 audit[5091]: USER_ACCT pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:51.186073 sshd[5091]: Accepted publickey for core from 68.220.241.50 port 54644 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:51.185000 audit[5091]: CRED_ACQ pid=5091 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:51.185000 audit[5091]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2dfd990 a2=3 a3=0 items=0 ppid=1 pid=5091 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:51.185000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:51.188171 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:51.193746 systemd-logind[1587]: New session 15 of user core. Jan 14 23:46:51.198783 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 14 23:46:51.200000 audit[5091]: USER_START pid=5091 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:51.202000 audit[5094]: CRED_ACQ pid=5094 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:51.302605 kubelet[2831]: E0114 23:46:51.300822 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" podUID="f86c2f11-4390-42f5-9590-40a5b08260db" Jan 14 23:46:51.304595 kubelet[2831]: E0114 23:46:51.304465 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" podUID="5ba12bc6-2080-48f6-9bf7-54c301828a15" Jan 14 23:46:52.199000 audit[5104]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5104 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:46:52.199000 audit[5104]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffcadfe3f0 a2=0 a3=1 items=0 ppid=2980 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:52.199000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:46:52.202000 audit[5104]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5104 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:46:52.202000 audit[5104]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcadfe3f0 a2=0 a3=1 items=0 ppid=2980 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:52.202000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:46:52.265000 audit[5106]: NETFILTER_CFG table=filter:150 family=2 entries=38 op=nft_register_rule pid=5106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:46:52.265000 audit[5106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffffb7d93a0 a2=0 a3=1 items=0 ppid=2980 pid=5106 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:52.265000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:46:52.270000 audit[5106]: NETFILTER_CFG table=nat:151 family=2 entries=20 op=nft_register_rule pid=5106 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:46:52.270000 audit[5106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffb7d93a0 a2=0 a3=1 items=0 ppid=2980 pid=5106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:52.270000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:46:52.286176 sshd[5094]: Connection closed by 68.220.241.50 port 54644 Jan 14 23:46:52.287227 sshd-session[5091]: pam_unix(sshd:session): session closed for user core Jan 14 23:46:52.289000 audit[5091]: USER_END pid=5091 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:52.289000 audit[5091]: CRED_DISP pid=5091 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:52.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-49.13.216.16:22-68.220.241.50:54644 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:52.296242 systemd[1]: sshd@14-49.13.216.16:22-68.220.241.50:54644.service: Deactivated successfully. Jan 14 23:46:52.300271 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 23:46:52.304805 systemd-logind[1587]: Session 15 logged out. Waiting for processes to exit. Jan 14 23:46:52.308065 systemd-logind[1587]: Removed session 15. Jan 14 23:46:52.310955 kubelet[2831]: E0114 23:46:52.310697 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64f4bfdcf7-qqvhh" podUID="462591b6-7c04-4bb8-91df-676e6e4e63fa" Jan 14 23:46:52.401879 systemd[1]: Started sshd@15-49.13.216.16:22-68.220.241.50:54660.service - OpenSSH per-connection server daemon (68.220.241.50:54660). 
Jan 14 23:46:52.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-49.13.216.16:22-68.220.241.50:54660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:52.963000 audit[5113]: USER_ACCT pid=5113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:52.965052 sshd[5113]: Accepted publickey for core from 68.220.241.50 port 54660 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:52.965000 audit[5113]: CRED_ACQ pid=5113 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:52.965000 audit[5113]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe179ea90 a2=3 a3=0 items=0 ppid=1 pid=5113 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:52.965000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:52.968175 sshd-session[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:52.976818 systemd-logind[1587]: New session 16 of user core. Jan 14 23:46:52.982982 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 14 23:46:52.986000 audit[5113]: USER_START pid=5113 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:52.988000 audit[5116]: CRED_ACQ pid=5116 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:53.303522 kubelet[2831]: E0114 23:46:53.302812 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" podUID="a676c5b0-5016-4d83-8418-6221cf68e214" Jan 14 23:46:53.507351 sshd[5116]: Connection closed by 68.220.241.50 port 54660 Jan 14 23:46:53.507920 sshd-session[5113]: pam_unix(sshd:session): session closed for user core Jan 14 23:46:53.509000 audit[5113]: USER_END pid=5113 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:53.513732 kernel: 
kauditd_printk_skb: 43 callbacks suppressed Jan 14 23:46:53.513817 kernel: audit: type=1106 audit(1768434413.509:827): pid=5113 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:53.514950 systemd[1]: sshd@15-49.13.216.16:22-68.220.241.50:54660.service: Deactivated successfully. Jan 14 23:46:53.509000 audit[5113]: CRED_DISP pid=5113 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:53.518395 kernel: audit: type=1104 audit(1768434413.509:828): pid=5113 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:53.520182 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 23:46:53.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-49.13.216.16:22-68.220.241.50:54660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:53.522449 kernel: audit: type=1131 audit(1768434413.515:829): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-49.13.216.16:22-68.220.241.50:54660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:53.523908 systemd-logind[1587]: Session 16 logged out. Waiting for processes to exit. Jan 14 23:46:53.525504 systemd-logind[1587]: Removed session 16. Jan 14 23:46:53.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-49.13.216.16:22-68.220.241.50:42836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:53.617799 systemd[1]: Started sshd@16-49.13.216.16:22-68.220.241.50:42836.service - OpenSSH per-connection server daemon (68.220.241.50:42836). Jan 14 23:46:53.625632 kernel: audit: type=1130 audit(1768434413.616:830): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-49.13.216.16:22-68.220.241.50:42836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:46:54.144000 audit[5126]: USER_ACCT pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:54.149163 sshd[5126]: Accepted publickey for core from 68.220.241.50 port 42836 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:46:54.148000 audit[5126]: CRED_ACQ pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:54.150888 sshd-session[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:46:54.152434 kernel: audit: type=1101 audit(1768434414.144:831): pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:54.152502 kernel: audit: type=1103 audit(1768434414.148:832): pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:54.154547 kernel: audit: type=1006 audit(1768434414.148:833): pid=5126 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jan 14 23:46:54.155860 kernel: audit: type=1300 audit(1768434414.148:833): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe314ed70 a2=3 a3=0 items=0 ppid=1 pid=5126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:54.148000 audit[5126]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe314ed70 a2=3 a3=0 items=0 ppid=1 pid=5126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:54.148000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:54.159521 kernel: audit: type=1327 audit(1768434414.148:833): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:46:54.162283 systemd-logind[1587]: New session 17 of user core. Jan 14 23:46:54.167014 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 14 23:46:54.169000 audit[5126]: USER_START pid=5126 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:54.177607 kernel: audit: type=1105 audit(1768434414.169:834): pid=5126 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:54.175000 audit[5129]: CRED_ACQ pid=5129 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:54.534809 sshd[5129]: Connection closed by 68.220.241.50 port 42836 Jan 14 23:46:54.536259 sshd-session[5126]: pam_unix(sshd:session): session closed for user core Jan 14 23:46:54.539000 audit[5126]: USER_END pid=5126 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:54.539000 audit[5126]: CRED_DISP pid=5126 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:46:54.544207 systemd[1]: sshd@16-49.13.216.16:22-68.220.241.50:42836.service: Deactivated successfully. Jan 14 23:46:54.544000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-49.13.216.16:22-68.220.241.50:42836 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:54.548663 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 23:46:54.552976 systemd-logind[1587]: Session 17 logged out. Waiting for processes to exit. Jan 14 23:46:54.555057 systemd-logind[1587]: Removed session 17. 
Jan 14 23:46:55.302858 kubelet[2831]: E0114 23:46:55.302717 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:46:57.392000 audit[5140]: NETFILTER_CFG table=filter:152 family=2 entries=26 op=nft_register_rule pid=5140 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:46:57.392000 audit[5140]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdb0d2950 a2=0 a3=1 items=0 ppid=2980 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:57.392000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:46:57.400000 audit[5140]: NETFILTER_CFG table=nat:153 family=2 entries=104 op=nft_register_chain pid=5140 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:46:57.400000 audit[5140]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffdb0d2950 a2=0 a3=1 items=0 ppid=2980 pid=5140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:57.400000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:46:59.302787 kubelet[2831]: E0114 23:46:59.301927 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-68w75" podUID="7928b74a-e68d-4722-9a18-4a12587c5970" Jan 14 23:46:59.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-49.13.216.16:22-68.220.241.50:42848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:46:59.655897 systemd[1]: Started sshd@17-49.13.216.16:22-68.220.241.50:42848.service - OpenSSH per-connection server daemon (68.220.241.50:42848). 
Jan 14 23:46:59.658275 kernel: kauditd_printk_skb: 10 callbacks suppressed Jan 14 23:46:59.658371 kernel: audit: type=1130 audit(1768434419.655:841): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-49.13.216.16:22-68.220.241.50:42848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:00.184000 audit[5142]: USER_ACCT pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:00.187267 sshd[5142]: Accepted publickey for core from 68.220.241.50 port 42848 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:47:00.187898 kernel: audit: type=1101 audit(1768434420.184:842): pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:00.187000 audit[5142]: CRED_ACQ pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:00.188859 sshd-session[5142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:47:00.191721 kernel: audit: type=1103 audit(1768434420.187:843): pid=5142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:00.191772 kernel: audit: type=1006 audit(1768434420.187:844): pid=5142 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 14 23:47:00.187000 audit[5142]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff9042690 a2=3 a3=0 items=0 ppid=1 pid=5142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:00.194271 kernel: audit: type=1300 audit(1768434420.187:844): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff9042690 a2=3 a3=0 items=0 ppid=1 pid=5142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:00.187000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:47:00.195935 kernel: audit: type=1327 audit(1768434420.187:844): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:47:00.201020 systemd-logind[1587]: New session 18 of user core. Jan 14 23:47:00.207861 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 14 23:47:00.212000 audit[5142]: USER_START pid=5142 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:00.216000 audit[5145]: CRED_ACQ pid=5145 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:00.220019 kernel: audit: type=1105 audit(1768434420.212:845): pid=5142 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:00.220233 kernel: audit: type=1103 audit(1768434420.216:846): pid=5145 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:00.556671 sshd[5145]: Connection closed by 68.220.241.50 port 42848 Jan 14 23:47:00.557280 sshd-session[5142]: pam_unix(sshd:session): session closed for user core Jan 14 23:47:00.559000 audit[5142]: USER_END pid=5142 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:00.560000 audit[5142]: CRED_DISP pid=5142 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:00.564643 kernel: audit: type=1106 audit(1768434420.559:847): pid=5142 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:00.564734 kernel: audit: type=1104 audit(1768434420.560:848): pid=5142 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:00.566834 systemd[1]: sshd@17-49.13.216.16:22-68.220.241.50:42848.service: Deactivated successfully. Jan 14 23:47:00.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-49.13.216.16:22-68.220.241.50:42848 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:00.572089 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 23:47:00.576688 systemd-logind[1587]: Session 18 logged out. Waiting for processes to exit. Jan 14 23:47:00.579485 systemd-logind[1587]: Removed session 18. 
Jan 14 23:47:03.300924 kubelet[2831]: E0114 23:47:03.300438 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" podUID="5ba12bc6-2080-48f6-9bf7-54c301828a15" Jan 14 23:47:05.303679 kubelet[2831]: E0114 23:47:05.303562 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" podUID="f86c2f11-4390-42f5-9590-40a5b08260db" Jan 14 23:47:05.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-49.13.216.16:22-68.220.241.50:58696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:05.678484 systemd[1]: Started sshd@18-49.13.216.16:22-68.220.241.50:58696.service - OpenSSH per-connection server daemon (68.220.241.50:58696). Jan 14 23:47:05.681328 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:47:05.681400 kernel: audit: type=1130 audit(1768434425.678:850): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-49.13.216.16:22-68.220.241.50:58696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:47:06.229000 audit[5159]: USER_ACCT pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:06.230407 sshd[5159]: Accepted publickey for core from 68.220.241.50 port 58696 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:47:06.232000 audit[5159]: CRED_ACQ pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:06.233937 sshd-session[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:47:06.236413 kernel: audit: type=1101 audit(1768434426.229:851): pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:06.236495 kernel: audit: type=1103 audit(1768434426.232:852): pid=5159 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:06.236527 kernel: audit: type=1006 audit(1768434426.232:853): pid=5159 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 14 23:47:06.237539 kernel: audit: type=1300 audit(1768434426.232:853): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe0de4b50 a2=3 a3=0 items=0 ppid=1 pid=5159 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:06.232000 audit[5159]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe0de4b50 a2=3 a3=0 items=0 ppid=1 pid=5159 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:06.232000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:47:06.242923 kernel: audit: type=1327 audit(1768434426.232:853): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:47:06.250872 systemd-logind[1587]: New session 19 of user core. Jan 14 23:47:06.255837 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 14 23:47:06.261000 audit[5159]: USER_START pid=5159 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:06.265000 audit[5162]: CRED_ACQ pid=5162 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:06.268614 kernel: audit: type=1105 audit(1768434426.261:854): pid=5159 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:06.268672 kernel: audit: type=1103 audit(1768434426.265:855): pid=5162 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:06.608105 sshd[5162]: Connection closed by 68.220.241.50 port 58696 Jan 14 23:47:06.608814 sshd-session[5159]: pam_unix(sshd:session): session closed for user core Jan 14 23:47:06.611000 audit[5159]: USER_END pid=5159 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:06.611000 audit[5159]: CRED_DISP pid=5159 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:06.616010 kernel: audit: type=1106 audit(1768434426.611:856): pid=5159 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:06.616087 kernel: audit: type=1104 audit(1768434426.611:857): pid=5159 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:06.617798 systemd-logind[1587]: Session 19 logged out. Waiting for processes to exit. Jan 14 23:47:06.618943 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 23:47:06.622185 systemd[1]: sshd@18-49.13.216.16:22-68.220.241.50:58696.service: Deactivated successfully. Jan 14 23:47:06.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-49.13.216.16:22-68.220.241.50:58696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:06.631193 systemd-logind[1587]: Removed session 19. 
Jan 14 23:47:07.302483 kubelet[2831]: E0114 23:47:07.302382 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64f4bfdcf7-qqvhh" podUID="462591b6-7c04-4bb8-91df-676e6e4e63fa" Jan 14 23:47:07.302483 kubelet[2831]: E0114 23:47:07.302479 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" podUID="a676c5b0-5016-4d83-8418-6221cf68e214" Jan 14 23:47:07.304162 kubelet[2831]: E0114 23:47:07.304111 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:47:11.302083 kubelet[2831]: E0114 23:47:11.302007 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-68w75" podUID="7928b74a-e68d-4722-9a18-4a12587c5970" Jan 14 23:47:11.726888 systemd[1]: Started sshd@19-49.13.216.16:22-68.220.241.50:58712.service - OpenSSH per-connection server daemon (68.220.241.50:58712). Jan 14 23:47:11.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-49.13.216.16:22-68.220.241.50:58712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:47:11.727606 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:47:11.727664 kernel: audit: type=1130 audit(1768434431.726:859): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-49.13.216.16:22-68.220.241.50:58712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:12.270000 audit[5197]: USER_ACCT pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:12.271578 sshd[5197]: Accepted publickey for core from 68.220.241.50 port 58712 ssh2: RSA SHA256:NQ8mNyV6Y14TPEfzINdN2BBDR6FPNAf+lPdyX5nlvG0 Jan 14 23:47:12.274000 audit[5197]: CRED_ACQ pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:12.275243 sshd-session[5197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:47:12.276718 kernel: audit: type=1101 audit(1768434432.270:860): pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:12.276803 kernel: audit: type=1103 audit(1768434432.274:861): pid=5197 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:12.279915 kernel: audit: type=1006 audit(1768434432.274:862): pid=5197 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 14 23:47:12.274000 audit[5197]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdb4902c0 a2=3 a3=0 items=0 ppid=1 pid=5197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:12.282230 kernel: audit: type=1300 audit(1768434432.274:862): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdb4902c0 a2=3 a3=0 items=0 ppid=1 pid=5197 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:12.274000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:47:12.285849 kernel: audit: type=1327 audit(1768434432.274:862): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:47:12.288156 systemd-logind[1587]: New session 20 of user core. Jan 14 23:47:12.295116 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 14 23:47:12.298000 audit[5197]: USER_START pid=5197 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:12.304300 kernel: audit: type=1105 audit(1768434432.298:863): pid=5197 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:12.304381 kernel: audit: type=1103 audit(1768434432.303:864): pid=5200 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:12.303000 audit[5200]: CRED_ACQ pid=5200 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:12.659536 sshd[5200]: Connection closed by 68.220.241.50 port 58712 Jan 14 23:47:12.660800 sshd-session[5197]: pam_unix(sshd:session): session closed for user core Jan 14 23:47:12.662000 audit[5197]: USER_END pid=5197 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:12.669676 systemd[1]: sshd@19-49.13.216.16:22-68.220.241.50:58712.service: Deactivated successfully. Jan 14 23:47:12.672639 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 23:47:12.663000 audit[5197]: CRED_DISP pid=5197 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:12.677423 kernel: audit: type=1106 audit(1768434432.662:865): pid=5197 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:12.677519 kernel: audit: type=1104 audit(1768434432.663:866): pid=5197 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 14 23:47:12.678119 systemd-logind[1587]: Session 20 logged out. Waiting for processes to exit. Jan 14 23:47:12.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-49.13.216.16:22-68.220.241.50:58712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:47:12.681671 systemd-logind[1587]: Removed session 20. 
Jan 14 23:47:15.302218 kubelet[2831]: E0114 23:47:15.301849 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" podUID="5ba12bc6-2080-48f6-9bf7-54c301828a15" Jan 14 23:47:19.302174 kubelet[2831]: E0114 23:47:19.301287 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" podUID="a676c5b0-5016-4d83-8418-6221cf68e214" Jan 14 23:47:19.304296 kubelet[2831]: E0114 23:47:19.304205 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64f4bfdcf7-qqvhh" podUID="462591b6-7c04-4bb8-91df-676e6e4e63fa" Jan 14 23:47:20.303451 kubelet[2831]: E0114 23:47:20.303380 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-j7fj2" podUID="f86c2f11-4390-42f5-9590-40a5b08260db" Jan 14 23:47:20.303898 kubelet[2831]: E0114 23:47:20.303485 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kh6gk" podUID="cced8c28-8577-4bc8-b036-c07227b38f48" Jan 14 23:47:24.301473 kubelet[2831]: E0114 23:47:24.301280 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-68w75" podUID="7928b74a-e68d-4722-9a18-4a12587c5970" Jan 14 23:47:26.301714 kubelet[2831]: E0114 23:47:26.301545 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8479c65bc7-v2wlc" podUID="5ba12bc6-2080-48f6-9bf7-54c301828a15" Jan 14 23:47:27.910462 systemd[1]: cri-containerd-b2500fea6778dcb71acf02388fd3279d8f9e313cf37172e1551e0b9279ef8dfc.scope: Deactivated successfully. Jan 14 23:47:27.911438 systemd[1]: cri-containerd-b2500fea6778dcb71acf02388fd3279d8f9e313cf37172e1551e0b9279ef8dfc.scope: Consumed 34.807s CPU time, 119M memory peak. Jan 14 23:47:27.913000 audit: BPF prog-id=144 op=UNLOAD Jan 14 23:47:27.916066 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 23:47:27.916135 kernel: audit: type=1334 audit(1768434447.913:868): prog-id=144 op=UNLOAD Jan 14 23:47:27.916160 kernel: audit: type=1334 audit(1768434447.913:869): prog-id=148 op=UNLOAD Jan 14 23:47:27.913000 audit: BPF prog-id=148 op=UNLOAD Jan 14 23:47:27.918570 containerd[1607]: time="2026-01-14T23:47:27.918513414Z" level=info msg="received container exit event container_id:\"b2500fea6778dcb71acf02388fd3279d8f9e313cf37172e1551e0b9279ef8dfc\" id:\"b2500fea6778dcb71acf02388fd3279d8f9e313cf37172e1551e0b9279ef8dfc\" pid:3149 exit_status:1 exited_at:{seconds:1768434447 nanos:917141932}" Jan 14 23:47:27.946182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2500fea6778dcb71acf02388fd3279d8f9e313cf37172e1551e0b9279ef8dfc-rootfs.mount: Deactivated successfully. Jan 14 23:47:28.037885 kubelet[2831]: I0114 23:47:28.037810 2831 scope.go:117] "RemoveContainer" containerID="b2500fea6778dcb71acf02388fd3279d8f9e313cf37172e1551e0b9279ef8dfc" Jan 14 23:47:28.048957 containerd[1607]: time="2026-01-14T23:47:28.048894305Z" level=info msg="CreateContainer within sandbox \"09bbc7c614b4bad748b999139d1bd75c8a9270b96a4644f01813b1282af07872\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 14 23:47:28.064470 containerd[1607]: time="2026-01-14T23:47:28.063783884Z" level=info msg="Container 052b4c0ba5b80c7383d1f7cd472701402eedad4a215cd499a64dfd48ac14f85b: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:47:28.070048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2676551897.mount: Deactivated successfully. 
Jan 14 23:47:28.074801 containerd[1607]: time="2026-01-14T23:47:28.074680139Z" level=info msg="CreateContainer within sandbox \"09bbc7c614b4bad748b999139d1bd75c8a9270b96a4644f01813b1282af07872\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"052b4c0ba5b80c7383d1f7cd472701402eedad4a215cd499a64dfd48ac14f85b\"" Jan 14 23:47:28.075543 containerd[1607]: time="2026-01-14T23:47:28.075335359Z" level=info msg="StartContainer for \"052b4c0ba5b80c7383d1f7cd472701402eedad4a215cd499a64dfd48ac14f85b\"" Jan 14 23:47:28.078739 containerd[1607]: time="2026-01-14T23:47:28.078703383Z" level=info msg="connecting to shim 052b4c0ba5b80c7383d1f7cd472701402eedad4a215cd499a64dfd48ac14f85b" address="unix:///run/containerd/s/b0e190d94e4ff433c69c720286df0cd90b94d19c2e6f04c071af3d46372abc0d" protocol=ttrpc version=3 Jan 14 23:47:28.107992 systemd[1]: Started cri-containerd-052b4c0ba5b80c7383d1f7cd472701402eedad4a215cd499a64dfd48ac14f85b.scope - libcontainer container 052b4c0ba5b80c7383d1f7cd472701402eedad4a215cd499a64dfd48ac14f85b. Jan 14 23:47:28.125086 kernel: audit: type=1334 audit(1768434448.121:870): prog-id=254 op=LOAD Jan 14 23:47:28.125231 kernel: audit: type=1334 audit(1768434448.121:871): prog-id=255 op=LOAD Jan 14 23:47:28.121000 audit: BPF prog-id=254 op=LOAD Jan 14 23:47:28.121000 audit: BPF prog-id=255 op=LOAD Jan 14 23:47:28.121000 audit[5230]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=2881 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:28.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035326234633062613562383063373338336431663763643437323730 Jan 14 23:47:28.132417 kernel: audit: type=1300 audit(1768434448.121:871): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=2881 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:28.132500 kernel: audit: type=1327 audit(1768434448.121:871): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035326234633062613562383063373338336431663763643437323730 Jan 14 23:47:28.122000 audit: BPF prog-id=255 op=UNLOAD Jan 14 23:47:28.122000 audit[5230]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2881 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:28.133702 kernel: audit: type=1334 audit(1768434448.122:872): prog-id=255 op=UNLOAD Jan 14 23:47:28.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035326234633062613562383063373338336431663763643437323730 Jan 14 23:47:28.139397 kernel: audit: type=1300 audit(1768434448.122:872): arch=c00000b7 syscall=57 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2881 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:28.139432 kernel: audit: type=1327 audit(1768434448.122:872): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035326234633062613562383063373338336431663763643437323730 Jan 14 23:47:28.122000 audit: BPF prog-id=256 op=LOAD Jan 14 23:47:28.122000 audit[5230]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=2881 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:28.140625 kernel: audit: type=1334 audit(1768434448.122:873): prog-id=256 op=LOAD Jan 14 23:47:28.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035326234633062613562383063373338336431663763643437323730 Jan 14 23:47:28.122000 audit: BPF prog-id=257 op=LOAD Jan 14 23:47:28.122000 audit[5230]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=2881 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:28.122000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035326234633062613562383063373338336431663763643437323730 Jan 14 23:47:28.124000 audit: BPF prog-id=257 op=UNLOAD Jan 14 23:47:28.124000 audit[5230]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2881 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:28.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035326234633062613562383063373338336431663763643437323730 Jan 14 23:47:28.124000 audit: BPF prog-id=256 op=UNLOAD Jan 14 23:47:28.124000 audit[5230]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2881 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:28.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035326234633062613562383063373338336431663763643437323730 Jan 14 23:47:28.124000 audit: BPF prog-id=258 op=LOAD Jan 14 23:47:28.124000 audit[5230]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 
a1=400010c648 a2=98 a3=0 items=0 ppid=2881 pid=5230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:28.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035326234633062613562383063373338336431663763643437323730 Jan 14 23:47:28.158847 containerd[1607]: time="2026-01-14T23:47:28.158769327Z" level=info msg="StartContainer for \"052b4c0ba5b80c7383d1f7cd472701402eedad4a215cd499a64dfd48ac14f85b\" returns successfully" Jan 14 23:47:28.325244 kubelet[2831]: E0114 23:47:28.325035 2831 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52872->10.0.0.2:2379: read: connection timed out" Jan 14 23:47:28.762507 systemd[1]: cri-containerd-196821eaca69ad819f898a0d71f8e94824660efe6af136fa62f3bbd44172edbb.scope: Deactivated successfully. Jan 14 23:47:28.764703 systemd[1]: cri-containerd-196821eaca69ad819f898a0d71f8e94824660efe6af136fa62f3bbd44172edbb.scope: Consumed 4.252s CPU time, 64.7M memory peak, 2.8M read from disk. Jan 14 23:47:28.763000 audit: BPF prog-id=259 op=LOAD Jan 14 23:47:28.763000 audit: BPF prog-id=86 op=UNLOAD Jan 14 23:47:28.765000 audit: BPF prog-id=96 op=UNLOAD Jan 14 23:47:28.765000 audit: BPF prog-id=100 op=UNLOAD Jan 14 23:47:28.772519 containerd[1607]: time="2026-01-14T23:47:28.772365975Z" level=info msg="received container exit event container_id:\"196821eaca69ad819f898a0d71f8e94824660efe6af136fa62f3bbd44172edbb\" id:\"196821eaca69ad819f898a0d71f8e94824660efe6af136fa62f3bbd44172edbb\" pid:2658 exit_status:1 exited_at:{seconds:1768434448 nanos:771806078}" Jan 14 23:47:28.802518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-196821eaca69ad819f898a0d71f8e94824660efe6af136fa62f3bbd44172edbb-rootfs.mount: Deactivated successfully. 
Jan 14 23:47:29.045328 kubelet[2831]: I0114 23:47:29.045247 2831 scope.go:117] "RemoveContainer" containerID="196821eaca69ad819f898a0d71f8e94824660efe6af136fa62f3bbd44172edbb" Jan 14 23:47:29.048009 containerd[1607]: time="2026-01-14T23:47:29.047953900Z" level=info msg="CreateContainer within sandbox \"68a346027e2efd09f6779aa94681d51500f86b92401e5a405c67f4e1754d7d04\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 14 23:47:29.058732 containerd[1607]: time="2026-01-14T23:47:29.057498674Z" level=info msg="Container 2c3fdd4ff60edd64d2721abf227d3595417e703297617052e6fac729b8e42420: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:47:29.068483 containerd[1607]: time="2026-01-14T23:47:29.068430131Z" level=info msg="CreateContainer within sandbox \"68a346027e2efd09f6779aa94681d51500f86b92401e5a405c67f4e1754d7d04\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2c3fdd4ff60edd64d2721abf227d3595417e703297617052e6fac729b8e42420\"" Jan 14 23:47:29.069438 containerd[1607]: time="2026-01-14T23:47:29.069369720Z" level=info msg="StartContainer for \"2c3fdd4ff60edd64d2721abf227d3595417e703297617052e6fac729b8e42420\"" Jan 14 23:47:29.071922 containerd[1607]: time="2026-01-14T23:47:29.071867957Z" level=info msg="connecting to shim 2c3fdd4ff60edd64d2721abf227d3595417e703297617052e6fac729b8e42420" address="unix:///run/containerd/s/1a6009986cf38bfc4108b968607fbf82fbdff7867d8fdb4643e4eeaa2f7d48cd" protocol=ttrpc version=3 Jan 14 23:47:29.099052 systemd[1]: Started cri-containerd-2c3fdd4ff60edd64d2721abf227d3595417e703297617052e6fac729b8e42420.scope - libcontainer container 2c3fdd4ff60edd64d2721abf227d3595417e703297617052e6fac729b8e42420. Jan 14 23:47:29.112000 audit: BPF prog-id=260 op=LOAD Jan 14 23:47:29.113000 audit: BPF prog-id=261 op=LOAD Jan 14 23:47:29.113000 audit[5273]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=2549 pid=5273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:29.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263336664643466663630656464363464323732316162663232376433 Jan 14 23:47:29.113000 audit: BPF prog-id=261 op=UNLOAD Jan 14 23:47:29.113000 audit[5273]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2549 pid=5273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:29.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263336664643466663630656464363464323732316162663232376433 Jan 14 23:47:29.113000 audit: BPF prog-id=262 op=LOAD Jan 14 23:47:29.113000 audit[5273]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=2549 pid=5273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:29.113000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263336664643466663630656464363464323732316162663232376433 Jan 14 23:47:29.113000 audit: BPF prog-id=263 op=LOAD Jan 14 23:47:29.113000 audit[5273]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=2549 pid=5273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:29.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263336664643466663630656464363464323732316162663232376433 Jan 14 23:47:29.113000 audit: BPF prog-id=263 op=UNLOAD Jan 14 23:47:29.113000 audit[5273]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2549 pid=5273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:29.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263336664643466663630656464363464323732316162663232376433 Jan 14 23:47:29.113000 audit: BPF prog-id=262 op=UNLOAD Jan 14 23:47:29.113000 audit[5273]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2549 pid=5273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:29.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263336664643466663630656464363464323732316162663232376433 Jan 14 23:47:29.113000 audit: BPF prog-id=264 op=LOAD Jan 14 23:47:29.113000 audit[5273]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=2549 pid=5273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:29.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263336664643466663630656464363464323732316162663232376433 Jan 14 23:47:29.149004 containerd[1607]: time="2026-01-14T23:47:29.148960652Z" level=info msg="StartContainer for \"2c3fdd4ff60edd64d2721abf227d3595417e703297617052e6fac729b8e42420\" returns successfully" Jan 14 23:47:29.732998 kubelet[2831]: E0114 23:47:29.732836 2831 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52674->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-apiserver-866b97bccb-lgshp.188abd9cf6d636d2 calico-apiserver 1680 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-866b97bccb-lgshp,UID:a676c5b0-5016-4d83-8418-6221cf68e214,APIVersion:v1,ResourceVersion:795,FieldPath:spec.containers{calico-apiserver},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-abf6d467b1,},FirstTimestamp:2026-01-14 23:44:45 +0000 UTC,LastTimestamp:2026-01-14 23:47:19.30121463 +0000 UTC m=+207.125460950,Count:11,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-abf6d467b1,}" Jan 14 23:47:30.305436 kubelet[2831]: E0114 23:47:30.305369 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866b97bccb-lgshp" podUID="a676c5b0-5016-4d83-8418-6221cf68e214" Jan 14 23:47:30.693746 kubelet[2831]: I0114 23:47:30.693614 2831 status_manager.go:890] "Failed to get status for pod" podUID="feb4cdd236a024cac8c8a6c3cf81e26a" pod="kube-system/kube-apiserver-ci-4515-1-0-n-abf6d467b1" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:52788->10.0.0.2:2379: read: connection timed out" Jan 14 23:47:32.302772 kubelet[2831]: E0114 23:47:32.302623 2831 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-64f4bfdcf7-qqvhh" podUID="462591b6-7c04-4bb8-91df-676e6e4e63fa"